In negotiating the Digital Services Act, EU law-makers balanced tackling disinformation with protecting free speech. The Commission’s last-minute proposal for stricter regulation of tech platforms during crises undermines this balance.
Online
disinformation – material propagated with the intention to mislead – is a
serious threat to the EU. It has contributed to many of the EU’s recent
challenges: panic about immigration, the rise of the far right and
left, Islamophobia, vaccine hesitancy and Brexit. Europe’s rivals, in
particular Russia and China,
use disinformation campaigns as low-cost and low-risk methods to foment
dissent and promote their preferred narratives about these issues in
the EU. Russia's invasion of Ukraine has become the latest online battleground: platforms like Twitter, YouTube, TikTok and Instagram have been flooded with Putin's lies about the 'Nazification' of Ukraine.
This
flood of disinformation comes as the EU is finalising the Digital
Services Act (DSA), a major new law designed to regulate online
platforms, including social media platforms like Facebook, Twitter and
TikTok, which are used to disseminate disinformation. The DSA requires large platforms to be more transparent about, and accountable for, how they tackle disinformation. As law-makers finalise the DSA, the European Commission
has begun insisting it needs stronger powers to direct how platforms
tackle disinformation during crises. These powers would undermine the
careful compromises law-makers have already agreed in the DSA – and risk
making platforms’ responses to disinformation worse.
In the EU,
spreading false or misleading information is not generally illegal.
Freedom of expression includes the right to express incorrect views. And
the distinction between ‘fake news’ and ‘legitimate opinion’ is often
contested. Despite the EU’s recent decision to ban Russian media
outlets Russia Today and Sputnik from broadcasting in the EU,
policy-makers generally recognise that simply banning disinformation is
not a realistic or desirable option. Instead, the EU has sought to curb
the impact of lies peddled online in ways which preserve free speech.
For example, the EU’s 2018 Action Plan against Disinformation
focused on identifying disinformation, supporting independent media and
fact-checkers, and promoting media literacy. The EU’s European External
Action Service (EEAS) also set up strategic communications divisions,
known as the StratCom Task Forces. The 2020 European Democracy Action Plan, meanwhile, established a framework for collecting evidence about foreign disinformation campaigns. As the Kremlin propagated lies about its invasion of Ukraine, for example, this evidence allowed the EU High Representative for Foreign Affairs and Security Policy to name these tactics publicly and correct false claims quickly.
Disinformation is hardly limited to the online world. In fact, the most polarised people are less likely to use social media: being older on average, they rely more on newspapers and television. Most social media users, by contrast, are exposed to a wide range of opinions. But, perhaps because regulating foreign tech firms is easier than tackling problems with some of the EU's own media outlets, law-makers remain focused on online platforms.
Though they each have different acceptable use policies, online
platforms do not typically ban all disinformation outright, both because identifying misleading material is difficult and because they want to protect freedom of speech. More important is how disinformation is amplified. Rather than
showing a chronological view of posts, most platforms now use
personalised ranking algorithms that show users the content they are most likely to find relevant and engaging. Disinformation is often crafted to exploit these algorithms, using emotive material to attract user engagement. 'Troll factories',
like Russia’s so-called Internet Research Agency,
also co-ordinate many different user accounts to like and share the same material, tricking ranking algorithms into treating it as genuinely engaging, so that platforms show it to more users.
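To see why this manipulation works, consider a deliberately simplified sketch of engagement-based ranking. The scoring rule, the numbers and the post names are all invented for illustration; real platform rankers are far more complex and proprietary.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int    # counts include any fake engagement from co-ordinated accounts
    shares: int
    views: int

def engagement_score(post: Post) -> float:
    """Toy ranking signal: interactions per view, with shares weighted
    more heavily. No platform publishes its real formula; this stands in
    for any ranker that rewards engagement."""
    if post.views == 0:
        return 0.0
    return (post.likes + 2 * post.shares) / post.views

# An ordinary post and a disinformation post with identical organic engagement.
organic = Post("local news update", likes=50, shares=5, views=2000)
boosted = Post("emotive false claim", likes=50, shares=5, views=2000)

# 500 co-ordinated troll-farm accounts like and share the false claim...
boosted.likes += 500
boosted.shares += 500

# ...so the ranker now treats it as the more 'engaging' post and
# surfaces it to more genuine users.
feed = sorted([organic, boosted], key=engagement_score, reverse=True)
print([p.text for p in feed])  # ['emotive false claim', 'local news update']
```

Even this crude model shows how a few hundred co-ordinated accounts can out-rank organic content; real troll farms operate at a far larger scale.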
Currently, the EU primarily relies on platforms regulating themselves to address these problems. Self-regulation focuses on the easier issues, such as policing online advertising and increasing the prominence of
reputable news sources. But even though the EU is trying to strengthen
self-regulation, voluntary steps will probably remain insufficient. For
example, self-regulation has not led online platforms to devote enough resources to protecting EU users: Facebook has chosen to deploy more content moderators in the US, Brazil and India than in EU member-states. Disinformation in the largest EU languages,
including Italian, French, Portuguese and Spanish, is far less likely
to be quickly assessed than content in English. This is a concern
because many disinformation campaigns are deployed rapidly and locally.
For example, in recent weeks, Russia has been particularly intent on
stoking anti-Ukrainian sentiment in eastern EU member-states such as Poland.