Taming the Big Tech tiger

by Claudia Prettner, Legal/Policy Adviser – Technology, Data Protection, Human Rights, Amnesty International

This opinion piece was originally published on Social Europe.

New EU digital rules need to tackle the business model of surveillance capitalism.

Last December the European Commission proposed new rules to govern the digital space—the Digital Services Act Package—to tackle the power of Big Tech and ensure human rights are protected online. The rules are a huge opportunity to reshape the digital world. But ultimately the proposals fail to strike at the heart of the companies’ surveillance-based business model.

It has become virtually impossible to escape technology giants such as Google and Facebook when accessing the internet and information online. Their power has been entrenched by network and lock-in effects, rising barriers to entry, the leveraging of dominance in one sector to build dominance in another, the downranking of services offered by would-be competitors and buy-and-bury strategies.

Given their dominance over our digital lives and their business models, based on invasive surveillance and the tracking of our every move, the platforms pose major threats to human rights, including the rights to privacy, freedom of expression and non-discrimination.

Sources of dysfunction

The health of the digital space has been degenerating over recent years. Two years ago, for the 30th birthday of the world wide web, its inventor, Sir Tim Berners-Lee, wrote about three sources of dysfunction on the web: deliberate, malicious intent, such as criminal behaviour and online harassment; advertising-based revenue models which reward ‘clickbait’ and the spread of misinformation; and polarisation and poor quality in online discourse. Last month, he talked about the need for social networks ‘where bad things happen less’.

Shoshana Zuboff, Harvard professor and author of The Age of Surveillance Capitalism, recently spoke alongside Big Tech’s biggest foe in the European Union, Margrethe Vestager, the executive vice-president of the European Commission whose brief is ‘a Europe fit for the digital age’. Zuboff described how technology giants were ‘creating unprecedented concentrations of knowledge—and that gives them vast power’, while Vestager acknowledged the polarisation to which Big Tech had contributed.

This knowledge and power have been gained through the harvesting of personal data on a massive scale: data which Big Tech uses to create predictions about every aspect of our lives and which it, in turn, translates into profit. To show us more advertisements and increase its revenue, its algorithms are designed to maximise views, clicks, ‘likes’ and shares, and to keep us staring at screens. This often amplifies disinformation, polarisation and advocacy of hatred, because divisive and sensationalist content is more likely to drive engagement.

Two legislative proposals

The Digital Services Act package consists of two legislative proposals: the Digital Services Act (DSA) and the Digital Markets Act (DMA).

The DSA sets out obligations for providers of intermediary services (internet-access providers, web-hosting services, social-media platforms) which vary according to their nature, size and impact on the online ecosystem. First, it establishes clearer rules for tackling illegal content online, including content-moderation obligations. While it maintains the existing liability exemptions for online intermediaries, the DSA introduces a new obligation on hosting-service providers to put in place ‘notice and action’ mechanisms, so that they can be notified of potentially illegal content on their services.

The DSA also imposes transparency obligations on intermediaries. Online platforms which display advertisements must ensure users can identify, in real time, that what they are seeing is an advertisement, on whose behalf it is displayed and meaningful information about the main targeting parameters used. Furthermore, very large online platforms (VLOPs), those reaching at least 45 million users within the EU, must make their advertising repositories publicly available.

Finally, VLOPs must assess ‘any significant systemic risks stemming from the functioning and use made of their services’, including negative effects on fundamental rights. They must also put in place risk-mitigation measures, such as adapting their content-moderation or recommender systems (the algorithms which determine the content users see online) and limiting the display of advertisements. And they must explain in their terms and conditions how their recommender systems work and what options users have to modify them, with at least one option not based on profiling.

The DMA, on the other hand, is concerned with tackling power imbalances and unfair business practices by the ‘gatekeepers’ and opening up platform markets. It imposes ‘do’s and don’ts’ on providers of core platform services—search engines, social networking, video-sharing—which have a significant impact on the internal market, act as gatekeepers between business and end users and enjoy an entrenched and durable position.

For instance, they must allow third-party providers of ancillary services (such as identification or payment) access to the same operating-system, hardware or software features available to the gatekeeper. Furthermore, gatekeepers are prohibited from combining personal data from different sources and from signing users in to different services, unless the user has been presented with that specific choice and has consented.

Systemic threats

These rules are a good start but they do not go far enough to curb the systemic human-rights threats posed by Big Tech.

The rules on notice and action mechanisms mean the provider may lose its immunity from liability unless it acts expeditiously in response to such notices by removing or disabling access to the illegal content. This could incentivise platforms to over-remove content, thereby risking the curtailment of freedom of expression.

For instance, if a user wrongfully notifies a platform of allegedly illegal content, the platform might react in a precautionary way and take the content down, simply to avoid any liability. These rules should be updated to reflect the principle that, absent a judicial order, intermediaries should not bear liability for failing to remove content of which they are unaware and which they have not modified.

Furthermore, while increased transparency about platforms’ practices is welcome, several of the DSA’s obligations have already been (at least partially) met through voluntary initiatives, such as transparency reports and online advertising libraries, or are already written into law (for example, rules on online-advertising transparency). Transparency obligations alone are in any event insufficient to tackle the harms caused by the platforms’ intrusive profiling and their business model.

Radical overhaul

What we need is a radical overhaul of Big Tech’s surveillance-based model. A first step could be strict limits on targeted advertising based on the processing of personal data, with a shift to less intrusive forms, such as contextual advertising, which do not require the tracking of user behaviour, in line with calls from the European Parliament and the European Data Protection Supervisor. In the same spirit, the DSA should make profiling-free recommender systems the norm, not merely an option.

The obligations on VLOPs to manage systemic risks could prove revolutionary, but they should be extended to compulsory and effective human-rights due diligence. This would require platforms not only to identify and mitigate but also to cease and prevent any human-rights abuses linked to their operations and underlying business model. Platforms must be held liable when they have caused or contributed to human-rights harms, or when they have failed to carry out such due diligence.

The DMA is too focused on the relationship between online platforms and their business users and should take end users’ rights more into account. It must oblige gatekeepers to provide cross-platform interoperability. This would allow users to communicate across different core services and platforms without needing to sign up to a gatekeeper, thereby genuinely opening up the market and allowing alternative platforms, with more rights-respecting terms and conditions, to emerge.

The Digital Services Act package gives us a once-in-a-generation opportunity to steer the ship back on course. If the EU really wants to tackle the power of Big Tech and end its human-rights abuses, it must strike at the core of the companies’ business model.