Donald Trump’s campaign team has accused the European Union (EU) of meddling in the US presidential election. Shortly before a two-hour online interview between Trump and Elon Musk, owner of Twitter (now known as X), EU Commissioner for the Internal Market Thierry Breton posted that the platform must comply with the EU’s Digital Services Act, which obliges tech companies to put content moderation measures in place.
Breton wrote that such responsibilities grow as audience numbers rise and attached a copy of a letter he had written to Musk. X CEO Linda Yaccarino hit back, saying Breton was trying to “stretch a law intended to apply in Europe to political activities in the US.”
Trump campaign spokesman Steven Cheung was more direct in his criticism of Breton, saying the EU should mind its own business and stop meddling in American elections. “The European Union is an enemy of free speech and has no authority of any kind to dictate how we campaign,” Cheung added.
The Digital Services Act was introduced in 2022 and took effect last August. It places obligations on the world’s largest online platforms, including Amazon, Google, and Facebook, and holds them accountable for the content they host. The legislation requires companies to remove illegal goods, services, and content from their sites and to give users a simple way to report them.
It also bans targeted advertising based on sexual orientation, religion, ethnicity, or political beliefs and limits what children can see.
Some companies, including Google and Meta, the owner of Facebook, responded quickly and began implementing the law’s requirements. Platforms that fail to comply risk fines of up to 6% of their global annual revenue.
Amid criticism that the law curtails free speech, the European Commission has insisted it struck the right balance. It says it is committed to freedom of expression and that the legislation protects users from “government interference in people’s freedom of expression and information.” For instance, it requires platforms to let users appeal against removed content, obliges tech companies to be transparent about how they moderate content, and “introduces new tools to assess and rectify biases in recommender systems.”