
As we all power through the pandemic and adapt to new ways of living and working, one thing has become clear: the digital space is indispensable – much more so than we had previously understood. Coupled with this, the pervasiveness of hate and anti-Semitism online has become increasingly hard to ignore. Indeed, COVID-19 has both accelerated anti-Semitic conspiracy ideologies and put a spotlight on already existing issues affecting the sense of security and dignity of Jewish citizens across the world. 

If we are to find a silver lining, it’s that the urgency of the matter has finally elevated online hate to policy-makers’ desks in a serious manner and has forced platforms to take some modest steps to address the issue – see, for instance, Facebook’s new policy of removing Holocaust denial content. It’s not the first time the digital space has received scrutiny – legislative and otherwise – but the current tone of the debate marks a clear shift.

Transatlantic Trends in Digital Governance 

When the E-Commerce Directive was adopted in 2000 to govern and harmonize the EU’s digital space, a great deal of attention was given to accommodating major online platforms and creating a liability shield for them as intermediaries. Twenty years later, confronted with the growing challenges of disinformation, conspiracy ideologies and hate speech, the EU’s discourse has rightly changed. In its recently concluded consultation for the upcoming Digital Services Act – legislation that will likely constitute the EU’s digital legal framework for at least the next decade – the focus has finally turned towards platform responsibility and user safety.
 
This mirrors the trajectory in the United States. Section 230 of the Communications Decency Act of 1996 essentially provided immunity from liability for intermediary platforms and their users. It is what allowed major platforms such as Facebook and Twitter to grow in the first place and to capture the public space to the extent that they have. Yet there is now bipartisan agreement on the need to reform the legislation to meet new challenges. What’s more, the platforms themselves seem to welcome official guidelines, given the immense pressure they have been under with regard to content moderation.

In the EU, the largest IT companies – Facebook, Microsoft, Twitter, Google, and more recently Instagram, Snapchat and TikTok – have taken significant steps to tackle illegal hate speech by signing onto a voluntary Code of Conduct on countering illegal hate speech online. In 2016 this was a milestone achievement; today, with the opportunity for reform in front of us, we must move beyond illegal content and pay dedicated attention to the sheer volume of legal but harmful content online. More importantly, we cannot rely on voluntary compliance by platforms – however positive the results have been thus far – but must ensure that efforts are coordinated and harmonized within the upcoming legislation, and that commitments are turned into legally binding obligations.

A positive development in this sense has been the proposal of the European Commission to categorize illegal hate speech and hate crimes as euro-crimes. This would allow for an EU-wide harmonization of rules and standards for what is clearly a cross-border issue. It is important that EU member states approve this proposal unanimously. 

Tackling Anti-Semitism Online 

As attention now turns to user safety and harmful content, tackling anti-Semitism must be a core part of this shift. For one thing, the EU’s vocal efforts to tackle anti-Semitism – efforts with real, tangible effects and national-level echoes – must also permeate the online space. More fundamentally, anti-Semitism is inseparably linked to the main challenges faced in the online space today, such as conspiracy ideologies, radicalization, far-right and Islamist extremism and COVID-19 misinformation. Understanding how to tackle anti-Semitism and, indeed, generating the political will to do so will not only help rid the online space of anti-Jewish hatred, but will also contribute towards addressing the broader issues that it underscores.

Civil society has an important role to play here in articulating the needs of the community, educating policy-makers, platforms and other relevant stakeholders such as the Inter-Parliamentary Taskforce on Online Anti-Semitism, and proposing innovative solutions. To that end, B’nai B’rith International has contributed to a joint policy position by major Jewish organizations – a 10-point set of recommendations to be implemented as part of the future EU Digital Services Act (see www.deleteantisemitism.org). If we are to make significant progress in addressing online anti-Semitism, we cannot shy away from putting clear legislative measures in place. We need regular and transparent data collection and analysis to better understand the spread of hate online; platforms must make their algorithms transparent, so that scrutiny prevents them from steering users towards extremist content; we must disincentivize profiting from harmful content; and community standards must state clearly what constitutes anti-Semitic content.

Beyond this, we must acknowledge that content falling short of immediate incitement to violence or outright Holocaust denial – and thus constituting legal speech – still poses serious threats: it contributes to radicalization, feeds conspiracy ideologies that often spill over into the physical world, and chips away at the sense of safety to which those targeted are fundamentally entitled. Innovative solutions are required to facilitate reporting, early detection and communication with law enforcement in cases that fall outside the scope of illegal speech. In this context, the anonymity granted by the online space is a particularly thorny issue – a central aspect that supra-national bodies, governments and platforms must address, whether through law or self-regulation.

Grappling with Difficult Questions 

The conversation around how best to govern the digital space is certainly a difficult one. Questions about freedom of expression, concerns over government overreach and skepticism about censorship at the hands of private platforms are all justified. But spare a thought for the silencing effect that hate speech has on those targeted – on their freedom of expression, of practice and of participation in public life. In the physical space, it goes without saying that we put regulations in place to facilitate our interactions and foster good governance. The digital space should be no different.


Alina Bricman is the Director of EU Affairs at B’nai B’rith International. She previously served as president of the European Union of Jewish Students (EUJS) from 2017 to 2019 and worked for the Representation of the European Commission in Romania and for the Median Research Centre, a Romanian civil society NGO focused on civic engagement and combating xenophobia. She studied political science at the National School of Political and Administrative Studies in Bucharest and at the Central European University in Budapest.