NEW YORK — Reacting to Meta’s announcement regarding changes to content moderation on its platforms, Yfat Barak-Cheney, Executive Director of the World Jewish Congress Technology and Human Rights Institute (TecHRI), issued the following comment:
“We have long been outspoken about the limitations of fact-checking systems, which have often been influenced by political biases and are far from ideal. However, the introduction of Meta’s new community notes feature must be approached with great caution. Platforms like X and Wikipedia, which employ similar user-driven concepts, have demonstrated how easily misinformation and disinformation can be manipulated, and how such systems place the onus on vulnerable communities to report and correct information online.
In an online environment already marked by hostility, we are deeply concerned that the reduction of protections and clear guidelines will open the floodgates to content that fuels real-world threats, including violent acts targeting Jewish communities and individuals.
Meta has made important strides in recent years to make its platforms safer, and it is critical that this work continues. Rolling back these efforts risks undoing hard-won progress at a time when vigilance against online hate and antisemitism is needed more than ever.”
From Individual Posts to Behavioral Patterns:
For years, the focus of content moderation has been on identifying and removing specific pieces of harmful content. However, this approach often misses the broader patterns that lead to radicalization and violence. Increasingly, platforms are shifting their efforts to examine behavioral patterns: how users interact, form groups, and escalate harmful ideologies over time.

This evolution is crucial. Hate is rarely an isolated incident. It’s a societal process, one that thrives within groups and networks. By identifying patterns earlier, platforms can prevent harm before it fully materializes. For 2025, expect continued advancements in this area, with research into antisemitism serving as a key test case for understanding these dynamics. Antisemitism often underpins broader hate movements, making it a crucial lens for early intervention strategies.
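To make the shift concrete, here is a minimal Python sketch of pattern-level moderation. The seven-day window, the escalation threshold, and the `is_flagged` signal are illustrative assumptions, not any platform’s actual policy:

```python
from collections import defaultdict, deque
from datetime import datetime, timedelta

# Illustrative sketch: instead of judging each post in isolation, track how
# often a user's posts are flagged within a sliding window. The threshold
# and window size below are placeholder assumptions.

WINDOW = timedelta(days=7)
ESCALATION_THRESHOLD = 3  # flagged posts per window that trigger review

flag_history = defaultdict(deque)  # user_id -> timestamps of flagged posts

def record_post(user_id: str, timestamp: datetime, is_flagged: bool) -> bool:
    """Return True when a user's pattern of behavior, rather than any single
    post, warrants escalation to human review."""
    history = flag_history[user_id]
    if is_flagged:
        history.append(timestamp)
    # Drop flags that have aged out of the sliding window.
    while history and timestamp - history[0] > WINDOW:
        history.popleft()
    return len(history) >= ESCALATION_THRESHOLD
```

The point of the design is that no single post needs to cross a line: the signal comes from repetition and escalation over time, which a per-post classifier never sees.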
The Global Free Speech vs. Content Moderation Divide:
The tug-of-war between free speech and content regulation is intensifying. In the U.S., the pendulum is swinging toward reduced moderation, fueled by political and cultural trends. Platforms like Twitter (now X) and Truth Social reflect this shift, creating a freer but riskier online environment. Yesterday’s announcements by Meta are another stride in this direction.
Meanwhile, Europe is charting the opposite course. The Digital Services Act (DSA) imposes strict requirements on platforms to tackle illegal hate speech, with an emphasis on transparency and accountability. This growing divergence creates tension for global companies, which may adopt inconsistent standards across regions. For researchers, policymakers, and activists, this fragmentation makes it harder to push for meaningful global action against online hate.
The year ahead will likely see this divide deepen. The U.S. will continue to prioritize free speech, while Europe’s approach could expand as the DSA's enforcement unfolds. How platforms navigate this split—and how civil society engages—will define the boundaries of global content regulation in the near future.
Fragmentation and Localized Regulation:
Regulatory fragmentation isn’t just an international issue; it’s also a domestic one. In the U.S., states like California and Texas have taken it upon themselves to address content moderation, resulting in a patchwork of laws and regulations. While this offers opportunities to advance specific issues, such as Holocaust education and the designation of antisemitic acts as terrorism, it also creates a maze of rules for platforms to navigate.

This fragmentation benefits bad actors, who can exploit regulatory gaps to spread their messages with relative ease. For 2025, policymakers and researchers must focus on finding ways to close these loopholes while leveraging localized regulation to push forward meaningful standards.
Decentralized Platforms: A New Challenge:
The rise of decentralized and federated platforms such as Mastodon and Bluesky, alongside loosely moderated services like Telegram, presents a growing challenge for content moderation. These platforms, which often lack centralized oversight, allow users to set their own rules, or evade rules altogether. As a result, hate speech and extremist content migrate quickly from mainstream platforms to these less-regulated spaces.

The trend toward decentralization is likely to accelerate in 2025. To counter it, we need to equip individuals and communities with the tools to moderate content within their own digital spaces. This includes developing accessible lexicons for identifying antisemitism and other forms of hate, as well as creating user-friendly moderation tools for decentralized platforms.
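As a sketch of what such a community tool might look like, the following hypothetical Python filter matches incoming posts against a community-maintained lexicon and queues hits for human review. The lexicon entries here are placeholders; a real deployment would draw on an expert-curated antisemitism lexicon maintained by researchers:

```python
import re

# Hypothetical lexicon filter for a federated-instance moderator. The terms
# and categories are placeholders standing in for an expert-curated lexicon.
LEXICON = {
    "placeholder_slur": "hate speech",
    "placeholder_coded_phrase": "coded antisemitism",
}

def review_post(text: str) -> list[str]:
    """Return the lexicon categories matched in a post, for moderator review."""
    matches = []
    for term, category in LEXICON.items():
        if re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            matches.append(category)
    return matches

# Matched posts are queued for a human moderator rather than auto-removed,
# keeping the community, not an opaque system, in control of its own space.
if review_post("some incoming post text"):
    print("queue for moderator review")
```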
The Double-Edged Sword of AI:
Artificial intelligence (AI) is revolutionizing content moderation, offering tools to scale detection and enforcement. However, AI systems are far from perfect. They often struggle to identify the coded, nuanced nature of hate speech and antisemitism, especially as extremists become more sophisticated in masking their intent.

Generative AI adds another layer of complexity. In 2025, expect to see an increase in the use of AI to create hate-filled content, from memes and videos to fully scripted podcasts. To counter this, we must focus on refining AI systems to detect these threats while addressing inherent biases that could hinder their effectiveness.
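To illustrate why coded language defeats surface-level detection, here is a minimal scikit-learn sketch with placeholder training data. A classifier built on word-level features gives a coded phrase that shares no vocabulary with its training set a score near the base rate, precisely the gap that more context-aware models are meant to close:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Minimal baseline classifier over word-level features. The training
# examples are innocuous placeholders, not real data.
train_texts = [
    "explicit hateful phrase placeholder",
    "another explicit placeholder",
    "friendly benign message",
    "neutral news headline placeholder",
]
train_labels = [1, 1, 0, 0]  # 1 = hateful, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# A coded variant that reuses none of the training vocabulary produces an
# all-zero feature vector, so the model falls back to its base rate: the
# exact failure mode that lets masked intent slip through.
print(model.predict_proba(["coded euphemism placeholder"]))
```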
A Call for Collaboration:
As we face these trends, one thing is clear: Collaboration is key. Addressing online hate requires a united effort from researchers, policymakers, platforms, and civil society. The intersection of technological innovation, legal frameworks, and societal change is a complex space, but with coordinated action, we can begin to make real progress.
The challenges of 2025 may seem daunting, but they also present opportunities. By staying ahead of these trends, we can build a safer, more inclusive digital landscape—one where hate has no place to flourish.
Keep track of WJC's fight against online hate through the WJC Technology and Human Rights Institute’s Blog.