Last week, I had the pleasure of attending a pre-launch event for TikTok’s Footnotes. Launched on Wednesday, Footnotes makes TikTok the latest entry in a growing trend of social platforms embracing crowd-sourced context tools to address misinformation. While TikTok presents Footnotes as an addition to its moderation toolkit, platforms like Meta (as of March 2025) and X/Twitter (since 2021) have already shifted away from professional fact-checking entirely in favor of user-generated “Community Notes.”
These systems are being marketed as solutions to the problems inherent in traditional fact-checking. Meta, for instance, announced its move to Community Notes as a way to “reduce bias.” By March, the system had gone live in the U.S., with all professional fact-checkers removed.
Given that 54% of US adults get news from social media sites, and that these changes reshape content moderation practices across all issues, we at the WJC Institute for Technology and Human Rights (TecHRI) are watching this shift closely. We are especially concerned about the impact it will have on Jewish social media users and other minority communities.
What are Community Notes?
Community Notes are public, crowd-sourced annotations meant to add context to potentially misleading posts. A closed pool of selected users spends time researching, writing, and submitting notes, as well as rating other users’ notes, and an algorithm determines which note, if any, will be made visible under the original post, based on how “helpful” users have rated it. The “bridging algorithm” the platforms use requires that users who previously disagreed on the “helpfulness” of other notes now agree that a note is “helpful” in order for it to be posted. Meta based its algorithm on X’s. There is, of course, no transparency about how exactly this is decided.
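For readers who want a feel for how that works, here is a minimal, illustrative sketch of the matrix-factorization idea behind X’s open-source bridging scorer, which Meta’s system reportedly follows: each rater and each note gets a latent “viewpoint” factor plus an intercept, and only a note whose intercept stays high after the factors explain away like-minded agreement is treated as helpful. The hyperparameters and the roughly 0.4 publication threshold mentioned in the comments are simplifications for illustration, not the platforms’ production settings.

```python
# Minimal sketch of a "bridging" scorer in the spirit of X's open-source
# Community Notes algorithm: each rater and each note gets an intercept and a
# latent "viewpoint" factor. A note counts as helpful only when its intercept
# stays high after the viewpoint factors explain away like-minded agreement,
# i.e. when raters who usually disagree both rate it helpful.
# Hyperparameters below are illustrative, not the platforms' production values.
import numpy as np

def bridge_scores(ratings, n_raters, n_notes, dim=1, epochs=300, lr=0.05, reg=0.03):
    """ratings: iterable of (rater_id, note_id, value), value 1.0 = helpful, 0.0 = not.
    Returns one intercept per note; higher means more "bridged" helpfulness."""
    rng = np.random.default_rng(0)
    mu = 0.0
    rater_b, note_b = np.zeros(n_raters), np.zeros(n_notes)
    rater_f = rng.normal(0, 0.1, (n_raters, dim))
    note_f = rng.normal(0, 0.1, (n_notes, dim))
    for _ in range(epochs):
        for u, n, r in ratings:
            pred = mu + rater_b[u] + note_b[n] + rater_f[u] @ note_f[n]
            err = r - pred
            mu += lr * err
            rater_b[u] += lr * (err - reg * rater_b[u])
            note_b[n] += lr * (err - reg * note_b[n])
            rater_f[u], note_f[n] = (
                rater_f[u] + lr * (err * note_f[n] - reg * rater_f[u]),
                note_f[n] + lr * (err * rater_f[u] - reg * note_f[n]),
            )
    return note_b  # X reportedly publishes notes whose intercept clears a threshold (~0.4)
```

The upshot matches the description above: a note rated helpful only by one side ends up with a large factor and a small intercept and never shows, while a note endorsed by raters who usually disagree earns a high intercept and gets published.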
In theory, this creates a decentralized check on misinformation. In practice? Far from it.
Here are just some of our concerns.
Manipulation. As we’ve seen and reported with other user-based systems, there is ample room for manipulating such systems. Even if we assume the platforms have the best intentions, anything that lets users submit data, links, and ratings is open to users gaming it. Users “disagreeing” on notes can be as simple as a friend and me coordinating which posts we will work on. Even as the companies work to keep bots from joining as contributors, this is a real worry.
Users’ Time. Users admitted into the community notes programs are expected to spend their own time flagging misleading content (if they recognize it), researching it, and writing the community note. They are also required to read many other notes and grade them. While the interfaces are fairly simple, the time required is significant, and success is not guaranteed. A respected Washington Post columnist who tested the system said only 3 of the 65 notes he submitted were ever published, a success rate of roughly 5%. The Digital Democracy Institute of the Americas (DDIA) examined English- and Spanish-language contributions during the system’s first four years on X: the 82,800 notes published in English amounted to just 7.1% of all submitted notes. Our Jewish community users already struggle to report antisemitic hate speech they find online because it is time-consuming and the results are often frustrating; TecHRI built a whole system to make reporting easier for exactly that reason. Notes demand far more time, with few guarantees and unclear benefits.
Virality Time. By the time a note gets published, days have gone by; DDIA found the average to be 14 days. That is a lifetime in terms of social media virality. How fast would this system have to be to actually make a difference?
Original Post focus. While the platforms are working to ensure that notes also apply to reposts of the content, the information in a single post spreads across the platform in so many forms that a note published on the original post will probably have only a tiny effect.
Visibility of the note. Even assuming those who saw the original piece of content get a notification that a community note has been published about it, the chances they will go back and read it are, well, slim.
The contributors. It is unclear who the contributors are or will be. In contrast to ordinary social media users, who are supposed to be “real people,” contributors and their notes are kept anonymous. Moreover, U.S.-based beta contributors for Meta are expected to annotate global content, raising serious concerns about epistemic legitimacy and cultural competence. Why should a U.S. user interpreting media coverage of Israel, Eastern Europe, or the Middle East be the final word on “context”?
The sources. Contributors can link virtually any website as the source for their note. We are left trusting other contributors to assess whether the source is reliable and the information verified.
What’s at Stake for Jewish Communities?
For Jewish communities and other minorities, the risks are layered and significant.
- Structural Imbalance: Crowd-based moderation favors majority voices. Jewish users—already a minority—will be structurally outnumbered. That’s not “community moderation”; it’s majority rule.
- Unequal Burden: These systems depend on unpaid labor. Marginalized communities are now expected to monitor and correct content with no guarantee their input will be accepted or seen.
- Transparency Gaps: Will trusted Jewish partners get access to backend data? Will they be able to see which antisemitism-related notes get published—and whether they reinforce or debunk harmful stereotypes?
- Representation: Who gets selected to write Community Notes? Will platforms ensure representation from affected communities?
Could AI Offer a Better Path?
These systems have only just launched, but there should be a better model. One possibility: AI-based fact-checking. Here, users would still flag content, but instead of relying on a small, opaque pool of contributors, AI could generate notes using large-scale data patterns. Community members and experts could still play a role, helping train the system or providing feedback on edge cases, without carrying the full burden of authorship.
People are already flocking to chatbots for fact-checking, and those are proving to contribute to the problem rather than solve it. Dedicated AI systems, however, could provide better results. Such systems would not eliminate bias, but they could significantly reduce the procedural imbalance built into current crowd-sourced systems. They would also scale more effectively, offering timely moderation for the volume of content on today’s platforms.
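To make the proposal concrete, here is a deliberately hypothetical sketch of that flow: a flagged post triggers retrieval against vetted sources, a model drafts a note, and community or expert reviewers approve or reject it. Every name in it (draft_note, retrieve_sources, ReviewQueue) is invented for illustration; no platform currently exposes such an API.

```python
# Hypothetical flag -> AI draft -> human review pipeline. All names are
# invented for illustration; the retrieval and summarization steps are stubs
# standing in for real search and language-model calls.
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    post_id: str
    summary: str
    sources: list[str]
    status: str = "pending_review"   # resolved by community/expert reviewers

def retrieve_sources(claim: str) -> list[str]:
    # Stub: a real system would query vetted databases and fact-check archives.
    return ["https://example.org/fact-check/placeholder"]

def draft_note(post_id: str, claim: str) -> DraftNote:
    sources = retrieve_sources(claim)
    # Stub: a language model would summarize the claim against the sources.
    summary = f"Context for the claim: '{claim}'. See linked sources."
    return DraftNote(post_id, summary, sources)

@dataclass
class ReviewQueue:
    pending: list[DraftNote] = field(default_factory=list)

    def submit(self, note: DraftNote) -> None:
        # Experts and affected-community reviewers assess edge cases here,
        # shifting their time cost from authorship to lighter-weight review.
        self.pending.append(note)

queue = ReviewQueue()
queue.submit(draft_note("post-123", "an example of a misleading claim"))
```

The point is not this particular design but the division of labor: the machine does the time-intensive drafting, while humans, including members of affected communities, keep the final say.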
Bottom Line: Community Notes might sound democratic, but in their current form, they risk becoming another system where the loudest—or largest—voices dominate. For Jewish communities and other marginalized groups, we need safeguards: transparency, representation, and alternative models that don’t outsource responsibility without accountability.