
Operates 'Community Notes' (formerly Birdwatch), the most prominent crowdsourced verification system operating at scale.
The UK's independent fact-checking charity, which builds automated tools (Full Fact AI) to help fact-checkers identify when claims are repeated.

United Kingdom · Company
Combines AI with expert human analysis to detect and mitigate disinformation and harmful content online.
Builds 'Check', an open-source platform for collaborative digital media verification used by newsrooms and NGOs.
Provides trust ratings for news websites using a team of journalists, producing a dataset licensed to AI companies and platforms.
United States · University
Home of the Tech & Check Cooperative and developers of ClaimBuster, an automated live fact-checking tool.
Factmata
United Kingdom · Company
Developed AI tools to score content for hate speech and propaganda (acquired by Cision).
Belgium · Consortium
An EU-funded research project (Horizon Europe) developing AI tools for disinformation analysis and verification.
Provides risk ratings for news domains to help advertisers avoid funding disinformation, using a mix of AI and human review.
United States · Consortium
A consortium of news organizations setting standards for transparency and trust indicators in digital news.
Collaborative truth-verification platforms layer AI heuristics (claim detection, source clustering, semantic similarity) with crowdsourced review workflows modeled after Wikipedia or GitHub. Users submit claims, AI surfaces supporting or contradicting evidence, and accredited reviewers vote, attach citations, and sign cryptographic attestations. The result is an auditable ledger describing how each verdict was reached, with provenance tokens that publishers can embed next to articles or videos.
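The auditable ledger described above can be sketched in a few lines. The following is a hypothetical, simplified model (the class names, HMAC-based attestations, and hash-chaining scheme are illustrative assumptions, not any specific platform's implementation): each verdict entry chains to the previous one by hash, and reviewers attach keyed-hash attestations standing in for real cryptographic signatures.

```python
import hashlib
import hmac
import json
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    claim: str
    verdict: str                 # e.g. "supported", "contradicted", "unclear"
    citations: list
    prev_hash: str               # digest of the previous ledger entry
    attestations: dict = field(default_factory=dict)

    def digest(self) -> str:
        # Canonical JSON serialization so the digest is reproducible.
        payload = json.dumps(
            {"claim": self.claim, "verdict": self.verdict,
             "citations": self.citations, "prev": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def attest(self, reviewer_id: str, key: bytes) -> None:
        # A reviewer signs the entry digest. In production this would be
        # an asymmetric signature, not a shared-key HMAC.
        sig = hmac.new(key, self.digest().encode(), hashlib.sha256).hexdigest()
        self.attestations[reviewer_id] = sig

class VerificationLedger:
    def __init__(self):
        self.entries = []

    def head_hash(self) -> str:
        return self.entries[-1].digest() if self.entries else "genesis"

    def append(self, entry: LedgerEntry) -> None:
        # Reject entries that do not chain to the current ledger head.
        if entry.prev_hash != self.head_hash():
            raise ValueError("entry does not chain to ledger head")
        self.entries.append(entry)

# Usage: publish one verdict with two reviewer attestations.
ledger = VerificationLedger()
entry = LedgerEntry(
    claim="City X banned bicycles in 2024",
    verdict="contradicted",
    citations=["https://example.org/city-x-council-minutes"],
    prev_hash=ledger.head_hash())
entry.attest("reviewer-a", b"key-a")
entry.attest("reviewer-b", b"key-b")
ledger.append(entry)
```

The chained digests give publishers a stable provenance token (the head hash) to embed next to content, and any tampering with an earlier verdict breaks every hash downstream.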
Civic groups, social platforms, and brands deploy these systems during elections or crises to triage viral claims and coordinate responses. OTT services integrate verdict badges into player interfaces, while messaging apps expose fact-checking bots that tap the same ledger. Some implementations reward contributors with reputation points or micro-payments funded by philanthropies and news consortiums.
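The reputation-point schemes mentioned above typically reward reviewers whose votes match the final consensus verdict. A minimal sketch under assumed parameters (the reward and penalty values, function name, and reviewer names are all hypothetical):

```python
# Illustrative reward scheme: reviewers gain points when their vote matches
# the consensus verdict and lose a smaller amount when it does not.
MATCH_REWARD = 10
MISMATCH_PENALTY = 3

def update_reputation(scores: dict, votes: dict, consensus: str) -> dict:
    """Return a new reputation table after one claim is resolved."""
    updated = dict(scores)
    for reviewer, vote in votes.items():
        delta = MATCH_REWARD if vote == consensus else -MISMATCH_PENALTY
        # Reputation is floored at zero so penalties cannot go negative.
        updated[reviewer] = max(0, updated.get(reviewer, 0) + delta)
    return updated

scores = update_reputation(
    {"ana": 50, "ben": 4},
    {"ana": "contradicted", "ben": "supported"},
    consensus="contradicted")
# ana gains the match reward (60); ben loses the penalty (1).
```

Asymmetric rewards like this are one way to discourage careless voting while keeping the cost of honest disagreement low.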
Maintaining trust in these platforms, which remain at an early stage of maturity (around TRL 4), requires governance: councils define reviewer tiers, bias audits are published, and appeals mechanisms exist. Projects like Meedan, Full Fact, and MIT's PACT framework are pioneering shared schemas, and regulators look to these platforms as a blueprint for co-regulation. As misinformation campaigns grow more sophisticated, collaborative verification will become a frontline defense that complements platform moderation.