
A network analysis company that maps social media landscapes to detect disinformation and coordinated inauthentic behavior.
A technology company detecting disinformation and social media manipulation using machine learning.

United States · Startup
Uses AI to detect narrative manipulation and disinformation risks for enterprises and governments.
Network Contagion Research Institute (NCRI)
United States · Nonprofit
A neutral and independent third party that tracks, exposes, and combats misinformation and hate across social media.
A cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies.
Provides a trust and safety platform for online platforms to detect malicious content and actors.
A social threat intelligence platform that uncovers fake accounts, bots, and disinformation campaigns.
Combines AI with expert human analysis to detect and mitigate disinformation and harmful content online.
Provides trust ratings for news websites using a team of journalists, creating a dataset used by AI and platforms.

Primer.ai
United States · Company
An AI company providing natural language processing and knowledge graph generation for intelligence analysts.
Authenticity graph modeling tools ingest provenance metadata, social graph interactions, financial disclosures, and platform trust signals to build knowledge graphs describing who cites whom and how narratives propagate. They run graph neural networks to spot sudden bursts of coordination, cross-reference with watermark or C2PA attestations, and surface nodes whose credibility ratings differ across communities. Visual dashboards help researchers trace how a manipulated clip jumped platforms or where legitimate sources cluster.
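The core idea above can be sketched in miniature. The following is a hypothetical, simplified model (the class and method names are illustrative, not any vendor's API): edges record who amplified which narrative and when, each account carries a provenance-derived trust score, and a burst score flags windows where many low-trust accounts push the same narrative at once, which is the kind of sudden coordination a graph neural network would surface at scale.

```python
from collections import defaultdict

class AuthenticityGraph:
    """Toy authenticity graph: accounts amplify narratives over time.
    Trust scores stand in for provenance/C2PA-derived signals."""

    def __init__(self):
        self.edges = []   # (account, narrative, timestamp)
        self.trust = {}   # account -> trust score in [0, 1]

    def add_share(self, account, narrative, ts, trust=0.5):
        self.edges.append((account, narrative, ts))
        self.trust.setdefault(account, trust)

    def burst_score(self, narrative, window, now):
        """Count shares of `narrative` within the last `window` time units,
        weighting each by (1 - sharer trust): a sudden burst from many
        low-trust accounts yields a high score; trusted sources add little."""
        recent = [a for a, n, t in self.edges
                  if n == narrative and now - t <= window]
        return sum(1.0 - self.trust[a] for a in recent)

g = AuthenticityGraph()
# 20 low-trust accounts amplify the same clip within one window...
for i in range(20):
    g.add_share(f"bot{i}", "clipX", ts=100 + i, trust=0.1)
# ...alongside one high-trust newsroom share.
g.add_share("newsroom", "clipX", ts=90, trust=0.95)

score = g.burst_score("clipX", window=30, now=120)
flagged = score > 10  # threshold routes the narrative to analyst review
```

In a production system the threshold and trust scores would come from learned models and attestation checks rather than constants, but the graph-plus-temporal-window structure is the same.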
Newsrooms, election regulators, and brand safety teams use these graphs to vet user-generated submissions, prioritize fact-checking, and design intervention strategies that reinforce trusted voices. Streaming platforms feed authenticity scores into recommendation algorithms, while advertisers query the graphs before booking creator partnerships. The tooling also supports restorative workflows, such as highlighting underrepresented sources with strong trust metrics.
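As a hedged illustration of the recommendation use case, a platform might blend a graph-derived authenticity score into its ranking function so that low-authenticity virality is demoted. The blend weight and score values below are assumptions for the sketch, not any platform's real formula.

```python
def rank(items, engagement, authenticity, w_auth=0.3):
    """Order candidate items by a weighted blend of predicted engagement
    and authenticity, so manipulated-but-viral content sinks in the feed."""
    def blended(item):
        return (1 - w_auth) * engagement[item] + w_auth * authenticity[item]
    return sorted(items, key=blended, reverse=True)

items = ["viral_clip", "verified_report", "creator_video"]
engagement   = {"viral_clip": 0.9, "verified_report": 0.6, "creator_video": 0.7}
authenticity = {"viral_clip": 0.1, "verified_report": 0.95, "creator_video": 0.8}

order = rank(items, engagement, authenticity)
# The high-engagement but low-authenticity clip is ranked last.
```

Advertisers querying the graph before a creator partnership would read the same `authenticity` signal directly rather than the blended rank.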
Data access is the biggest hurdle for this still-maturing technology (TRL 3–4): platforms guard their APIs, and privacy laws limit raw data sharing. Initiatives like the Coalition for Content Provenance and Authenticity (C2PA) and the Trust Project provide standardized signals, and regulators increasingly require transparency reports. As more provenance data becomes machine-readable, authenticity graphs will underpin newsroom CMSs and social listening suites, acting as radar systems for information integrity.