
An open technical standards body addressing the prevalence of misleading information online through content provenance.
Provides an enterprise platform for deepfake detection across audio, video, and image formats using multi-model analysis.

Specializes in visual threat intelligence and deepfake detection, monitoring the web for malicious synthetic media.
Runs the Semantic Forensics (SemaFor) program to develop technologies for automatically detecting, attributing, and characterizing falsified media.
Focuses on image provenance and authentication, helping verify that media has not been altered (the complement of detection: proving authenticity rather than spotting fakes).
Provides a deepfake scanner tool designed to detect synthetic manipulation in videos.
Specializes in voice security and authentication, actively developing liveness detection to stop audio deepfakes.
Provides liveness detection software to prevent identity theft via deepfakes or masks during biometric verification.
Deepfake detection networks combine vision transformers, audio forensics, and watermark validators trained against ever-changing generative model families. They look for physiological inconsistencies, pixel-level blending artifacts, and speech spectral anomalies, fusing those scores with cryptographic provenance (C2PA, watermark hashes) to decide whether a clip is trustworthy. Many run as containerized microservices so news organizations can keep inference on-prem and update weights weekly.
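The fusion step described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the detector names, weights, threshold, and the policy of halving suspicion when a valid C2PA manifest is present are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class DetectorScores:
    """Per-clip outputs from independent models (0 = authentic, 1 = synthetic)."""
    vision: float      # vision-transformer frame analysis
    audio: float       # speech spectral forensics
    watermark: float   # watermark-validator suspicion score

def fuse_scores(scores: DetectorScores, has_valid_c2pa: bool,
                weights=(0.5, 0.3, 0.2), threshold=0.6) -> bool:
    """Return True if the clip should be flagged as likely synthetic.

    Weights and threshold are illustrative, not tuned values.
    """
    fused = (weights[0] * scores.vision
             + weights[1] * scores.audio
             + weights[2] * scores.watermark)
    if has_valid_c2pa:
        fused *= 0.5  # trusted provenance halves suspicion (assumed policy)
    return fused >= threshold

# Strong visual and audio anomalies, no provenance -> flagged
print(fuse_scores(DetectorScores(0.9, 0.7, 0.4), has_valid_c2pa=False))  # True
```

In practice the weights would be learned from labeled corpora and recalibrated as new generative model families appear, but the fusion-then-threshold structure stays the same.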
Newsrooms wire the detectors directly into ingest systems, so user-submitted footage, agency feeds, and social clips receive authenticity scores before reaching producers. Flagged segments trigger human review, and downstream platforms receive metadata describing the findings, enabling contextual labels on OTT services or social networks. Political campaigns and sports leagues also deploy the tech to protect live events from real-time manipulation attempts.
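An ingest hook of the kind described might look like the sketch below. The function and field names (`score_clip`, `authenticity_score`, `REVIEW_QUEUE`) are hypothetical; a real deployment would call the on-prem inference service rather than the stub used here.

```python
from dataclasses import dataclass, field

@dataclass
class Clip:
    source: str                 # e.g. "user-submitted", "agency-feed"
    url: str
    metadata: dict = field(default_factory=dict)

def score_clip(clip: Clip) -> float:
    """Stub for the detector ensemble; returns a 0-1 suspicion score."""
    # Placeholder logic for illustration only.
    return 0.8 if "suspect" in clip.url else 0.1

REVIEW_QUEUE: list[Clip] = []

def ingest(clip: Clip, flag_threshold: float = 0.6) -> Clip:
    """Attach an authenticity score at ingest; route flagged clips to review."""
    score = score_clip(clip)
    clip.metadata["authenticity_score"] = score
    clip.metadata["flagged"] = score >= flag_threshold
    if clip.metadata["flagged"]:
        REVIEW_QUEUE.append(clip)  # human review before the clip reaches producers
    return clip
```

Because the score travels in the clip's metadata, downstream platforms can render contextual labels without re-running inference themselves.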
Arms races continue: open-source model releases quickly invalidate many detectors, and regulators demand transparency about false positives. Europe’s DSA, India’s IT Rules, and the US White House watermarking commitments push broadcasters to disclose provenance data to viewers. Vendors now ship explainability dashboards and adversarial training toolkits, suggesting that deepfake detection will remain an active, continuously updated layer of every professional media supply chain.