
Synthetic media forensics encompasses techniques and tools for detecting, analyzing, and attributing AI-generated or heavily manipulated content, including deepfakes, AI-generated images, synthetic audio, and manipulated video. The field combines several approaches: artifact detection, which identifies inconsistencies left by generation algorithms; watermarking and provenance tracking, which embed verifiable metadata in content; signal analysis, which examines the statistical properties of media; and machine learning classifiers trained to recognize synthetic content. Together these tools help verify authenticity, investigate misinformation, and maintain trust in digital media as synthetic content becomes more realistic and widespread.
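The artifact-detection and signal-analysis approaches above can be illustrated with a toy spectral check: upsampling layers in many generator architectures leave periodic "checkerboard" energy in the high-frequency bands of an image's Fourier spectrum. The sketch below, assuming only NumPy, compares the high-frequency energy share of smoothed noise against the same noise with a checkerboard pattern added; the function names and the 0.75 cutoff are illustrative assumptions, not a calibrated detector.

```python
import numpy as np

def spectral_energy_profile(img: np.ndarray) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image.

    Periodic generator artifacts show up as peaks in the outer
    (high-frequency) part of this radial profile.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return sums / np.maximum(counts, 1)  # mean power at each radius

def high_freq_ratio(img: np.ndarray, cutoff: float = 0.75) -> float:
    """Share of the radial profile lying beyond `cutoff` of max radius.

    The 0.75 cutoff is an illustrative choice, not a tuned threshold.
    """
    profile = spectral_energy_profile(img)
    k = int(len(profile) * cutoff)
    return float(profile[k:].sum() / profile.sum())

# Toy comparison: low-pass-filtered "natural" noise vs. the same noise
# plus a +/-0.5 checkerboard mimicking transposed-convolution artifacts.
rng = np.random.default_rng(0)
natural = rng.normal(size=(128, 128))
natural = (natural + np.roll(natural, 1, 0) + np.roll(natural, 1, 1)) / 3
checker = natural + 0.5 * ((np.indices((128, 128)).sum(axis=0) % 2) * 2 - 1)

print(high_freq_ratio(natural), high_freq_ratio(checker))
```

The checkerboard concentrates energy at the Nyquist frequency, so its high-frequency ratio is markedly higher; real detectors use the same intuition but learn the spectral signature from data rather than hard-coding a cutoff.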
The technology addresses the growing threat of synthetic media being used for misinformation, fraud, and manipulation as AI generation tools become more accessible and their output more convincing. Forensic tools help identify fake content, trace its origin, and provide evidence of manipulation. Applications include journalism and fact-checking, law enforcement investigations, social media platform moderation, legal proceedings where media authenticity matters, and protecting individuals from deepfake attacks. Companies, research institutions, and standards bodies are all developing forensic tools and techniques.
At TRL 5, synthetic media forensics tools are available and in active use, though detection accuracy and robustness must continually improve as generation techniques advance. Key challenges include keeping pace with rapidly improving generators, reducing false positives and false negatives, detecting high-quality synthetic content with minimal artifacts, and ensuring tools work across diverse content types. As synthetic media becomes more prevalent, forensic capabilities grow correspondingly important. The technology could help maintain trust in digital media by enabling detection of synthetic content, supporting misinformation investigations, and providing verification tools. It remains an arms race, however: forensic methods require continuous development to stay effective as generation quality improves.
An open technical standards body addressing the prevalence of misleading information online through content provenance.
Provides an enterprise platform for deepfake detection across audio, video, and image formats using multi-model analysis.
Runs the Semantic Forensics (SemaFor) program to develop technologies for automatically detecting, attributing, and characterizing falsified media.
Focuses on image provenance and authentication, helping verify that media has not been altered (the complement of detection: proving content is authentic rather than proving it is fake).
Develops both generative dubbing tools and deepfake detection algorithms for government use.
Specializes in voice security and authentication, actively developing liveness detection to stop audio deepfakes.
Specializes in visual threat intelligence and deepfake detection, monitoring the web for malicious synthetic media.
Provides cloud-based AI models for content moderation, including detection of NSFW content, hate symbols, and AI-generated media.
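The provenance side of the field can be made concrete with a deliberately simple sketch: hiding a payload in an image's least-significant bits. This is a toy, fragile watermark, assumed for illustration only; the `embed_watermark`/`extract_watermark` names are hypothetical, and production provenance systems such as C2PA attach cryptographically signed manifests rather than hiding bits in pixels, precisely because pixel-level marks do not survive re-encoding.

```python
import numpy as np

def embed_watermark(img: np.ndarray, message: bytes) -> np.ndarray:
    """Embed `message` in the least-significant bits of a uint8 image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = img.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("image too small for message")
    # Clear each target pixel's LSB, then write one payload bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(img.shape)

def extract_watermark(img: np.ndarray, n_bytes: int) -> bytes:
    """Read back `n_bytes` of payload from the LSB plane."""
    bits = img.ravel()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.full((32, 32), 128, dtype=np.uint8)  # toy 8-bit grayscale image
stamped = embed_watermark(cover, b"cam:XYZ-1")  # hypothetical device tag
print(extract_watermark(stamped, 9))

# Fragility: wiping the LSB plane (re-encoding has a similar effect)
# destroys the payload -- the reason provenance standards favour signed
# metadata manifests over bits hidden in pixel values.
degraded = stamped & 0xFE
print(extract_watermark(degraded, 9))
```

The first extraction recovers the tag; the second does not, illustrating why robust provenance pairs embedded signals with out-of-band signed metadata.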