How digital forensics could prove what’s real in the age of deepfakes
Summary
The article envisions a future, around 2030, in which "reality notaries" use advanced digital forensics to combat widespread deepfakes and AI-generated content, protecting individuals from fraud and false accusations.

The process begins with securing evidence: the notary computes a cryptographic hash of the file to guarantee data integrity, much as physical crime-scene evidence is sealed and logged. Investigation then proceeds to Content Credentials (C2PA), though these are often stripped when media is re-uploaded online, and to metadata analysis for inconsistencies such as mismatched timestamps. Open-source intelligence (OSINT) can surface earlier versions of the media, sometimes revealing that the content was re-recorded from a screen, betrayed by physics anomalies like screen flicker.

Deeper analysis looks for invisible watermarks (such as Google DeepMind's SynthID), flags from AI detectors, and visual artifacts like mismatched digital noise or blurring between a synthesized face and its surroundings.

In a case involving murder footage, the notary proves the video is a deepfake on three grounds: the shooter's face lacks the compression grain the rest of the frame carries, the shooter's hand dominance is inconsistent with the accused, and trigonometry applied to the scene yields a shooter height that does not match. Together these show the son's face was superimposed onto another person's body, and the composite was then recorded off a screen to forge a false certificate of authenticity.
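The evidence-securing step described above rests on a standard property of cryptographic hashes: any change to the file, however small, produces a completely different digest. A minimal sketch in Python (the byte strings stand in for real video files, which are not part of the source article):

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the SHA-256 hex digest of raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for the evidence file received at intake.
original = b"frame-by-frame video bytes"
digest_at_intake = sha256_digest(original)

# Later verification: re-hash and compare digests.
# Even a one-byte edit changes the digest entirely.
tampered = b"frame-by-frame video bytes!"  # simulated alteration
assert sha256_digest(original) == digest_at_intake  # untouched copy verifies
assert sha256_digest(tampered) != digest_at_intake  # altered copy fails
```

In practice a notary would hash the file immediately on receipt and record the digest in a tamper-evident log, so that any later copy can be checked against it.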
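The height calculation mentioned in the murder case can be illustrated with basic right-triangle geometry. This is a hedged sketch, not the article's actual method: it assumes the camera-to-subject distance and the vertical angle the figure subtends are both known (the numbers below are hypothetical), in which case height = distance × tan(angle):

```python
import math

def estimate_height(distance_m: float, subtended_angle_deg: float) -> float:
    """Estimate a standing figure's height from the camera-to-subject
    distance and the vertical angle the figure subtends in the frame,
    assuming the camera is level with the figure's feet."""
    return distance_m * math.tan(math.radians(subtended_angle_deg))

# Hypothetical scene values: subject 6 m away, subtending ~16.7 degrees.
h = estimate_height(distance_m=6.0, subtended_angle_deg=16.7)
print(f"Estimated height: {h:.2f} m")  # ~1.80 m
```

If the estimated height contradicts the height of the person whose face appears in the frame, that mismatch is evidence the face was composited onto someone else's body.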
(Source: Scientific American)