There is a pitched struggle underway between the makers of AI-generated fake videos and images and the forensics experts trying desperately to detect them. And the detectives are losing, Axios reports.
Now, experts are attempting an end-run: They are developing methods to verify photos and videos at the precise moment they’re taken, leaving no room for doubt about their authenticity. This could portend a cynical future in which media must leave a digital trail of breadcrumbs in order to be believed.
The consensus today is that detecting deepfakes after they’ve been created is a stopgap — not a permanent solution. As the technology progresses, it will become increasingly difficult to distinguish what is real from what isn't.
So the solution has to be to verify media as close to its creation as possible.
Several startups are working on this nascent technology.
TruePic, a venture-backed startup, wants to work with hardware manufacturers — Qualcomm, for now — to log photos and videos the instant they’re captured.
Amber, a small San Francisco startup, sends an encrypted record of photos and videos to a blockchain, so viewers can check if clips were later altered.
Serelay, based in the U.K., saves outputs from about 100 sensors in a phone — GPS, pressure sensor, gyroscope, etc. — to check the veracity of a photo.
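The capture-time approach these companies share can be illustrated with a minimal sketch: fingerprint the raw media bytes at the moment of capture, anchor that fingerprint in a tamper-evident record, and later check whether a clip still matches. This is an assumption-laden toy, not any company's actual protocol — a plain dictionary stands in for a blockchain or ledger, and the device ID and function names are invented for illustration.

```python
import hashlib
import time

def fingerprint(media_bytes: bytes) -> str:
    """Return a SHA-256 digest of the raw media bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

# A plain dict stands in for an append-only ledger (e.g. a blockchain).
ledger: dict = {}

def register_capture(media_bytes: bytes, device_id: str) -> str:
    """Record a media fingerprint at capture time (hypothetical API)."""
    digest = fingerprint(media_bytes)
    ledger[digest] = {"device": device_id, "timestamp": time.time()}
    return digest

def verify(media_bytes: bytes) -> bool:
    """True only if the media is byte-identical to a registered capture."""
    return fingerprint(media_bytes) in ledger

# Simulated capture and later verification.
original = b"\x89PNG...raw image bytes..."
register_capture(original, device_id="phone-123")
assert verify(original)                # the unmodified clip checks out
assert not verify(original + b"edit")  # any alteration breaks the match
```

Because a cryptographic hash changes completely under even a one-byte edit, a viewer can confirm a clip is untouched without trusting whoever delivered it — the core idea behind verifying media as close to creation as possible.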
Executives from all three companies told Axios that a complete solution is years away. For now, they are working with industries that need to trust incoming videos and photos — TruePic with insurers, Amber with body camera makers, and Serelay with media companies.
Perspective: Want to know just how good deepfakes have become *today*? ThisPersonDoesNotExist.com serves up a rotating gallery of pictures of different faces — but each face is completely fake and computer-generated.