Deepfakes are getting out of hand, and detecting them has become crucial. As AI continues to evolve, the rise of deepfakes (manipulated media content that appears real) is creating many challenges. To help differentiate between genuine and AI-generated content, Siwei Lyu, a computer science professor at the University at Buffalo, has created a tool to detect them easily: the DeepFake-o-meter, The Guardian reports.
The meter is accessible to all, free of charge, but it still needs some work. The DeepFake-o-meter is an open-source tool that integrates more than a dozen detection algorithms from various research labs. Users can upload suspected content, and the tool reports the likelihood, on a scale from 0 to 100 percent, that it is fake. But we can't rely on it completely; the online tool still needs humans involved in the process. The algorithms can only predict based on what they have been trained on, and they can improve the more we use them.
Deepfakes come in three forms: audio, photos, and videos. Audio is one of the hardest to detect because of the reliance on sound alone. Remember the viral robocall in which US President Joe Biden's voice was used to urge voters not to participate in the primary election? The tell-tale sign that it was AI-generated audio was that it lacked natural emotion and a conversational tone. Another giveaway was the inconsistent background noise.
Photos are much easier to examine: focus on fingers, extra lines, odd shadows, or whether the image looks airbrushed. One example is a deepfake photo of Trump with Black voters, used as part of a strategy to pull in more voters. Videos, however, are harder to fake. Clues include stiff movements, unnatural blinking, and repeated facial expressions, as seen in the fabricated video of President Volodymyr Zelensky telling his armed forces to surrender to Russia.