Deepfake detectors can be defeated, computer scientists show for the first time

Systems designed to detect deepfakes, videos that manipulate real-life footage using artificial intelligence, can be deceived, computer scientists have shown for the first time. The detectors can be defeated by inserting inputs known as adversarial examples into every frame of a video. Adversarial examples are slightly perturbed inputs that cause machine learning models to make a mistake.
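
To illustrate the underlying idea, here is a minimal FGSM-style sketch in PyTorch. It is not the researchers' exact attack, only a generic one-step adversarial perturbation under assumed names: a hypothetical frame-level `detector` that outputs logits for fake/real classes, and frames given as tensors with values in [0, 1]. The perturbation nudges a fake frame toward the "real" class while keeping the change small enough to be visually imperceptible.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb_frame(detector: nn.Module,
                       frame: torch.Tensor,      # shape (3, H, W), values in [0, 1]
                       real_class: int = 1,      # assumed index of the "real" class
                       epsilon: float = 0.01) -> torch.Tensor:
    """Return a slightly perturbed copy of `frame` that pushes the detector
    toward classifying it as 'real'. One-step FGSM-style sketch; `detector`
    is any differentiable frame-level classifier (a stand-in here)."""
    frame = frame.clone().detach().requires_grad_(True)
    logits = detector(frame.unsqueeze(0))         # (1, num_classes)
    target = torch.tensor([real_class])
    loss = F.cross_entropy(logits, target)
    loss.backward()
    # Step against the gradient to decrease the loss for the 'real' label,
    # i.e. make the detector more confident the frame is genuine.
    adv = frame - epsilon * frame.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Applying the perturbation to every frame of a (hypothetical) fake video:
# adv_frames = [fgsm_perturb_frame(detector, f) for f in fake_video_frames]
```

In practice, attacks of this kind are often iterated and constrained so the perturbation survives video compression, but the one-step version above captures the core mechanism the article describes.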

Source: ScienceDaily