Tech giants such as Microsoft and Facebook have recently been developing software that can accurately detect videos forged with deepfake neural networks. As it turns out, such detection systems are easy to deceive, as researchers from the United States have now demonstrated.
A group of scientists from the University of California, San Diego presented the results of the study at the WACV 2021 computer vision conference. According to them, it is enough to use so-called "adversarial" examples: subtly modified input data inserted into each separate frame of the video stream. This method turned out to work even after the video has been compressed and post-processed.
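The paper's actual attack is more involved, but the core idea of a per-frame adversarial perturbation can be sketched with a toy differentiable detector. Everything below is an illustrative assumption, not the authors' code: a simple logistic model stands in for a real CNN detector, and a single FGSM-style gradient step shifts each pixel slightly so the detector's "fake" score drops while the frame looks unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in detector: logistic regression over flattened
# 64x64 grayscale frames. Real detectors are deep CNNs; this toy model
# only illustrates the gradient-based perturbation idea. Small weights
# keep the scores away from 0/1 saturation.
w = rng.normal(size=64 * 64) * 0.05
b = 0.0

def detector_score(frame):
    """Probability that `frame` is fake, according to the toy detector."""
    z = frame.ravel() @ w + b
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_frame(frame, eps=0.03):
    """One FGSM-style step: shift every pixel by eps against the sign of
    the gradient of the 'fake' score, lowering the detector's confidence
    while changing the frame imperceptibly."""
    p = detector_score(frame)
    grad = (p * (1.0 - p) * w).reshape(frame.shape)  # analytic gradient
    adv = frame - eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)  # stay in the valid pixel range

# Apply the perturbation to every frame of a stand-in "video",
# mirroring the article's point that the change is made per frame.
frames = rng.random((5, 64, 64))
adv_frames = np.stack([adversarial_frame(f) for f in frames])

before = [detector_score(f) for f in frames]
after = [detector_score(f) for f in adv_frames]
```

Making such perturbations survive compression and post-processing, as the article notes the researchers achieved, requires additional machinery not shown in this sketch.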
Neural networks capable of detecting fake video concentrate on elements that modern deepfake algorithms still cannot reproduce convincingly, above all eye blinking and a few similar cues. The researchers found that, to exploit this, an attacker only needs knowledge of how the detection software operates.
If the authors of a deepfake had full (white-box) access to the detector model, the probability of deceiving the system reached 99%. With only partial access to the software, the chances of creating a video sequence that passed the test dropped to 86%. The authors declined to publish the attack method publicly.
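The gap between full and partial access has a simple intuition: without the model's internals, an attacker can still estimate the gradient purely from the detector's output scores, at the cost of many queries and a noisier result. A minimal sketch of this query-based (finite-difference) gradient estimation, again against a hypothetical toy detector rather than the authors' actual target:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden detector the attacker cannot inspect, only query.
# (Toy logistic model standing in for a real CNN detector; the small
# weight scale keeps scores away from 0/1 saturation.)
_w = rng.normal(size=256) * 0.1

def detector_score(x):
    """Black-box oracle: probability that input `x` is fake."""
    return 1.0 / (1.0 + np.exp(-(x @ _w)))

def estimate_gradient(x, n_samples=200, sigma=1e-3):
    """Estimate the gradient of the score using only black-box queries:
    average symmetric finite differences along random directions."""
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.shape)
        delta = detector_score(x + sigma * u) - detector_score(x - sigma * u)
        grad += (delta / (2.0 * sigma)) * u
    return grad / n_samples

x = rng.random(256)                        # stand-in flattened frame
g = estimate_gradient(x)                   # costs 2 * 200 queries
x_adv = np.clip(x - 0.05 * np.sign(g), 0.0, 1.0)
```

Because the estimated gradient is noisy, some pixels move in the wrong direction, which is one intuition for why partial-access success rates (86%) trail full-access ones (99%).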
Last Updated on February 14, 2021 by Admin