'Deepfake' war: U.S. fighting manipulated media with new weapon
WND Staff
The technology is already here to create images, scenes and scenarios on video that look real but aren’t. Such fabrications can give the false impression that someone is doing or saying something very bad.
And it’s no longer just the few with large studios and deep pockets who can create such falsehoods.
So the federal government is working on a plan to identify and expose those efforts.
The Defense Advanced Research Projects Agency, DARPA, which explores the cutting edge of tech development, explains in a report that statistical detection techniques have had some success uncovering media manipulations.
Fortunately, the agency reports, “automated manipulation capabilities used to create falsified content often rely on data-driven approaches that require thousands of training examples, or more, and are prone to making semantic errors.”
It is those errors and failures that “provide an opportunity for the defenders to gain an advantage.”
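One concrete example of such a semantic error: early face-swap generators, trained largely on photographs of open-eyed faces, produced subjects who rarely blinked. The Python sketch below illustrates how a defender might exploit that failure; it assumes a hypothetical upstream eye-state classifier supplies the per-frame data, and it is not DARPA's actual algorithm.

    # A minimal sketch of a semantic-consistency check, not DARPA's method:
    # early face-swap generators, trained mostly on open-eyed photographs,
    # produced faces that blink far less often than real people do. The
    # per-frame eye states are assumed to come from an upstream classifier.

    from typing import List

    def blink_rate(eyes_open: List[bool], fps: float) -> float:
        """Blinks per minute, counting each open-to-closed transition."""
        blinks = sum(
            1 for prev, cur in zip(eyes_open, eyes_open[1:]) if prev and not cur
        )
        minutes = len(eyes_open) / fps / 60.0
        return blinks / minutes if minutes > 0 else 0.0

    def looks_manipulated(eyes_open: List[bool], fps: float = 30.0) -> bool:
        # People typically blink roughly 15-20 times per minute; a rate far
        # below that is semantically implausible for genuine footage.
        return blink_rate(eyes_open, fps) < 5.0

    # Toy usage: 60 seconds of 30 fps video in which the subject never blinks.
    frames = [True] * (30 * 60)
    print(looks_manipulated(frames))  # True: flagged as likely manipulated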
DARPA is working on the Semantic Forensics (SemaFor) program to develop technologies that make the automated detection, attribution and characterization of falsified media assets a reality.
Its objective is to create a set of algorithms that “dramatically increase the burden on the creators of falsified media, making it exceedingly difficult for them to create compelling manipulated content that goes undetected.”
The program is focusing on semantic detection, attribution and characterization.
“At the intersection of media manipulation and social media lies the threat of disinformation designed to negatively influence viewers and stir unrest,” said Matt Turek, a program manager.
“While this sounds like a scary proposition, the truth is that not all media manipulations have the same real-world impact. The film industry has used sophisticated computer generated editing techniques for years to create compelling imagery and videos for entertainment purposes. More nefarious manipulated media has also been used to target reputations, the political process, and other key aspects of society. Determining how media content was created or altered, what reaction it’s trying to achieve, and who was responsible for it could help quickly determine if it should be deemed a serious threat or something more benign.”
The manipulation, which can be done to audio, images, video and text, is becoming more advanced.
“There is a difference between manipulations that alter media for entertainment or artistic purposes and those that alter media to generate a negative real-world impact. The algorithms developed on the SemaFor program will help analysts automatically identify and understand media that was falsified for malicious purposes,” said Turek.
The agency explained its plan: “Semantic detection algorithms will determine if multi-modal media assets were generated or manipulated, while attribution algorithms will infer if the media originated from a purported organization or individual. Determining how the media was created, and by whom could help determine the broader motivations or rationale for its creation, as well as the skillsets at the falsifier’s disposal. Finally, characterization algorithms will reason about whether multi-modal media was generated or manipulated for malicious purposes.”
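To make those three algorithm families concrete, here is a hypothetical Python sketch of how their outputs might be combined into an analyst triage decision. The Verdict fields, score values and threshold are invented for illustration; DARPA has not published such an interface.

    # A hypothetical composition of SemaFor's three algorithm families into a
    # triage step. The scores and threshold are invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Verdict:
        manipulated: float    # detection: was the asset generated or altered?
        misattributed: float  # attribution: is the claimed source false?
        malicious: float      # characterization: was the intent malicious?

    def triage(v: Verdict, threshold: float = 0.5) -> str:
        """Mirror the detect, attribute, characterize flow described above."""
        if v.manipulated < threshold:
            return "likely authentic"
        if v.malicious >= threshold or v.misattributed >= threshold:
            return "flag for analysts: manipulated and likely malicious"
        return "benign manipulation, e.g. entertainment or art"

    print(triage(Verdict(manipulated=0.9, misattributed=0.8, malicious=0.7)))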