In an effort to protect the integrity of our elections, the Defense Department has made it its mission to stop “large-scale, automated disinformation attacks.”
Bloomberg writes that “The Defense Advanced Research Projects Agency wants custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips. If successful, the system after four years of trials may expand to detect malicious intent and prevent viral fake news from polarizing society.”
According to experts, a great deal of progress has been made over the years in fighting “fake news” and those intent on spreading disinformation. “A decade ago, today’s state-of-the-art would have registered as sci-fi — that’s how fast the improvements have come,” said Andrew Grotto at the Center for International Security at Stanford University. “There is no reason to think the pace of innovation will slow any time soon.”
This has been a serious issue for quite some time. According to Yahoo!:
After the 2016 election, Facebook Chief Executive Officer Mark Zuckerberg played down fake news as a challenge for the world’s biggest social media platform. He later signaled that he took the problem seriously and would let users flag content and enable fact-checkers to label stories in dispute. These judgments subsequently prevented stories from being turned into paid advertisements, which were one key avenue toward viral promotion.
In June, Zuckerberg said Facebook made an “execution mistake” when it didn’t act fast enough to identify a doctored video of House Speaker Nancy Pelosi in which her speech was slurred and distorted.
“Where things get especially scary is the prospect of malicious actors combining different forms of fake content into a seamless platform,” Grotto said. “Researchers can already produce convincing fake videos, generate persuasively realistic text, and deploy chatbots to interact with people. Imagine the potential persuasive impact on vulnerable people that integrating these technologies could have: an interactive deepfake of an influential person engaged in AI-directed propaganda on a bot-to-person basis.”
So where does the military fit in? Basically, it will serve as another backstop, preventing fake news from going viral.
“A comprehensive suite of semantic inconsistency detectors would dramatically increase the burden on media falsifiers, requiring the creators of falsified media to get every semantic detail correct, while defenders only need to find one, or a very few, inconsistencies,” the agency said in its Aug. 23 concept document for the Semantic Forensics program.
What are these “inconsistencies,” you might ask? According to the agency, its software could catch mismatched earrings in a fake video or photo, for example. Other indicators, which may be noticed by humans but missed by machines, include weird teeth, messy hair and unusual backgrounds.
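To make that asymmetry concrete, here is a minimal sketch of how a suite of narrow inconsistency detectors could be wired together: each detector checks a single semantic detail, and one finding is enough to flag the media. Everything below (the attribute names and the two toy detectors) is a hypothetical illustration, not DARPA’s actual software.

```python
from typing import Callable, Dict, List, Optional

# Attributes extracted from a photo or video frame by upstream vision
# models; the keys here are hypothetical stand-ins for illustration.
Attributes = Dict[str, str]

# A detector inspects one semantic detail and returns a finding, or None.
Detector = Callable[[Attributes], Optional[str]]

def mismatched_earrings(attrs: Attributes) -> Optional[str]:
    if attrs.get("earring_left") != attrs.get("earring_right"):
        return "left and right earrings do not match"
    return None

def inconsistent_lighting(attrs: Attributes) -> Optional[str]:
    if attrs.get("face_light_direction") != attrs.get("scene_light_direction"):
        return "lighting on the face disagrees with the scene"
    return None

DETECTORS: List[Detector] = [mismatched_earrings, inconsistent_lighting]

def run_suite(attrs: Attributes) -> List[str]:
    """Return every inconsistency found; a single finding flags the media,
    while a forger has to satisfy every detector at once."""
    findings = []
    for detector in DETECTORS:
        message = detector(attrs)
        if message is not None:
            findings.append(message)
    return findings

if __name__ == "__main__":
    frame = {
        "earring_left": "gold hoop",
        "earring_right": "silver stud",  # one slipped detail is enough
        "face_light_direction": "left",
        "scene_light_direction": "left",
    }
    print(run_suite(frame))  # ['left and right earrings do not match']
```

The design point is that each new detector is cheap for the defender to add, but it is one more semantic detail the forger has to get exactly right.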
The algorithm testing process involves evaluating hundreds of thousands of pieces of content, from articles to videos to radio interviews. The program has three separate phases that will run over the course of four years. The first phase will focus on news, with later phases moving on to tackling propaganda.
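The article doesn’t describe DARPA’s actual test protocol, but as a rough illustration, scoring a detector over a labeled corpus might look something like the sketch below; the corpus format and the detect callback are assumptions made for the example.

```python
from typing import Callable, Dict, Iterable, Tuple

def evaluate(
    corpus: Iterable[Tuple[dict, bool]],
    detect: Callable[[dict], bool],
) -> Dict[str, float]:
    """Score a detector over (item, is_fake) pairs from a labeled corpus."""
    tp = fp = tn = fn = 0
    for item, is_fake in corpus:
        flagged = detect(item)
        if flagged and is_fake:
            tp += 1          # fake correctly flagged
        elif flagged and not is_fake:
            fp += 1          # genuine content wrongly flagged
        elif is_fake:
            fn += 1          # fake that slipped through
        else:
            tn += 1          # genuine content correctly passed
    return {
        "precision": tp / (tp + fp) if tp + fp else 0.0,
        "recall": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

if __name__ == "__main__":
    # Tiny toy corpus; a real trial would span hundreds of thousands of items.
    corpus = [
        ({"source": "article"}, True),
        ({"source": "video"}, False),
        ({"source": "radio"}, True),
    ]
    print(evaluate(corpus, lambda item: True))  # a detector that flags everything
```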
“Mirroring this rise in digital imagery is the associated ability for even relatively unskilled users to manipulate and distort the message of the visual media,” according to the agency’s website. “While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns.”
The long timeframe means this program won’t have any impact on the 2020 election. At the very least, though, it shows that the Trump administration is taking the threat of “fake news” and foreign interference very seriously.