[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s post, heralding the advent of the post-truth era with the convergence of deepfakes, AI-generated bodies and faces, and AI writing technologies. These tools are revolutionizing the nature of competition and could have a devastating impact on nations’ will to fight once competition has transitioned into armed conflict — Beware! (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]
“Three things cannot be long hidden: the sun, the moon, and the truth” – Siddhārtha Gautama, the Buddha
Even this quote is not entirely truthful. What the Buddha really said was, “Monks, there are these three things which shine forth for all to see, which are not hidden. Which three? The disc of the moon shines for all to see; it is not hidden. The disc of the sun does likewise. The Dhamma-Discipline [dhamma-vinaya] of a Tathagata [Buddha] shines for all to see; it is not hidden. These are the three things.”
But what if the truth becomes increasingly hard to discern? What if authenticity (i.e., full trustworthiness) is actually dying? The advent of the Internet brought with it the global spread of a myriad of hoaxes, urban myths, and the dreaded fake news. During the first decade of the twenty-first century, fake celebrity death reports spread like wildfire on a near-weekly basis.
While propaganda, deception, and information warfare have existed in some form or fashion from ancient times through modern history (e.g., Soviet maskirovka), the convergence of technology with these political/warfare disciplines has truly weaponized disinformation on social media and throughout the political arena. This new era of information warfare seeks not necessarily to change opinions, but to erode trust in conventional institutions, induce trepidation and doubt, and instill a sense of indecisiveness that allows adversaries and nefarious actors to achieve their ends as a fait accompli.
The emergence of weaponized social media, as typified in P.W. Singer and Emerson T. Brooking’s “LikeWar — The Weaponization of Social Media,” is potentially just the tip of the iceberg compared with some disruptive technologies emerging in the artificial intelligence (AI) and machine learning (ML) sector. There are three specific AI/ML applications that could bring about the Death of Authenticity:
1) Deepfakes – Videos that are constructed to make a person appear to say or do something that they never said or did (similar to the appearances of Presidents John F. Kennedy, Lyndon B. Johnson, and Richard M. Nixon with Forrest Gump in the eponymous 1994 movie). AI has improved this capability so greatly that it is extremely difficult to discern deepfakes from real video by the naked eye and ear – as seen in recent examples such as acclaimed director Jordan Peele’s video of President Obama. Deepfakes alarm national security experts because they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. Significant efforts are underway to use the same technologies enabling deepfakes – AI/ML – to detect and counter them.
2) AI-Generated Bodies and Faces – AI-driven Generative Adversarial Networks (GANs) are being used to generate entirely original, fake faces and even whole bodies. While this technology has commercial applications in areas such as video game design, online clothing sales, and human resources, it also has a profound impact on information warfare. Troll and bot armies are of increasing concern to military and government officials who worry about their effects on political environments and electoral outcomes. Now imagine that same threat to the political and governmental landscape, with its psychological effects amplified by realistic bodies and faces that humanize such bots (a minimal sketch of the adversarial training loop behind GANs follows this list).
3) AI Writing – A text generation tool created by OpenAI, a research institute based in San Francisco, can now compose original text in realistic prose. The tool is continuing to improve, generating convincing headlines, posts, articles, and comments entirely free from human input. AI’s ability to generate new, fictional material is not in and of itself a significant concern – humans can and do create such material today (see The Onion and other satirical sites). What is worrying is the scale at which this can be accomplished (see the generation sketch following this list). What if AI were to generate hundreds of thousands, if not millions, of comments or posts aimed at either supporting or undermining a specific issue or cause?
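As a concrete illustration of item 2, the toy sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to produce samples that a discriminator cannot distinguish from real ones, while the discriminator learns to tell them apart. This is a minimal sketch in PyTorch; the layer sizes are arbitrary and random vectors stand in for real face images, whereas production face generators train deep convolutional networks on millions of photos.

```python
# Minimal GAN sketch (PyTorch): the adversarial loop behind AI-generated faces.
# Toy setup for illustration only; random vectors stand in for a face dataset.
import torch
import torch.nn as nn

LATENT, IMG, BATCH = 64, 784, 32   # noise size, flattened "image" size, batch (assumed)

generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, IMG), nn.Tanh(),           # fake sample scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),          # estimated P(sample is real)
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(BATCH, IMG)            # placeholder for a batch of real faces
    fake = generator(torch.randn(BATCH, LATENT))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: adjust weights so the discriminator calls fakes "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The two networks improve in lockstep; once training converges, sampling a fresh noise vector yields a novel “face” that belongs to no living person.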
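For item 3, the scale concern is easy to see in code. The sketch below assumes the small, publicly released version of OpenAI’s GPT-2 model, accessed here through the Hugging Face transformers library; the prompt and the number of variants are illustrative assumptions, but looping these few lines over thousands of prompts is all it would take to flood a comment section.

```python
# Text-generation sketch: machine-written comments at scale.
# Assumes `pip install transformers torch` and the small public GPT-2 weights.
from transformers import pipeline, set_seed

set_seed(42)                                   # reproducible output for the demo
generator = pipeline("text-generation", model="gpt2")

prompt = "Commenters reacted to the senator's proposal by saying"  # hypothetical prompt
outputs = generator(prompt, max_length=60, num_return_sequences=5)

for out in outputs:
    print(out["generated_text"])
    print("---")
```

Each call returns several plausible-sounding continuations; an operator could pair them with GAN-generated profile photos to mass-produce “citizens” with opinions.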
The convergence of these three technologies could spell the death of authenticity. How will the masses cope with a steady stream of AI-generated deepfakes conveying mixed messages, and troll armies that are indistinguishable from their fellow citizens, students, and Soldiers? A constant bombardment of messages from false media and fabricated personalities has the potential to erode the relationship between governments and their citizens, provoke severe reactions throughout the world, and lead people to question the very reality they perceive.
If you enjoyed this post, please read:
– MAJ Chris Telley‘s post on the strategic threat presented by AI-enabled Information Operations in Influence at Machine Speed: The Coming of AI-Powered Propaganda.
– Our review of the Australian Broadcasting Corporation‘s two-part series on deepfakes and the Deep Video Portraits video from SIGGRAPH 2018 in the October 2018 edition of “The Queue” (see the first entry).
– Our review of Mad Scientist P.W. Singer and co-author Emerson T. Brooking’s book LikeWar — The Weaponization of Social Media.
– COL Stefan J. Banach‘s complementary posts on Virtual War – A Revolution in Human Affairs (Parts I and II).
– CNN‘s Special Report on how Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy.
… and crank up The Eurythmics‘ Would I Lie to You?
The following comment was submitted by Mad Sci contributor Dr. Nir Buras:
“This post is not about the loss of authenticity. As the Buddha said, the truth can remain hidden for only so long. It is therefore about the trustworthiness of machines and their products. If anything, it warns the purveyors of machine information and artificial intelligence that unless they establish legal frameworks to guarantee the quality and nature of their designs and products, they will lose public support for the further development of machines.
As Lincoln said, you can fool some of the people all the time and all the people some of the time, but you can’t fool all the people all the time. While the boys and girls (immature mentalities) play before us with their computing and AI, they are literally sawing off the branch they are sitting on. If they generate computers and AI that are untrustworthy, these machines and their products will simply fall off the face of the earth for their lack of utility. The sixth great extinction may be that of machines, not organisms.
Machine-projected deepfakes are illusions. To stop seeing them, all we need to do is turn off the screen or its equivalent. A deepfake android will be quickly exposed and disabled.
Unfortunately, it seems that for now we are under the spell of machines and gadgetry and that we are not taking responsibility for our destiny. The article reads as if computer screens are the primary interface of humanity with itself, which they are not, never have been, and ultimately cannot be. Which brings us back to Man-Machine Rules.
But the disruption and failures that inauthentic intelligence warfare will wreak may indicate that we are in a Dark Age already, with the disinformation, illusion, and lying foisted on us being an effect, not a cause. We are already deep in the future. What now?”
I must forcefully agree with the point of the post and note that the issue is how we adapt to these technologies…or not. To be blunt, the vast majority of the global population is extremely vulnerable to misinformation, and almost everyone is psychologically and educationally ill-equipped to handle these hyperfakes. When one cannot trust the information one encounters, what does one do? How does one watch, for instance, a political candidate’s speech or a debate between candidates and trust that what one is watching is actually what is being said? We are used to lies in hard-copy print and in electronic text, but lies rendered in the actual images and words of real or simulated humans are beyond our ken.
We will have to get ahead of the curve and establish some kind of authenticity standard for the data we encounter and distribute. We will need to understand that a campaign of visual and verbal deception will be waged against our country and citizens for the rest of time, apparently, and decide what to do about it. Will an act of information warfare committed by a person or group in some nation be considered an act of terrorism or war? Or just a crime? How do we respond? These decisions must be made and acted upon swiftly, and we need to ensure all Americans are aware of these threats. Adversaries both foreign and domestic will use these cheap and effective tools to nefarious ends, and their products will spread like viruses unless we are psychologically inoculated against their effects. It will be like the invention of the gun and the nuclear weapon, and like climate change – a new normal in which those who adapt will survive and those who do not will face a less desirable outcome.
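One plausible building block for such an authenticity standard is cryptographic signing of media at the point of capture or publication, so that any later alteration is detectable. The sketch below is a minimal illustration using Ed25519 signatures from the Python cryptography package; the placeholder media bytes and the key-handling workflow are assumptions for demonstration, not an established standard.

```python
# Sketch of media signing for authenticity: publisher signs, viewer verifies.
# Requires `pip install cryptography`; the media bytes below are a placeholder.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A camera, newsroom, or platform would hold the private key; the matching
# public key is published so anyone can verify.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw video or image bytes..."   # placeholder for a real file
signature = private_key.sign(media_bytes)          # distributed as metadata

# A viewer checks that the bytes are unaltered since the publisher signed them.
try:
    public_key.verify(signature, media_bytes)
    print("authentic: media matches the publisher's signature")
except InvalidSignature:
    print("warning: media was altered or is not from the claimed source")
```

Signing proves only origin and integrity, not truth; a signed deepfake is still a deepfake, but unsigned media or a broken signature could at least be treated with automatic suspicion.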
I agree that cultural norms have not kept pace with technological advancement concerning how one interacts with information. More worrisome, in my view, is that even if we can develop sophisticated methods to increase confidence in a source’s authenticity, people will not wish to do the work of evaluating and pruning their information diet. Even without the proliferation of deepfakes, it is already trivial to find sufficient examples to support any preferred narrative (the Chinese robber fallacy), and few desire to change their minds.
I’m curious, too, about the proper response to adversaries’ attempts to influence the idea space. Already, we struggle to define a proportional conventional military response to a cyber attack on our infrastructure. Determining a commensurate response to influence campaigns will be even more ambiguous.