[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest iteration of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
1. “Why Business Leaders Need to Read More Science Fiction,” by Eliot Peper, Harvard Business Review, 24 July 2017.
There are no facts about the future, and the future is not a linear extrapolation from the present. We inherently understand this about the future, but Leaders oftentimes seek to quantify the unquantifiable. Eliot Peper opens his Harvard Business Review article with a story about one of the biggest urban problems in New York City at the end of the 19th century – it stank! Horses were producing 45,000 tons of manure a month. The urban planners of 1898 convened a conference to address this issue, but the experts failed to find a solution. More importantly, they could not envision a future just 14 years hence, when cars would outnumber horses. The urban problem of the future was not horse manure, but motor vehicle-generated pollution and road infrastructure. All the quantifiable data available to the 1898 urban planners extrapolated only to more humans, more horses, and more manure. Any expert suggesting that cars would soon supplant horses would likely have been laughed out of the conference hall. Flash forward a century, and the number one observation of the 9/11 Commission was that the Leaders and experts responsible for preventing such an attack lacked imagination. Storytelling and the science fiction genre allow Leaders to imagine beyond the numbers and broaden the assumptions needed to envision possible futures. Storytelling also helps Leaders and futurists envision the human context around emerging technologies. For more on Science Fiction and futuring, watch Dr. David Brin’s Mad Scientist presentation.
2. “Automated Valor,” by August Cole, Proceedings Magazine, U.S. Naval Institute, May 2018.
Fellow Mad Scientist August Cole’s short story, commissioned by the British Army Concepts Branch, explores the future of urban warfare from a refreshingly new, non-U.S. perspective. Sparking debate about force development and military operations in the 2030s, it portrays a vivid combat scenario in a world where autonomous weapons have proliferated. Mr. Cole’s story embraces a number of Future Operational Environment themes familiar to Mad Scientists, including combat leadership and team identity (Soldier and machine), human trust in AI decision-making, virtual and earned citizenship, deep fakes, small unit tactical operations, and multi-national Joint operations against an expansionist Chinese superpower. Visualizing the future fight from this British Commonwealth perspective provides a new twist in storytelling, describing what it will mean to be a Soldier on the battlefield in 2039, depending on machine teammates in the close fight.
3. Altered Carbon, Netflix series, 2018 (based upon a 2002 novel by Richard K. Morgan) — submitted by Mad Scientist Pat Filbert.
Set on Earth more than 300 years in the future, the show’s main character – or more to the point, his “cortical stack” (alien technology, reverse-engineered for human use, that records the sum total of an individual’s consciousness) – has been “imprisoned” for 250 years and is “released” back into the general population to solve a mysterious murder. In this future, AI exists in and fully interacts with both the physical and cyber domains. The show incorporates a number of aspects of trust in AI and technology. Such aspects enable a future where combat is fought by “stored soldiers” on distant worlds using advanced technological capabilities. Some humans have accepted AI projections as near-peers, so the trust factor comes up repeatedly between the humans who accept and embrace this technology and those who remain skeptical, like Will Smith’s character in I, Robot. The implications of AI becoming sentient and capable of violence are at the core of the morality argument against AI technology. Popular acceptance of AI possessing human-like qualities would definitely be a “leap forward” in more than just technology. For additional insights on this topic, watch Mad Scientist Linda MacDonald Glenn’s presentation.
4. “SOCOM’s Top 10 Technologies” Podcast, National Defense Magazine, National Defense Industrial Association, 3 May 2018 — submitted by Marie Murphy.
This podcast summarizes some of the primary emerging technologies that the United States Special Operations Command (SOCOM) and the Department of Defense are developing for military application. The list highlights near-term technologies, such as exoskeletons and commercial drones, as well as deep-future developments, such as quantum computing and China’s projected rise to dominance of the microelectronics market by 2030. Stew Magnuson, Editor-in-Chief of National Defense Magazine, notes that technology is nearing the limits of Moore’s Law. As a result, private, profit-driven industry is becoming a major driver of new scientific and technical advancement and will likely be responsible for future cutting-edge technologies. Given that many innovations the military uses or seeks to apply now stem from private sector innovation, what happens when Moore’s Law expires and technology moves too quickly for military research and adaptation?
5. “Researchers use ‘League of Legends’ to gain insights into mental models,” by Matt Shipman, Medical Xpress, 8 May 2018.
Researchers analyzed the decision-making habits of gamers who play League of Legends in order to identify and build mental models. Identifying these models will help researchers understand how they are built and, more importantly, how they change as players progress from novice to expert. The researchers analyzed survey responses based on the game and compared the differences among novices, journeymen, and experts. There were clear differences in how the mental models were organized by experience level, with experts making abstract connections and even showing signs of subnetworks. The researchers plan to use this information for better game design and the development and tailoring of training programs. The Army could leverage the potential of these mental models with neural feedback to accelerate Soldier learning, breaking the tyranny of the 10,000-hour rule of expertise. That said, this information could also prove to be a weapon in the hands of an adversary. What happens to game theory if the adversary knows how your mind works, what your proclivities are, and which courses of action you are likely to favor? What happens if the adversary can identify, based on your actions, who in your unit is a novice and who is an expert, and target them accordingly (i.e., focusing on defeating the experts first, while leaving the less experienced for later)? Accessing this information could provide an adversary with an advantage that may prove the difference between success and defeat. Learn more about cognitive enhancement in fellow Mad Scientist Dr. Amy Kruse’s podcast, Human 2.0, hosted by our colleagues at the Modern War Institute.
6. “Alexa and Siri Can Hear This Hidden Command. You Can’t,” by Craig S. Smith, The New York Times, 10 May 2018.
Researchers at the University of California, Berkeley, have exploited mainstream commercial Artificial Intelligence (AI) assistants (e.g., Siri, Alexa, Google Assistant) to secretly send them commands. The researchers embedded messages, undetectable to the human ear, in an existing audio track; when the track was played, the AI could be told to do any number of things, from transferring money to adding an item to a shopping list or opening a malicious website. The adversarial applications of this are immense and abundant. A nefarious actor could surreptitiously activate a device, mute it, and then send and receive information stored on it, or even use it to unlock doors, start cars, or call other devices. As the Army becomes more reliant on AI and automation, its vulnerability to Personalized Warfare attacks via these vectors will increase. Will the Army ever be able to use voice-activated devices that can be so easily compromised by an undetectable source?
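The attack works because of the gap between human hearing and machine feature extraction: a perturbation can be numerically tiny, and therefore inaudible, yet still shift what a speech-recognition model hears. The following is a toy sketch of that idea only, not the Berkeley team’s actual method (which optimizes a targeted perturbation against a specific speech model); it simply mixes a payload signal into host audio at an amplitude roughly 48 dB below the host tone.

```python
import numpy as np

def embed_perturbation(audio, perturbation, epsilon=0.002):
    """Mix a perturbation into an audio signal at an amplitude far
    below the host signal's level (toy illustration only).
    audio, perturbation: float arrays in [-1, 1], same length."""
    # Normalize the payload so its peak amplitude is exactly epsilon
    scaled = epsilon * perturbation / (np.max(np.abs(perturbation)) + 1e-12)
    # Keep the mixed signal in the valid audio range
    return np.clip(audio + scaled, -1.0, 1.0)

# One second of a 440 Hz "music" tone at a 16 kHz sample rate
sr = 16_000
t = np.arange(sr) / sr
host = 0.5 * np.sin(2 * np.pi * 440 * t)

# A stand-in for an adversarial command waveform
rng = np.random.default_rng(0)
payload = rng.standard_normal(sr)

mixed = embed_perturbation(host, payload)

# Peak level of the added perturbation relative to the host tone:
# about -48 dB, far below what a listener would notice over music,
# yet numerically present for a machine's feature extractor.
delta_db = 20 * np.log10(np.max(np.abs(mixed - host)) / np.max(np.abs(host)))
print(round(delta_db, 1))
```

The real attack replaces the random payload with a waveform crafted so the target model transcribes it as a chosen command; the point of the sketch is only that “inaudible” and “invisible to the model” are very different thresholds.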
7. “New Jelly-Like Neural Implant Eliminates the Need to Drill Through the Brain,” by Dan Robitzski, Futurism, 24 May 2018.
At a recent workshop, the Mad Scientist community was informed of the constraints associated with embedded neural man-machine interfaces – namely, that conventional electrode materials degrade relatively quickly via corrosion brought on by the human brain’s inflammatory immune response. This challenge may have been overcome by researchers at Carnegie Mellon University, funded by the Defense Advanced Research Projects Agency (DARPA), who have developed a “flexible, squishy silicon-based hydrogel that sticks to neural tissue, bringing non-invasive electrodes to the brain’s surface.” As a tissue analog, this hydrogel is less likely to trigger the brain’s natural defensive response, potentially revolutionizing the integration of prosthetics and medical devices with patients’ brains. As with most disruptive technologies, preliminary niche applications (in this case, medical) may jump initially to the edge, then ripple throughout society. The advent of hydrogel-based electrodes could accelerate the current transhumanism movement and facilitate direct brain-machine interfaces, as envisioned in Mr. Howard Simkin’s Sine Pari post. Projected forward, the possibility of an Internet of Everything and Everyone may prove a double-edged sword, facilitating both the direct upload of knowledge on demand and the direct hacking of individuals.
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: firstname.lastname@example.org — we may select it for inclusion in our next edition of “The Queue”!