157. The Democratization of Dual Use Technology

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s post, exploring how the democratization of current benign technologies may present our Leaders with potentially devastating dual use applications.  How do we address the emergence of these and other dual use technologies so that we do not fall victim to a failure of imagination?]

Dual use technology has yielded significant breakthroughs for military applications, including Intercontinental Ballistic Missiles and nuclear power. However, with the broad democratization of advanced technology, we may now see an increase in seemingly benign solutions that could be weaponized. Average citizens now have access to an expansive array of powerful technologies that would have been considered cutting-edge only a few years ago – e.g., artificial intelligence, machine learning, computer vision, and robotics. These are all technologies that have real and direct military implications, even if they were initially created for civilian or consumer use.

This problem extends across all military domains. For example, space is cluttered as there are close to 18,000 artificial objects orbiting Earth and 300,000 more pieces of “space junk” further congesting the domain. A single speck of paint, traveling at thousands of miles per hour, could cause serious damage to orbiting satellites.1
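The danger scales with the square of velocity. A back-of-envelope calculation (using assumed, illustrative values for the fleck’s mass and closing speed, not measured figures) shows why even paint becomes a projectile at orbital speeds:

```python
# Back-of-envelope sketch: kinetic energy of a hypervelocity debris impact.
# The mass and closing speed below are illustrative assumptions.

def kinetic_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """KE = 1/2 * m * v^2"""
    return 0.5 * mass_kg * speed_m_s ** 2

# A ~1 milligram paint fleck closing at ~10 km/s (plausible for crossing LEO orbits):
fleck_ke = kinetic_energy_joules(1e-6, 10_000)

# For scale, a thrown baseball (~145 g at ~40 m/s) carries about 116 J:
baseball_ke = kinetic_energy_joules(0.145, 40)

print(f"paint fleck: {fleck_ke:.0f} J, baseball: {baseball_ke:.0f} J")
```

Because energy grows with the square of speed, doubling the closing velocity quadruples the damage — the reason a speck of paint can crater a satellite window.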

Space broom using laser to de-orbit debris / Source: Wikimedia Commons, concept art by Fulvio314

Researchers at China’s Air Force Engineering University proposed building a laser-armed satellite to act as a “broom,” cleaning up rogue debris by burning off part of its mass, thus destabilizing its orbital path and sending it back into the Earth’s atmosphere, where it would quickly burn up.2 The scientists successfully proposed a solution to a potentially hazardous problem. On the surface this is true, and therein lies the dilemma. How might a nefarious actor use a space-based laser for purposes other than clearing unwanted refuse? Technology has no allegiance, but its users do, and the weaponization of such technology could be problematic.

Lionfish / Source: Wikimedia Commons

In the maritime domain, a non-profit called Robots in Service of the Environment (RSE) created their Guardian unmanned underwater vehicle (UUV), whose purpose is to control lionfish, an invasive and destructive species introduced into the Caribbean with no natural predators to control its propagation.3 The robot costs around $1,000 and includes an auto-pilot feature that can identify, stun, and capture the fish without the assistance of a human in the loop (though the option is there). Using computer vision, the UUV can discern the lionfish’s appearance from other species, deliver a stun charge, and subsequently suck the creature into a holding cell aboard the small vessel. The implication here is clear: an autonomous vehicle can positively identify something and deliver an intended effect without human intervention. In this case, the effect is a simple stun.

Crown of Thorns Starfish / Source: NOAA

Taking it a step further, a roboticist from Queensland University of Technology has developed a UUV that behaves similarly to the RSE bot. However, instead of stunning its prey (reef-destroying sea stars), it delivers a single injection with a lethal dose of bile salts. Dubbed the RangerBot, this UUV can identify its prey with a 99.4 percent success rate and autonomously deliver the kill shot.4 The identification is so accurate that it even ignored 3D-printed decoys and only engaged with its intended target. Another advantage of the RangerBot is that it can operate successfully when humans are unable to – at night, in tumultuous weather, and in high currents. While its targets are sea stars, this vehicle could be quickly and easily manipulated to positively identify a human target and deliver a lethal effect; identify strategically-placed communications cables in the sea and sever specific ones; or even seek out specific naval vessels and then affix explosive charges, undetected. An autonomous entity using computer vision could be taught to positively identify nearly any target and execute nearly any action as a consequence.
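The control logic underpinning such a system is unsettlingly simple. The sketch below is a hypothetical illustration of a confidence-gated identify-then-act loop; the classifier stand-in, labels, and threshold are assumptions for illustration, not the RangerBot’s actual code:

```python
from dataclasses import dataclass

# Hypothetical sketch of an identify-then-act loop. A real system would run a
# trained computer-vision model over camera frames; here each frame is reduced
# to a stand-in Detection with a label and an estimated confidence.

@dataclass
class Detection:
    label: str
    confidence: float  # classifier's estimated probability of the label

CONFIDENCE_THRESHOLD = 0.994  # mirrors the reported 99.4% identification rate

def should_engage(detection: Detection, target_label: str) -> bool:
    """Engage only on a high-confidence match of the intended target."""
    return (detection.label == target_label
            and detection.confidence >= CONFIDENCE_THRESHOLD)

def patrol(detections, target_label="crown_of_thorns"):
    """Process a stream of detections; return the action taken for each."""
    actions = []
    for det in detections:
        if should_engage(det, target_label):
            actions.append("inject")  # deliver the lethal effect autonomously
        else:
            actions.append("ignore")  # e.g., another species, or a low-confidence decoy
    return actions
```

Swapping the target label and the action string is all that separates a reef-cleaning patrol from the darker applications described above — which is precisely the dual-use dilemma.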

The technology to make this a reality exists today. It is being developed to clean our oceans, to de-clutter Low Earth Orbit, and to make our world a more livable place. It also has the potential to surprise with military applications. While we’re well aware of the RangerBot, is our Navy worried about an autonomous UUV patrolling the Great Barrier Reef? More importantly, can they detect one? Is there a cost-effective solution to defend against small, cheap, and expendable devices? Regulation can help control potentially dangerous materials – e.g., chemical, nuclear, radiological, or biological – but is it possible to regulate technology that poses no apparent threat? Does the Army have a strategy for dealing with benign technology that could be weaponized quickly? Technology will continue to advance at a rapid pace, and will compound as time goes on. What emergent dual use technology (or convergence of technologies) could surprise our future Army?

If you enjoyed this post, please see the following about other dual use technologies:

Some Thoughts on Futures Work (Part I), by Dr. Nick Marsella, informing us on how we can avoid being surprised by the future.

LikeWar — The Weaponization of Social Media, our review of proclaimed Mad Scientist P.W. Singer and co-author Emerson T. Brooking’s same-titled book.

Dead Deer, and Mad Cows, and Humans (?) … Oh My! by proclaimed Mad Scientists LtCol Jennifer Snow and Dr. James Giordano, and returning guest blogger Joseph DeFranco, addressing how medical research on Transmissible Spongiform Encephalopathies (TSEs) could be misapplied to create novel biological warfare agents.

What’s in a Touch? Lessons from the Edge of Electronic Interface, by Dr. Brian Holmes, addressing how prosthetics R&D could provide a sense of touch to realize more lifelike lethal autonomous weapons systems.


1 https://www.popsci.com/china-space-laser/

2 Ibid.

3 https://www.technologyreview.com/f/613623/meet-the-robot-submarine-that-acts-as-a-lionfish-terminator/

4 https://www.hakaimagazine.com/news/rangerbot-programmed-to-kill/

156. What is the Threshold? Assessing Kinetic Responses to Cyber-Attacks

[Editor’s Note: Mad Scientist Laboratory is pleased to publish the latest post by proclaimed Mad Scientist and returning blogger Marie Murphy.  In the future operational environment, armed conflict in the traditional sense may be less prevalent, while competition may be the driving force behind national oppositions. On the cusp between these two lies crisis.  In the following post, Ms. Murphy examines the threshold for responding to cyber-attacks with kinetic strikes during crises — Enjoy! (Note:  Some of the embedded links in this post are best accessed using non-DoD networks.)]

Cyber-attacks are quickly manifesting as a ubiquitous feature of modern warfare. However, the consequences of launching a cyber-attack are becoming more unpredictable and dependent on the individual case. Due to the rapid progression of cyber capabilities worldwide, codified laws, ethics, and norms have not yet caught up with every situation. As clarified by recent events between the Israelis and the Palestinians, the question surrounding the threshold for using kinetic weapons against a cyber-threat, or in response to a cyber-attack, appears to be when, not if, it is appropriate to cross domains. The U.S. Army needs leaders who are capable of operating in ill-defined spaces which necessitate a decision between engaging in physical violence in response to a cyber-attack and retaliating in the same domain.

There is a small window of opportunity, aptly called the “crisis phase,” to de-escalate rising competition-based tensions before the outbreak of all-out conflict in the cycle that oscillates between the two. Whereas appropriate actions are more easily determined in the competition and conflict phases, the crisis phase is a delicate balance of communication, interpretation, analysis, and assumption. Cyber-attacks in general are features of all three stages; however, cyber-attacks which are followed by kinetic responses may more commonly fall into the crisis phase because there is the possibility for escalation to physical violence – or not, if the violence serves as an effective deterrent or the initial attacker does not have the capabilities to escalate in the physical domain.

Source: IDF

On May 5, 2019, Israel responded to an attempted cyber-attack from Hamas by destroying the building which housed Hamas’ cyber operations.i There was concern in the international community that this action had changed the rules of the game by permitting a state to respond with kinetic force to a cyber-attack which had no direct physical ramifications. The significance of Israel’s decision lies in the fact that it is the first openly-acknowledged, immediate kinetic strike in response to a cyber-attack.ii The U.S. was the first state to use physical force in response to cyber activities, in a 2015 airstrike targeting Junaid Hussain, an ISIS hacker. However, that strike was planned months in advance, while the Israeli response to Hamas appeared to be in real-time.iii Appearances can be deceiving. There are several factors that lie under the initial shock of Israel’s retaliation:

– First, the kinetic response was not launched in the middle of a cyber-attack; it was initiated after the attack had already been neutralized.iv

– Second, it is probable that the Israelis had already collected intelligence on this target. The speed of the attack does not necessarily reflect the speed of Israeli ISR technology and analysis.

– Third, Israel’s response could be viewed as a psychological operation as well, reminding the Palestinians that one side possesses overwhelming capabilities and has the will to use them.v

– Finally, this attack must be viewed within the context of wider, ongoing conflict and the power dynamic already established between the parties.

This last point is crucial. While Israel’s response was an unprecedented, even historic step, it occurred within the ongoing continuum of Palestinian/Israeli kinetic strikes and counter-strikes in Gaza. It was not an isolated incident and is not necessarily indicative of future offensive cyber actions being met with physical violence on a global scale. In the multi-domain operations conducted by actors around the world, it is to be expected that domains will begin to be crossed in a single exchange. As the character of warfare changes to become more digitally integrated and more technologically advanced (leading to increased C4ISR capabilities), the context of actions will weigh more heavily in decision-making. This means that standard, “play-book” responses may not apply to every future situation. Dynamism in all phases of conflict, specifically the crisis phase, is critical to avoid misinterpretations with global repercussions.

Cyber-attacks occur on a daily basis worldwide, but very few bleed out into the physical domain or create outbreaks of new conflict.vi There is little evidence to support a claim that cyber-warfare operations alone are likely to escalate into physical violence; responses are usually proportional to and in the same domain as the provocation.vii However, when there is a background of preexisting physical violence, like between the Israelis and the Palestinians, the chance of cross-domain operations increases. Israel’s response did change the status quo to a certain degree as kinetic measures were rapidly deployed instead of a “hack-back” response.viii There is an argument for a slightly disproportional response as a deterrent and show of force, but knowing where to draw the line is also critical.ix Israel’s actions also helped to clarify an ethical quandary about the role of hackers. The debate as to whether they are combatants seems settled: hackers are a viable target if they attack a government or military.x This leads back to the original question about defining the threshold: when to use a kinetic response?

The complexity and relative anonymity of cyber-threats makes them harder to define, but generally speaking, the rules and norms for acceptable uses of cyber capabilities today are determined by the context of the conflict they’re deployed in, the power dynamic between the relevant parties, and the alternative or escalatory options available to each party involved. Every state also interprets cyber norms differently, in accordance with what best suits its strategic interests. The U.S. “prefers an effects- or consequences-based interpretation of ‘force’ or ‘armed attack’ with respect to cyber-attacks.” Essentially, the U.S. does not want to “draw boundaries too tight” to the point where its own rules begin to interfere with its own cyber operations.xi There have been international conversations about legislating cyberspace, especially for the purposes of defining warfare and conflict-inducing activities, but nothing has been codified or ratified.xii

Source:  U.S. Navy photo / MCS 3rd Class Erwin Miciano

The U.S. Department of Defense has long maintained that it reserves the right to use any response, including a kinetic one, against a cyber-attack. The target of the cyber-attack would most likely determine the response: an attack on the U.S. economy, government, or military could warrant both a digital and a kinetic response. The decision rests on the cost-benefit analysis of action versus inaction, on whether physical retaliation could spiral into the outbreak of violent conflict, and on whether the cyber-attack can be positively attributed.xiii

An example of near-war cyber tactics in which the crisis is closer to the competition phase is the EternalBlue attack on Baltimore City. Hackers used this malware to hold city computers and systems hostage. Although no official U.S. Government statement has been made, multiple press outlets, including The New York Times, allege that the program was initially an NSA asset that the organization lost control of in 2017, having utilized it for five years. The vulnerability has since been patched by Microsoft, but hundreds of thousands of computers are allegedly still at risk. This attack hits America at its most susceptible sector – its “aging digital infrastructure.”xiv It also demonstrates how the majority of cyber-attacks are not responded to with physical violence, either because the attack cannot be positively attributed or because the parties involved are unwilling or unable to escalate.

Cyber-attacks are becoming normalized facets of the competition, crisis, and conflict cycle. Whether or not using physical violence in response to a cyber-attack crosses legal or ethical lines depends on the context of the relationship between the attacker and the retaliator and prior conflict. With or without established norms and standardized accepted levels of response, cyber-attacks will continue to proliferate in all phases of military interactions. In a future of multi-domain operations, decisions about conflict escalation will likely depend on actions taken that are unseen by the public, so determining what is acceptable and what is escalatory is extremely difficult without understanding the full picture. But for now, there is a precedent for kinetic responses to be acceptable in the context of ongoing conflict. The threshold for using kinetic weapons does not appear to be if, but when, and just as importantly, when not to.

In the post above, Ms. Murphy shared her insights regarding one aspect of the future operational environment.  Mad Scientist wants to hear your thoughts on The Operational Environment: What Will Change and What Will Drive It – Today to 2035?  Learn more about our current crowdsourcing exercise here and get your submissions in NLT 1700 EDT, 15 July 2019!

If you enjoyed this post, please also see:

– CAPT L. R. Bremseth‘s Emerging Technologies as Threats in Non-Kinetic Engagements

– COL Stefan Banach‘s Virtual War – A Revolution in Human Affairs (Parts 1 & 2)

– Ms. Murphy‘s previous posts:

Trouble in Paradise: The Technological Upheaval of Modern Political and Economic Systems

The Final Frontier: Directed Energy Applications in Outer Space

Star Wars 2050

Virtual Nations: An Emerging Supranational Cyber Trend

Proclaimed Mad Scientist Marie Murphy is a rising senior at The College of William and Mary in Virginia, studying International Relations and Arabic. She is a regular contributor to the Mad Scientist Laboratory, interned at Headquarters, U.S. Army Training and Doctrine Command (TRADOC) with the Mad Scientist Initiative last summer, and has returned as a consultant this summer.  She was a Research Fellow for William and Mary’s Project on International Peace and Security.

Disclaimer:  The views expressed in this article do not imply endorsement by the U.S. Army Training and Doctrine Command, the U.S. Army, the Department of Defense, or the U.S. Government.  This piece is meant to be thought-provoking and does not reflect the current position of the U.S. Army.


i Borghard, Erica D., Jacquelyn Schneider. “Israel Responded to a Hamas Cyberattack with an Airstrike. That’s Not Such a Big Deal.” Washington Post, May 9, 2019. https://www.washingtonpost.com/politics/2019/05/09/israel-responded-hamas-cyberattack-with-an-airstrike-thats-big-deal/?utm_term=.f51d1c1c3da0

ii O’Flaherty, Kate. “Israel Retaliates to a Cyber-Attack With Immediate Physical Action in a World First.” Forbes, May 6, 2019. https://www.forbes.com/sites/kateoflahertyuk/2019/05/06/israel-retaliates-to-a-cyber-attack-with-immediate-physical-action-in-a-world-first/#627141e5f895

iii Newman, Lily Hay. “What Israel’s Strike on Hamas Hackers Means for Cyberwar.” Wired, May 6, 2019. https://www.wired.com/story/israel-hamas-cyberattack-air-strike-cyberwar/

iv Groll, Elias. “The Future Is Here, and It Features Hackers Getting Bombed.” Foreign Policy, May 6, 2019. https://foreignpolicy.com/2019/05/06/the-future-is-here-and-it-features-hackers-getting-bombed/

v O’Flaherty, Kate. “Israel Retaliates to a Cyber-Attack With Immediate Physical Action in a World First.” Forbes, May 6, 2019. https://www.forbes.com/sites/kateoflahertyuk/2019/05/06/israel-retaliates-to-a-cyber-attack-with-immediate-physical-action-in-a-world-first/#627141e5f895

vi Newman, Lily Hay. “What Israel’s Strike on Hamas Hackers Means for Cyberwar.” Wired, May 6, 2019. https://www.wired.com/story/israel-hamas-cyberattack-air-strike-cyberwar/

vii Borghard, Erica D., Jacquelyn Schneider. “Israel Responded to a Hamas Cyberattack with an Airstrike. That’s Not Such a Big Deal.” Washington Post, May 9, 2019. https://www.washingtonpost.com/politics/2019/05/09/israel-responded-hamas-cyberattack-with-an-airstrike-thats-big-deal/?utm_term=.f51d1c1c3da0

viii Cimpanu, Catalin. “In a First, Israel Responds to Hamas Hackers with an Airstrike.” ZDNet, May 5, 2019. https://www.zdnet.com/article/in-a-first-israel-responds-to-hamas-hackers-with-an-air-strike/

ix Baker, Stewart. “Four Principles to Guide the US Response to Cyberattacks.” Fifthdomain.com, February 7, 2019. https://www.fifthdomain.com/thought-leadership/2019/02/07/four-principles-to-guide-the-us-response-to-cyberattacks/

x Groll, Elias. “The Future Is Here, and It Features Hackers Getting Bombed.” Foreign Policy, May 6, 2019. https://foreignpolicy.com/2019/05/06/the-future-is-here-and-it-features-hackers-getting-bombed/

xi Waxman, Matthew C. “Cyber-Attacks and the Use of Force: Back to the Future of Article 2(4).” Yale Journal of International Law, Vol. 36, 2011. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1674565

xii O’Flaherty, Kate. “Israel Retaliates to a Cyber-Attack With Immediate Physical Action in a World First.” Forbes, May 6, 2019. https://www.forbes.com/sites/kateoflahertyuk/2019/05/06/israel-retaliates-to-a-cyber-attack-with-immediate-physical-action-in-a-world-first/#627141e5f895

xiii Alexander, David. “U.S. Reserves the Right to Meet Cyber Attack with Force.” Reuters, November 15, 2011. https://www.reuters.com/article/us-usa-defense-cybersecurity/u-s-reserves-right-to-meet-cyber-attack-with-force-idUSTRE7AF02Y20111116

xiv Perlroth, Nicole, Scott Shane. “In Baltimore and Beyond, a Stolen N.S.A. Tool Wreaks Havoc.” The New York Times, May 25, 2019. https://www.nytimes.com/2019/05/25/us/nsa-hacking-tool-baltimore.html

155. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our latest edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. “Boston Dynamics prepares to launch its first commercial robot: Spot,” by James Vincent, The Verge, 5 June 2019.

Day by Day Armageddon: Ghost Road, by J.L. Bourne, Gallery Books, 2016, 241 pages.

“Metalhead,” written by Charlie Brooker / directed by David Slade, Black Mirror, Netflix, Series 4, Episode 5.

Spot the Robot / Source: Boston Dynamics

Boston Dynamics, progenitor of a wide range of autonomous devices, is now poised to retail its first robotic system — during an interview with The Verge at Amazon’s re:MARS Conference, Boston Dynamics CEO Marc Raibert announced that “Spot the robot will go on sale later this year.” Spot is “a nimble robot that handles objects, climbs stairs, and will operate in offices, homes and outdoors.” Its “robot arm is a prime example of Boston Dynamics’ ambitious plans for Spot. Rather than selling the robot as a single-use tool, it’s positioning it as a ‘mobility platform’ that can be customized by users to complete a range of tasks.” Given its inherent versatility, Mr. Raibert likens Spot “to … the Android [phone] of Androids,” with the market developing a gamut of apps, facilitating new and innovative adaptations.

But what about the tactical applications of such a device in the Future OE? Two works of fiction explore this capability through storytelling, with disparate visions of the future….

In the fourth installment of the Day By Day Armageddon series, J. L. Bourne imagines one man’s trek across a ravaged America in search of the cure to a zombie apocalypse, and effectively explores the versatility of a Spot-like quadrupedal robot. Stumbling across the remains of a special operator, his protagonist Kil discovers the tablet and wrist band controls for a Ground Assault Reconnaissance & Mobilization Robot (GARMR) that, over the course of the novel, is ultimately anthropomorphized into his “dog” named Checkers. After mastering the device’s capabilities via a tutorial on the tablet, Kil programs and employs Checkers as a roving ISR platform, empowering him as a one-man Anti-Access/Area-Denial (A2/AD) “bubble.” Conducting both perimeter and long-range patrols, it tirelessly detects and alerts Kil to potential threats using its machine vision, audio, and video sensor feeds. Bourne’s versatile GARMR, however, is dependent on man-in-the-loop input, and though capable of executing pre-programmed autonomous operations, Checkers remains compliant with current DoD policy regarding autonomy.

Netflix’s Black Mirror episode “Metalhead,” however, imagines its eponymous quadruped robot as a lethal autonomous system run amok, relentlessly hunting down humanity’s remnants in a post-apocalyptic Britain. Operating in wolf packs, these man-out-of-the-loop devices sit passively until a human target is acquired, whom they then tirelessly track, run to ground, and kill. The episode riffs on the Skynet trope with another defense program gone rogue, this time an armed and deadly next-gen Spot. U.S. defense policy-makers would be wise to consider and plan accordingly for the coming commercialization and inevitable democratization of quadrupedal lethal autonomy – in light of recent drone attacks in Syria, Venezuela, and Yemen, a new killer genie is about to be unleashed from its bottle!

2. “Small Businesses Aren’t Rushing Into AI,” by Sara Castellanos and Agam Shah, The Wall Street Journal, 9 June 2019.

While sixty-five percent of firms with more than 5,000 workers are using Artificial Intelligence (AI) or planning to, only twenty-one percent of small businesses have similar plans. The upfront costs of AI tools and data architecture improvements, together with the scarcity of people capable of implementing AI tools, outstrip the resources of most small businesses. While the U.S. Army is not a small business, it faces many similar obstacles.

First, the U.S. Army is not “AI Ready.” While Google and Microsoft are working on AI tools that are not overly reliant on large data sets, today’s tools are trained on, and fueled by, access to big data. Key to enabling AI – and to the Army realizing the advantages of speed and improved decision-making – is access to our own data. The Army is a data-rich organization with information from previous training events, combat operations, and readiness status, but data rich does not mean data ready. Much of the Army’s data would be characterized as “dark data” — sitting in a silo, accessible only for limited single-use purposes. To get the Army AI Ready, we need to implement a Service-wide effort to break down the silos and import all of the Army’s data into an open-source architecture that is accessible by a range of AI tools.
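What “breaking down the silos” might mean in practice can be sketched in a few lines: pooling records from separate sources into one queryable store that any analytic tool can reach. The table and field names below are hypothetical illustrations, not an actual Army schema:

```python
import sqlite3

# Minimal sketch of silo consolidation. Each "silo" is a source feeding rows
# into one shared table; a real effort would use a governed, persistent store
# rather than an in-memory database.

def build_shared_store(silos):
    """silos: dict mapping source name -> list of (unit, metric, value) rows."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE readiness (source TEXT, unit TEXT, metric TEXT, value REAL)")
    for source, rows in silos.items():
        db.executemany(
            "INSERT INTO readiness VALUES (?, ?, ?, ?)",
            [(source, u, m, v) for (u, m, v) in rows],
        )
    return db

def average_metric(db, metric):
    """One query now spans data that previously sat in separate silos."""
    (avg,) = db.execute(
        "SELECT AVG(value) FROM readiness WHERE metric = ?", (metric,)
    ).fetchone()
    return avg
```

The point of the sketch is the last function: once the data shares one accessible schema, a single question can be asked across sources that were previously dark to each other.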

Second, the size and dispersed nature of the Army exacerbate the challenge of acquiring and retaining a high number of people capable of implementing AI tools. This AI-defined future requires the creation of new jobs and skillsets to overcome the coming skills mismatch. The Army is learning as it builds out a cyber-capable force, and these lessons are probably applicable to what we will need to do to support an AI-enabled force. At a minimum, we must address a new form of tech literacy to lead these future formations.

While Google and Microsoft work to reduce the reliance on big data for training AI and lessen the need for AI coding, the Army should begin to improve its “AI Readiness” by implementing new data strategies, exploring new skillsets, and improving force tech literacy.

3. “Have Strategists Drunk the ‘AI Race’ Kool-Aid?” by Zac Rogers, War on the Rocks, 4 June 2019.

“When technological change is driven more by hubris and ideology than by scientific understanding, the institutions that traditionally moderate these forces, such as democratic oversight and the rule of law, can be eroded in pursuit of the next false dawn.”

In this article, Dr. Zac Rogers cautions those who are willing to leap headfirst into the technological abyss. Rogers provides a countering narrative to balance out the tech entrepreneurs who are ready to go full steam ahead with the so-called AI race, breaking down the competition and producing a robust analysis of the unintended effects of the digital age. The full implications of advancing AI offer a sobering reality, replete with warnings of the potential breakdown of sociopolitical stability and of Western societies themselves. While countries continue to invest billions in AI development and innovation, Rogers reminds us that beneath the high-tech veneer of the 21st century we are still “human beings in social systems – to which all the usual caveats apply.” As asserted by Mr. Ian Sullivan, our world is driven largely by thoughts, ideals, and beliefs, despite the increasing global connectivity we experience every day. To forget that we are, as Dr. Rogers puts it, “always human” would be to lose touch with the very reality we are augmenting.

Rogers cautions against “idealized cybernetic systems” and implores those spearheading the foray into the technological unknown to take pause and remember what, ultimately, we stand to gain from these developments – and what we stand to lose.

4. “For the good of humanity, AI needs to know when it’s incompetent,” by Nicole Kobie, Wired, 15 June 2019.

Prowler.io is an AI platform for generalized decision-making for businesses aiming to augment human work with machine learning. Prowler.io considers four questions as it sets up the platform:  1) when does the AI know for certain that it’s right; 2) when does it know it’s wrong; 3) when does it know that it’s about to go wrong — timing is key, so humans in the loop have time to react; and 4) “how are we even sure the AI is asking the right questions.” For Prowler.io, keeping humans-in-the-loop is necessary due to the current lack of trust in machine-based decision-making.

The article points out that understanding why your AI-driven fund manager lost money is less useful than preventing bad buys in the first place. “Explainable AI is not enough, you have to have trusted AI – and for that to happen, you need to have human decision-making in the loop.” One failing of AI is that it doesn’t inherently understand its own competency. For example, if a human worker needs help, they can ask for it. The dilemma is how will “understanding of personal limitations [be] built into code?” A worst-case example of this was the two deadly 737 Max crashes — “In both crashes, the commonality was that the autopilot did not understand its own incompetence.”
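One common way to give a system a crude sense of its own incompetence is to gate autonomous action on the uncertainty of its output, deferring to a human when the predicted distribution is too flat. The sketch below is an illustrative assumption, not Prowler.io’s actual method; the entropy threshold and labels are hypothetical:

```python
import math

# Hedged sketch: confidence-gated deferral. The model acts autonomously only
# when its output distribution is decisive; otherwise it hands off to a human.
# Threshold and labels are illustrative, not any vendor's actual values.

def entropy_bits(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def decide(probs, labels, max_entropy_bits=0.5):
    """Return ('act', label) when confident, else ('defer_to_human', None)."""
    if entropy_bits(probs) > max_entropy_bits:
        return ("defer_to_human", None)  # the model "knows it might be wrong"
    best = max(range(len(probs)), key=probs.__getitem__)
    return ("act", labels[best])
```

For example, a sharply peaked prediction like `[0.97, 0.02, 0.01]` has low entropy and passes the gate, while a near-uniform `[0.4, 0.35, 0.25]` triggers a hand-off — a minimal mechanical answer to “when does it know it’s about to go wrong?”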

Correct and timely decisions are paramount for the Army and military applications of AI on the battlefield, now and in the future. “Even if there’s a human-in-the-loop that has oversight, how valuable is the human if they don’t understand what’s going on?” One disconnect is that the military and code writers equally have a blind spot in how they think about technological progress toward the future. That blind spot is thinking about AI largely disconnected from humans and the human brain. Rather than thinking about AI-enabled systems as connected to humans, we think about them as parallel processes. We talk about human-in-the-loop or human-on-the-loop largely in terms of the control over autonomous systems, rather than a comprehensive connection to and interaction with those systems. “Having the idea that a human always has to overrule or an algorithm always has to overrule is not the right strategy. It really has to be focusing on what the human is good at and what the algorithm is good at, and combining those two things together. And that will actually make decision-making better, fairer, and more transparent.”

5. “Garbage In, Garbage Out,” by Clare Garvie, Georgetown Law Center on Privacy & Technology, 16 May 2019.

A system relies on the information it is given. Feed it poor or questionable data and you will get poor or questionable results, hence the phrase, “garbage in, garbage out.” With facial recognition software becoming an increasingly common tool, law enforcement agencies are relying more heavily on a process that many experts admit is more art than science. Further, they’re stretching the bounds of the software when trying to identify potential suspects – feeding the system celebrity photos, composite sketches, and altered images in an attempt to “help” the system find the right person. However, studies by the National Institute of Standards and Technology (NIST) and Michigan State University concluded that composite sketches produced a positive match between 4.1 and 6.7 percent of the time. Despite these dubious results, some agencies are still following this process. As the Army becomes a more data-centric organization, it will be imperative to understand that “data” itself does not necessarily mean “good data.” If the Army feeds poor data into the system it will get poor results that may lead to unnecessary resource expenditure or even loss of life. Modernization will rely on accurate forecasting underpinned by a robust data set. How can the Army ensure it has the right data it needs? What processes need to be put in place now, to avoid potentially disastrous shortcuts?

6. “Deepfakes, social media, and the 2020 election,” by John Villasenor, Brookings TechTank, 3 June 2019.

“…deepfakes are the inevitable next step in attacking the truth.”

The point of deepfakes is not necessarily to convince people that public figures said something that they did not. They’re designed to introduce doubt and confusion into people’s minds, wreaking havoc on the information-saturated environment Americans are accustomed to operating in. As this type of digital deception becomes more refined, social media platforms will face greater technological and logistical challenges in identifying and removing false information, not to mention walking the fine legal line of subjective content monitoring.

BuzzFeed released a video illustrating the power of deepfakes, showing the image of former US President Barack Obama uttering words voiced by director and actor Jordan Peele / Source: Jordan Peele, BuzzFeed, and Monkeypaw Productions

Deepfakes are also of concern outside of the social media arena: our strategic competitors can use this tool to manipulate information, delegitimizing or mischaracterizing American military actions and operations in the views of local actors in conflict zones and the wider global population. In addition to the weaponization of information by state actors, any individual with basic technological skill and access to a computer and the internet can create a video that drastically alters the perceptions of millions. So, that leads us to wonder: what happens when we can no longer trust the information right in front of our eyes? How can we make decisions when we have to question all of our evidence?

7. “Space Exploration and the Age of the Anthropocosmos,” by Joi Ito, Wired, 30 May 2019.

Space is the final frontier – and it’s here. Joi Ito likens the current utopian, free-for-all stage of human/space interaction to the initial years of the publicly accessible internet. To encapsulate this era, Ito coins a new term, anthropocosmos, for this phase of human development in which people have a measurable impact on non-terrestrial environments. He cautions, however, that if expansion into and use of space is left unchecked, a “tragedy of the commons” will begin to arise; Moriba Jah has suggested that this phenomenon is already occurring in Earth’s orbital pathways, congested as they are with “space debris.” Ito continues his comparison by highlighting that, much like the internet, space in the future will be used for all sorts of purposes unimaginable to us today. He envisions a world where space becomes increasingly commercialized, monitored, and restricted by various actors trying to secure their own domains (such as governments) or turn a profit. Space can be a cooperative arena, but if it is not, people on Earth and beyond will feel the negative consequences of exploiting this newly accessible environment.

8. “Team of Teams: New Rules of Engagement for a Complex World,” by General Stanley McChrystal, Tantum Collins, David Silverman, and Chris Fussell, Penguin Random House, 12 May 2015.

This 2015 book on leadership, engagement, and teamwork, primarily authored by retired General and JSOC Commander Stanley McChrystal, addresses the tension between the way military teams (and teams in the workforce generally) are traditionally organized and led and the info/data-centric character of warfare emerging in the digital age.

The book highlights the need for conventional and special operations forces to transform their centralized and largely rigid ways of warfare into something more adaptive, fluid, and agile to counter the growing insurgency in Iraq in 2004. As forces in Iraq tackled a boiling sectarian civil war in hot spots like Fallujah, McChrystal’s Joint Special Operations Command morphed into a team of teams that efficiently leveraged intelligence experts, informant networks, interagency task forces, special operators, and a host of support personnel to kill, capture, and disrupt what had become al Qaeda in Iraq (AQI). This network, built to defeat a network, saw its efforts and metamorphosis culminate in the June 2006 airstrike on AQI leader and most wanted man in Iraq, Abu Musab al-Zarqawi.

The lessons gleaned from this book are applicable not only to special operations forces involved in manhunts, or even to military operations as a whole, but to teams across the globe in all areas of business, academia, and service. The rapid pace of changing circumstances, the sheer volume of big data, and the pervasiveness of hyper-connectivity mean that organizations must shift away from being executive-centric, hierarchical, and rigid and become cross-functional, openly communicating, mutually respecting teams of teams. The growing integration of artificial intelligence and machine learning in the workplace (which will likely include levels of, or assistance to, decision-making) will only heighten the need for cross collaboration and a better top-to-bottom understanding of how the team of teams functions as a whole.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future OE, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

154. Takeaways from the Mad Scientist Science Fiction Writing Contest 2019

[Editor’s Note: At the conclusion of another successful Mad Scientist Science Fiction Writing Contest, we received over 75 highly imaginative short stories. In addition to listing and linking to the contest’s winning submission (as well as those of our finalists), today’s post reports back on the major cross-cutting themes we distilled from your collective creativity and provides effective writing tips from Dr. David Brin, our contest’s senior judge and multiple award-winning science fiction author. Enjoy!]

The U.S. Army finds itself at a historical inflection point, where disparate yet related elements of an increasingly complex Operational Environment (OE) are converging, creating a situation where fast-moving trends are rapidly transforming all aspects of society and human life – including the character of warfare. It is important to take a creative approach to projecting and anticipating both the transformational and enduring trends that will shape that future. In this vein, the Army’s Mad Scientist Initiative sought out the creativity, unique ideas, and intellect of the nation (and beyond!) to describe a battlefield that does not yet exist.

Mad Scientist launched this science fiction writing contest with the following prompt:

On March 17th, 2030, the country of Donovia, after months of strained relations and covert hostilities, invades neighboring country Otso. Donovia is a wealthy nation that is a near-peer competitor to the United States. Like the United States, Donovia has invested heavily in disruptive technologies such as robotics, AI, autonomy, quantum information sciences, bio enhancements and gene editing, space-based weapons and communications, drones, nanotechnology, and directed energy weapons. The United States is a close ally of Otso and is compelled to intervene due to treaty obligations and historical ties. The United States is about to engage Donovia in its first battle with a near-peer competitor in over 80 years…

In this venture, we received over 75 submissions from an incredibly diverse audience – ranging from programmers to school teachers to career military officers. The submissions were remarkably rich, and Mad Scientist was able to glean a great many lessons from all of them.

Our contest winner and finalists’ submissions were published by our colleagues at the Modern War Institute at West Point:

Winner:

AN41, by COL Jasper Jeffers

Finalists:

Starfire, by Mary Madigan

Uncle Sam Takes a R.E.S.T., by Melanie Page

Jonathan Roper, Traveling Consultant, by Hal Wilson

A number of additional submissions were published by our colleagues at Small Wars Journal.

Five cross-cutting themes emerged from across the collective submissions:

1) Machine Speed Information / Processing / Assessment and Real-time Connectivity and Reaction – Actions and reactions occur on such a truncated timeline that human operators have difficulty comprehending or keeping up with these dynamic interactions.

2) Mixed or eXtended Reality (MR/XR) – Augmented, virtual, and synthetic realities are prevalent not only on the battlefield but throughout life. They enhance our warfighters’ situational understanding and play a critical part in how humans work and collaborate, not only with one another but with the machines with which they interact. The pervasiveness of this technology is analogous to the intuitive familiarity people today have with the countless data interface screens encountered in contemporary living.

3) Human-machine Interface and/or Interconnectivity – Humans and machines not only teamed, but melded or wholly integrated at certain points in all of the stories. There comes a certain point where humans and robots (or their respective actions) cannot be distinguished from one another.

4) Disruptive Technologies – There were many novel and disruptive technologies featured in the stories (e.g., anti-tank drones, exoskeletons, active camouflage, nanite capsules, micro-drones, space vehicles, quantum cryptography, and quadruped bots), as well as seemingly innocuous technologies manipulated and repurposed for nefarious activities (e.g., farming robots used for surveillance).

5) Artificial Intelligence – AI is prevalent across the stories, not only as an aid to decision-making and a driver of autonomy, but also as an overlay on the social and political environment. It is often presented as a double-edged sword: it informs, guides, and aids humans, but it is also prone to contextual misunderstandings and invites overreliance on its data.

The collective body of submissions presented us with a myriad of lessons. These will help the Army frame and ultimately shift the paradigm of how we think about the Future OE. We are extremely grateful to all of our participants and encourage everyone to continue to go boldly into the future!

While writing science fiction can be a fun and thoughtful endeavor, it can also prove challenging. Here are some great insights from Dr. David Brin (Nebula, Hugo, Locus, and Campbell award winning science fiction author) who served as our contest’s senior judge. Many thanks for the tips, Dr. Brin!

    • By far the most important pages are the first ones, when you hook the reader. And you need a great first paragraph to get them to read the first page. Starting with the Point of View’s name is certainly okay… even Heinlein did it now and then. But it would be better to start with an italicized internal thought, or an ironic observation or spoken words or actions. (See the example below)
    • Many readers are hard on writers who give info dumps from the narrator’s point of view. It’s better to reveal info as efficiently as possible via conversation, action, and the point of view character’s internal thoughts.
    • Many readers hate “repeatitis” where a word gets repeated a lot. English is so rich with synonyms and alternate ways of saying the same thing that you can usually avoid it, unless repetition is a deliberate poetical device. This stricture has no strong reason for it, and indeed, authors like Hemingway violated it a lot. But most professionals cater to this common reader irritation and hence, you’ll pick up a habit of minimizing even too many close repeats of “the.”
    • A more important habit to acquire, with stronger reason, is to feel uncomfortable with “was” and “had.”  Oh, sure — “had”, “were” and “was” are permitted and sometimes necessary, but always regrettable… each time should cause a wee bit of pain!  Because ‘had’ – and to a lesser extent “was” – often indicate the narrator, instead of the point of view character (POV), is telling instead of showing. If you look at my books, you’ll find I include lots of ideas and background of past events, but I pace them in with movement, action, conversation…
    • POV (point of view) is the hardest thing for a new writer to master. It gives your characters a “voice” and presence, and offers the reader a sense of vesting in the protagonist’s feelings and needs and will. This is all destroyed by authorial data-dumps that make you feel lectured-to by a narrator.
    • Prologues can be nice. But often they serve as crutches.

Example:

Lieutenant Jade Mahelona hated the noise and confusion of crowds, yet now she was stuck on crowd control in a busy tunnel-street of Deep Indianapolis while her carrier ship was in airdock for repairs. She’d joined Solar Defense Force to get away from Earth cities, and she’d loved every minute of her month of relative quiet on pirate patrol in the asteroids.

Try this instead:

Damn I hate crowd control duty. Over the tunnel noise and throng confusion of Deep Indianapolis, Jade could barely hear her sergeant growl in agreement, as if reading her mind.

“How long till the ship is fixed, lieutenant? I didn’t join SDF for this shit.”

Of course it was a coincidence – Mulcraft didn’t have her electric-empath sense… “Belay that,” she snapped. “We’ll be back out there on comfy pirate patrol in no time.”

Do you see how I dumped in far more information via internal (italicized) thoughts, sensory input, and conversation, without once using “had” or even “was”?  Now throw in some action… someone in the crowd throws something, and you’ve started rolling along, supplying lots of background info without an intruding narrator dump!

    • Find a dozen openings of novels you greatly admire and RE-TYPE THEM! Just re-reading them will not work.  I guarantee you will only understand how those authors did it if you retype the opening scene. And you’ll grasp that establishing POV early while minimizing data-dumping is the hardest thing for neos to learn, and absolutely essential to learn. No matter how wonderful your ideas are, they are useless unless you master how to hook.

Talk this over with your colleagues. Read aloud together and critique the first 5 paragraphs of lots of writers. Do nothing else in your workshop, till you all understand how to establish both the scene/situation and POV laced into conversation, action, and internal thoughts.

Alas, that’s all I have time for. But I hope it’s useful. Remember to read carefully my “advice article.” And above all keep at it! That’s the key to success.

If you enjoyed this post, please also see Ground Warfare in 2050: How It Might Look, by Dr. Alexander Kott…

… the following imaginative blog posts about warfare in the Future OE:

Demons in the Tall Grass, by Mr. Mike Matson

Biostorm: A Story of Future War, by Mr. Anthony DeCapite

Omega, by Mr. August Cole and Mr. Amir Husain…

… and previous submissions from our 2017 Science Fiction Writing Contest at Science Fiction: Visioning the Future of Warfare 2030-2050.

153. Critical Projection: Insights from China’s Science Fiction

‘We can talk quite glibly about “cognitive domains” – but understanding contexts, especially social and cultural, is vital to discern motivation or intent.’ – Air Marshal Stringer, Director General Joint Force Development

[Editor’s Note: Per the author of today’s post, Lt Col Dave Calder, British Army, “This post looks at how science fiction (SF) can provide some critical utility to militaries. My first article looked broadly at where it can help us see the world differently, making parts of it seem strange so as to highlight how it can be changed. For this piece, I wanted to push the boundaries of where I believe SF can give us an intellectual edge. By assuming its critical utility is universal, I have tried to use SF to gaze into the cultures of others to draw out insight that might shape, temper, or aid our decision making. By looking at China’s canon, I believe it is possible to get a sense of the scale of its ambition and the challenges to its rise as a global power, and to understand Beijing’s view of the West.” Enjoy Lt Col Calder’s post!]

Cixin Liu is a nine-time winner of the Galaxy Award, winner of the 2015 Hugo Award and the 2017 Locus Award as well as a nominee for the Nebula Award / Source: Wikimedia Commons

Today, Chinese SF enjoys a global audience, thanks largely to the popularity of Cixin Liu’s Hugo Award-winning The Three-Body Problem and the recent release of The Wandering Earth – China’s highest-grossing SF film. This exposure, while welcome, eclipses a rich and well-established tradition that is over 100 years old. Writers like Lu Xun, for example, used SF as a means of political commentary, painting a dark picture of the Late Qing period and colonialism. Scholars of Chinese SF draw clear links between early works, like Huangjiang Diaosou’s Tales of the Moon Colony, and the so-called ‘New-Wave’ of writing which has appeared over the past decade. One of their sharpest observations relates to how power can be derived from commoditising and rationing scientific knowledge. Where the colonial powers (and the Jesuits before them) effectively influenced and subjugated China through the control of specific technologies, China today may be practising learned behaviours.1 Its overt (and covert) hoarding of intellectual property and desire to dominate disruptive technologies like AI and quantum computing might be seen as an attempt to assure its rise to a position of global leadership. Whether intentional or not, the tools of colonial modernity are today being played back on the West to China’s potential benefit.

The Chinese Lunar Exploration Program (aka Chang’e Project), is an ongoing series of robotic Moon missions by the China National Space Administration. / Source: Wikipedia

The nature of ‘New-Wave’ SF very much reflects China’s complexity and its future aspirations. Hopes and fears are intertwined and framed by a sense of destiny. Over the past 12 years, the themes of China’s SF canon have moved away from the concerns of everyday life to far loftier, and literally celestial, aspirations. Cixin Liu’s short story The Sun of China2 captures a sense of a nation capable of realising its own goals rather than having its place in the world determined by others.3 This resonates with a national vision that has been expressed in terms of Xi Jinping’s ‘Chinese Dream’, the more philosophical aspects of his ‘Belt and Road’ initiative, and the Chang’e lunar programme. China casts itself as an agent in its own future and seems to have the ideological and financial capital to realise its visions.

Conversely, in SF the aspirational can never be divorced from the critical. Margaret Atwood once remarked that utopia and dystopia are “essentially flip-sides of the same form,” and that every utopia has a dystopia concealed within it.4 There is a growing realisation that the ‘Chinese Dream’ is distinct from its American predecessor and arouses a “nightmarish unconscious of a dream that does not necessarily belong to an individual but rather to a collective entity.”5 This increasing sense of alarm is starkly reflected in China’s SF: Zhang Ran’s Ether, for example, is a blunt attack on the increasing ubiquity of surveillance in China and a clear protest against censorship.6 Equally, Han Song’s My Fatherland Does Not Dream is deeply critical of the Chinese Government’s inability to recognise where the limits of central government control and privacy lie, and suggests its aspirations will fail to materialise unless such concerns are addressed.7 That such subversive notions are permitted publication in what we take to be an oppressive society, keen to minimise dissent, is interesting in itself.

The relationship between the Chinese SF scene and the state is complex. Chinese authors enjoy relative immunity from censorship, as SF is seen as a means to address China’s creativity deficit. China’s top SF magazine, Science Fiction World, is widely available, and many of the genre’s literary conventions are state-sponsored. Party officials wish to move China from being a state that replicates the world’s technology to one that invents it.8 At the same time, SF’s comparative obscurity as a literary genre means it lacks the popularity that would have it classed as ‘protest literature’. This willingness to balance subversion against economic reward arguably highlights the risk China is willing to take to mitigate deep concerns over its ability to meet the aspirations of the ‘Chinese Dream’. It also demonstrates the premium China places on innovation as an engine of future growth.

‘New-Wave’ SF also provides the West with a mirror which can be used to look back at ourselves through Chinese eyes. In works like Han Song’s 2066: Red Star over America, the U.S. (and by implication the West) appears morally intransigent and unwilling to compromise on issues that might affect how power and influence are wielded in the international system. We come across as protective, status quo-seeking powers. While we share common values, the natures of our imagined utopias are fundamentally different: ours is driven by individual rather than societal happiness. This point of divergence represents a key factor that must be addressed to avoid future confrontation and conflict.

In conclusion, China’s SF has the potential to yield interesting insights into the social forces that might drive external behaviours. Like any state’s, China’s history remains relevant in framing its actions today. SF gives us a lens through which to appreciate such dynamics and compare them to what is happening now: commoditising scientific and technical knowledge and using it to exert influence is a learned rather than an invented technique. This does not excuse China’s apparent disregard for intellectual property norms, but it helps explain it and puts our reactions in context. In today’s SF we find a complex mix of aspirational themes and subversive undercurrents; both help us understand China a little more. The ‘New-Wave’ allows us to weigh China’s ‘destiny’ in one hand and its challenges in the other. Lastly, SF’s objectivity allows us to stare back at ourselves through the lens of Chinese literature. Knowing how we are seen should influence our decision making just as much as our characterisation of China does. It also allows us to compare ourselves to one another at a philosophical level. While we share some common values, we are currently moving down two different paths towards fundamentally different conceptions of utopia. At some point we must re-converge if we are to avoid confrontation and conflict. That said, a clash is not inevitable. Understanding one another is the first step to accommodation, and SF can play a role here, complementing our more traditional methods of assessing strategic culture and deciphering Beijing’s intentions. It will not provide all the answers, but it might help find some.

If you enjoyed this post, please also read:

Lt Col Calder‘s first post, Science Fiction’s Hidden Codes

– Proclaimed Mad Scientist Elsa Kania‘s post, Quantum Surprise on the Battlefield? as well as China’s Drive for Innovation Dominance, drawn from her presentation at the Mad Scientist Bio Convergence and Soldier 2050 Conference at SRI International, Menlo Park, 8-9 March 2018.  Her podcast from this event, China’s Quest for Enhanced Military Technology, is hosted by Modern War Institute.

Ms. Cindy Hurst‘s post, A Closer Look at China’s Strategies for Innovation: Questioning True Intent

Lt Col David Calder is currently studying on the UK’s Advanced Command and Staff Course and is a Chief of Defence Staff Scholar. He is also undertaking a Masters by Research in Defence Studies with King’s College London, exploring how science fiction can be used to change military perspectives. He is an armoured engineer and has deployed to Iraq, Afghanistan, and Estonia in recent years. (Twitter @drjcalder81)


1 Nathaniel Isaacson. “Science Fiction for the Nation: Tales of the Moon Colony and the Birth of Modern Chinese Science Fiction.” Science Fiction Studies 40, no. 1 (2013): 33-35.

2 This is a tale of a lowly window cleaner that is assigned to maintaining a solar shield (which is designed to reduce the effects of global warming) but succeeds in transforming it into a solar sail, enabling him to explore the stars.

3 Liu, Cixin. “Chinese Science Fiction and Chinese Reality.” Clarkesworld, no. 110 (2015).

4 Atwood, Margaret, interview by David Barr Kirtley. Geek’s Guide to the Galaxy Podcast #94 (December 2013).

5 Song, Mingwei, and Theodore Huters, The Reincarnated Giant: An Anthology of Twenty-First-Century Chinese Science Fiction. New York: Columbia University Press, 2018. Introduction.

6 Zhang, Ran. “Ether.” Translated by Carmen Yiling Yan and Ken Liu. Clarkesworld, no. 100 (Jan 2015). Here omnipotent surveillance in a near-future China leads to language being reduced to a limited, utilitarian vocabulary and to the development of a complex and subversive method of communication using messages written on people’s hands.

7 Rojas, Carlos and Andrea Bachner. The Oxford Handbook of Modern Chinese Literatures. Oxford: Oxford University Press, 2016. 551-553. My Fatherland was banned until 2007. It depicts a population of an authoritarian state which has been using drugs to optimise production and erase memories of past atrocities. The state’s sleepwalking population are unknowingly manipulated into delivering the state’s economic revolution but do not share in the benefits this advancement generates for an ‘un-sleeping’ elite.

8 Neil Gaiman. “The Genre of Pornography, or the Pornography of Genre.” In The View from the Cheap Seats: Selected Nonfiction. London: HarperCollins, 2017.