183. Ethics, Morals, and Legal Implications

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report and addresses how the speed of technological innovation and convergence continues to outpace human governance. The U.S. Army must not only consider how best to employ these advances in modernizing the force, but also the concomitant ethical, moral, and legal implications their use may present in the Operational Environment (see links to the newly published TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, and the complete Mad Scientist Disruption and the Operational Environment Conference Final Report at the bottom of this post).]

Technological advancement and its subsequent employment often outpace moral, ethical, and legal standards. Governmental and regulatory bodies are then caught between technological progress and the evolution of social thinking. The Disruption and the Operational Environment Conference uncovered and explored several tension points that may challenge the Army in the future.

Space

Cubesats in LEO / Source: NASA

Space is one of the least explored domains in which the Army will operate; as such, we may encounter a host of associated ethical and legal dilemmas. In the course of warfare, if the Army or an adversary intentionally or inadvertently destroys critical space infrastructure – communications or GPS satellites – the ramifications to the economy, transportation, and emergency services would be dire and deadly. The Army will be challenged to consider how and where National Defense measures in space affect non-combatants and American civilians on the ground.

Per proclaimed Mad Scientists Dr. Moriba Jah and Dr. Diane Howard, there are ~500,000 objects orbiting the Earth that pose potential hazards to our space-based services. We are currently able to track less than one percent of them — only those the size of a smartphone / softball or larger. / Source: NASA Orbital Debris Office

International governing bodies may have to consider what responsibility space-faring entities – countries, universities, private companies – will have for mitigating orbital congestion caused by excessive launching and the aggressive exploitation of space. If the Army is judicious with its own footprint in space, it could reduce the risk of accidental collisions and unnecessary clutter and congestion. Cleaning up space debris is extremely expensive, and deconflicting active operations is essential. With each entity acting in its own self-interest, with limited binding law or governance and no enforcement, overuse of space could lead to a “tragedy of the commons” effect.1  The Army has the opportunity to align itself more closely with international partners to develop guidelines and protocols for space operations, to avoid potential conflicts, and to influence and shape future policy. Without this early intervention, the Army may face ethical and moral challenges regarding its addition of orbital objects to an already dangerously cluttered Low Earth Orbit. What will the Army be responsible for in democratized space? Will there be a moral or ethical limit on space launches?

Autonomy in Robotics

AFC’s Future Force Modernization Enterprise of Cross-Functional Teams, Acquisition Programs of Record, and Research and Development centers executed a radio rodeo with Industry throughout June 2019 to inform the Army of the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments. / Source: Army.mil

Robotic systems have become pervasive and normalized in military operations in the post-9/11 Operational Environment. However, the burgeoning field of autonomy in robotics, with the potential to supplant humans in time-critical decision-making, will bring about significant ethical, moral, and legal challenges that the Army, and the larger DoD, are already beginning to face. This issue will be exacerbated in the Operational Environment by increased utilization of and reliance on autonomy.

The increasing prevalence of autonomy will raise a number of important questions. At what point is it more ethical to allow a machine to make a decision that may save the lives of combatants or civilians? Where do fault, responsibility, and attribution lie when an autonomous system takes lives? Will defensive autonomous operations – air defense systems, active protection systems – be more ethically acceptable than offensive autonomy – airstrikes, fire missions? Can Artificial Intelligence/Machine Learning (AI/ML) make decisions in line with Army core values?

Deepfakes and AI-Generated Identities, Personas, and Content

Source: U.S. Air Force

A new era of Information Operations (IO) is emerging due to disruptive technologies such as deepfakes – videos constructed to make a person appear to say or do something that they never said or did – and AI Generative Adversarial Networks (GANs) that produce fully original faces, bodies, personas, and robust identities.2  Deepfakes and GANs alarm national security experts because they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. This threat is amplified as content such as news, sports reporting, and creative writing is likewise generated by AI/ML applications.
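To make the GAN mechanism concrete, here is a minimal sketch of an adversarial training loop in Python, assuming PyTorch is installed. It learns to generate samples from a simple one-dimensional distribution rather than faces; the architecture and hyperparameters are illustrative only, not any production deepfake pipeline.

```python
# Minimal GAN sketch (assumes PyTorch). A generator learns to mimic a simple
# 1-D Gaussian; face-generating GANs scale this same adversarial loop up to
# images. All names and hyperparameters are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT = 8  # size of the random noise vector fed to the generator

gen = nn.Sequential(nn.Linear(LATENT, 16), nn.ReLU(), nn.Linear(16, 1))
disc = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data: N(4, 1.5^2)
    fake = gen(torch.randn(64, LATENT))     # the generator's forgeries

    # Discriminator: learn to label real samples 1 and forgeries 0.
    d_loss = (bce(disc(real), torch.ones(64, 1)) +
              bce(disc(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: learn to make the discriminator call forgeries real.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean of 4.0.
print(gen(torch.randn(1000, LATENT)).mean().item())
```

The two networks improve only by competing with each other, which is why GAN outputs become progressively harder to distinguish from authentic material.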

This new era of IO has many ethical and moral implications for the Army. In the past, the Army utilized industrial-age and early information-age IO tools such as leaflets, open-air messaging, and cyber influence mechanisms to shape perceptions around the world. Today and moving forward in the Operational Environment, advances in technology raise ethical questions such as: Is it ethical or legal to use cyber or digital manipulations against populations of both U.S. allies and strategic competitors? Under what title or authority does the use of deepfakes and AI-generated images fall? How will the Army need to supplement existing policy to cover technologies that did not exist when that policy was written?

AI in Formations

With the introduction of decision-making AI, the Army will be faced with questions about trust, man-machine relationships, and transparency. Does AI in cyber require the same moral benchmark as lethal decision-making? Does transparency equal ethical AI? What allowance for error in AI is acceptable compared to humans? Where does the Army allow AI to make decisions – only in non-combat or non-lethal situations?

Commanders, stakeholders, and decision-makers will need to gain a level of comfort and trust with AI entities before a true man-machine relationship can emerge. The full integration of AI into training and combat exercises provides an opportunity to build trust early in the process, before decision-making becomes critical and life-threatening. AI often inherits unintentional or implicit bias from its programming and training data. Is bias-free AI possible? How can bias be checked within the programming, as in the sketch below? How can bias be managed once it is discovered, and how much will be allowed? Finally, does the bias-checking software itself contain bias? Bias can also be used in a positive way: through ML – using data from previous exercises, missions, doctrine, and the law of war – the Army could inculcate core values, ethos, and historically successful decision-making into AI.
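One way to begin answering the bias-checking question is shown below: a minimal sketch of a single fairness audit, demographic parity, applied to a hypothetical model's decisions. The data, group labels, and threshold are invented for illustration; real audits combine many complementary metrics.

```python
# Hedged sketch of one bias check -- demographic parity -- on a model's
# decisions. Data, group labels, and threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)      # two subpopulations, 0 and 1
# Stand-in for a model's scores; group 1 scores slightly higher on average,
# simulating a bias inherited from training data.
score = rng.normal(loc=0.5 + 0.1 * group, scale=0.2)
decision = score > 0.55                    # the model's yes/no output

rate_0 = decision[group == 0].mean()       # selection rate, group 0
rate_1 = decision[group == 1].mean()       # selection rate, group 1
print(f"selection rates: {rate_0:.2f} vs {rate_1:.2f}")
print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")
# How large a gap is tolerable -- and what to do about it -- is precisely
# the policy question posed above; the measurement itself is the easy part.
```

The sketch illustrates why the questions above matter: detecting a disparity is straightforward, but deciding how much disparity is acceptable, and in which missions, remains a human judgment.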

If existential threats to the United States increase, so does the pressure to use artificial and autonomous systems to gain or maintain overmatch and domain superiority. As the Army explores shifting additional authority to AI and autonomous systems, how will it address the second- and third-order ethical and legal ramifications? How does the Army reconcile its traditional values and ethical norms with disruptive technology that rapidly evolves?

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.
    • Ethics and the Future of War panel, facilitated by LTG Dubik (USA-Ret.) at the Mad Scientist Visualizing Multi Domain Battle 2030-2050 Conference, hosted at Georgetown University on 25-26 July 2017.

Just Published! TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, 7 October 2019, describes the conditions Army forces will face and establishes two distinct timeframes characterizing near-term advantages adversaries may have, as well as breakthroughs in technology and convergences in capabilities in the far term that will change the character of warfare. This pamphlet describes both timeframes in detail, accounting for all aspects across the Diplomatic, Information, Military, and Economic (DIME) spheres to allow Army forces to train to an accurate and realistic Operational Environment.


1 Munoz-Patchen, Chelsea, “Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty,” Chicago Journal of International Law, Vol. 19, No. 1, Art. 7, 1 Aug. 2018. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1741&context=cjil

2 Robitzski, Dan, “Amazing AI Generates Entire Bodies of People Who Don’t Exist,” Futurism.com, 30 Apr. 2019. https://futurism.com/ai-generates-entire-bodies-people-dont-exist

100. Prediction Machines: The Simple Economics of Artificial Intelligence

[Editor’s Note: Mad Scientist Laboratory is pleased to review Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Harvard Business Review Press, 17 April 2018.  While economics is not a perfect analog to warfare, this book will enhance our readers’ understanding of narrow Artificial Intelligence (AI) and its tremendous potential to change the character of future warfare by disrupting human-centered battlefield rhythms and facilitating combat at machine speed.]

This insightful book by economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb penetrates the hype often associated with AI by describing its base functions and roles and by providing an economic framework for its future applications.  Of particular interest is their perspective on AI entities as prediction machines. By simplifying and demystifying our understanding of AI and Machine Learning (ML) as prediction tools – akin to computers being nothing more than extremely powerful mathematics machines – the authors effectively describe the economic impacts that these prediction machines will have in the future.

The book addresses the three categories of data underpinning AI/ML (a brief illustrative sketch follows the list):

Training: This is the Big Data that trains the underlying AI algorithms in the first place. Generally, the bigger and more robust the data set, the more effective the AI’s predictive capability will be. Activities such as driving (with millions of iterations every day) and online commerce (with similarly large numbers of transactions) in defined environments lend themselves to efficient AI applications.

Input: This is the data that the AI will be taking in, either from purposeful, active injects or passively from the environment around it. Again, defined environments are far easier to cope with in this regard.

Feedback: This data comes either from manual inputs by users and developers or from the AI observing the effects of its previous applications. While often overlooked, this data is critical to iteratively enhancing and refining the AI’s performance, as well as to identifying biases and skewed decision-making. AI is not a static, one-off product; much like software, it must be continually updated, either through injects or learning.
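As promised above, here is a minimal sketch of how the three categories interact in practice, assuming scikit-learn is installed; the model, features, and outcomes are invented for illustration.

```python
# Illustrative sketch of the three data categories (assumes scikit-learn).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Training data: historical examples that fit the model in the first place.
X_train = rng.normal(size=(500, 3))
y_train = (X_train.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# 2. Input data: new observations the deployed model must predict on,
#    whether injected purposefully or sensed passively from the environment.
X_new = rng.normal(size=(10, 3))
predictions = model.predict(X_new)

# 3. Feedback data: observed outcomes folded back in, so the model is
#    periodically re-trained rather than treated as a static, one-off product.
y_observed = (X_new.sum(axis=1) > 0).astype(int)
X_train = np.vstack([X_train, X_new])
y_train = np.concatenate([y_train, y_observed])
model = LogisticRegression().fit(X_train, y_train)
```

The feedback step is the one the authors flag as most often overlooked: without it, the prediction machine never learns from its own mistakes.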

The authors explore narrow AI rather than a general, super, or “strong” AI.  Proclaimed Mad Scientist Paul Scharre and Michael Horowitz define narrow AI as follows:

“…their expertise is confined to a single domain, as opposed to hypothetical future “general” AI systems that could apply expertise more broadly. Machines – at least for now – lack the general-purpose reasoning that humans use to flexibly perform a range of tasks: making coffee one minute, then taking a phone call from work, then putting on a toddler’s shoes and putting her in the car for school.”  – from Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security, 19 June 2018

These narrow AI applications could have significant implications for U.S. Armed Forces personnel, force structure, operations, and processes. While economics is not a direct analogy to warfare, there are a number of aspects that can be distilled into the following ramifications:

Internet of Battle Things (IOBT) / Source: Alexander Kott, ARL

1. The battlefield is dynamic, with innumerable variables that have great potential to mischaracterize the ground truth when input data is limited, purposely subverted, or “dirty” (see the sketch after this list). Additionally, the relatively short duration of battles and battlefield activities means that AI would not receive the consistent, plentiful, and defined data it would receive in civilian transportation and economic applications.

2. The U.S. military will not be able to just “throw AI on it” and achieve effective results. The effective application of AI will require a disciplined and comprehensive review of all warfighting functions to determine where AI can best augment and enhance our current Soldier-centric capabilities (i.e., identify those workflows and processes – Intelligence and Targeting Cycles – that can be enhanced with the application of AI).  Leaders will also have to assess where AI can replace Soldiers in workflows and organizational architecture, and whether AI necessitates the discarding or major restructuring of either.  Note that Goldman Sachs is in the process of conducting this type of self-evaluation right now.

3. Due to its incredible “thirst” for Big Data, AI/ML will necessitate tradeoffs between security and privacy (the former likely being more important to the military) and between quantity and quality of data.

4. In the near- to mid-term future, AI/ML will not replace Leaders, Soldiers, and Analysts, but will allow them to focus on the big issues (i.e., “the fight”) by freeing them from resource-intensive (i.e., time and manpower), mundane, and rote data-crunching tasks – possibly facilitating the reallocation of manpower to growing need areas in data management, machine training, and AI translation.
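As noted in point 1 above, here is a small hedged sketch (hypothetical data, scikit-learn assumed) of how “dirty” input data undermines prediction quality: the same trained model is scored on clean test inputs and on deliberately corrupted ones.

```python
# Sketch: the same model scored on clean versus deliberately corrupted
# ("dirty") inputs. Data and noise levels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))
y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 2.0]) > 0).astype(int)

model = LogisticRegression().fit(X[:1500], y[:1500])
X_test, y_test = X[1500:], y[1500:]

# Simulate limited or purposely subverted sensing with heavy additive noise.
X_dirty = X_test + rng.normal(scale=3.0, size=X_test.shape)

print("clean accuracy:", model.score(X_test, y_test))   # high
print("dirty accuracy:", model.score(X_dirty, y_test))  # markedly lower
```

The gap between the two scores is the point: a prediction machine is only as reliable as the data feeding it, and the battlefield offers far less clean, defined data than civilian applications do.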

This book is a must-read for those interested in a down-to-earth assessment of the state of narrow AI and its potential applications to both economics and warfare.

If you enjoyed this review, please also read the following Mad Scientist Laboratory blog posts:

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

… and watch the following presentations from the Mad Scientist Robotics, AI, and Autonomy – Visioning Multi-Domain Battle in 2030-2050 Conference, 7-8 March 2017, co-sponsored by Georgia Tech Research Institute:

“Artificial Intelligence and Machine Learning: Potential Application in Defense Today and Tomorrow,” presented by Mr. Louis Maziotta, Armament Research, Development, and Engineering Center (ARDEC).

“Unmanned and Autonomous Systems,” presented by Paul Scharre, CNAS.