111. AI Enhancing EI in War

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s) decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]

Humans are susceptible to cognitive biases, and in the high-stress environment of wartime decision-making these biases sometimes produce catastrophic outcomes. Artificial Intelligence (AI) offers the possibility of mitigating this susceptibility in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and, as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like communicating with a staff officer, and that is built on human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.

Mission Command in the Army is the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his/her subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and mass the effects of combat power.ii

The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically, at much quicker speeds in order to seize the initiative in war.iii The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.

AI Staff Officer

To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem-solving domains than in those that require broad understanding.iv This means that, for now, humans continue to retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi AI will continue to advance rapidly in its near-term niche of narrow domains, becoming a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.

The battalion is a good organization with which to visualize this framework. A machine-learning software system could be connected to the different staff systems to analyze the data produced by each section as it executes its warfighting functions. This machine-learning software system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander’s assessments. Over time, this EI-based machine-learning software system could rank the quality of the staff officers’ judgements. The commander could then weigh the value of the staff officers’ assessments against each officer’s track record of reliability and the raw data provided by the staff sections’ systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm to assess the “believability” of its employees’ judgements before making high-stakes, and sometimes time-critical, international financial decisions.vii Such a multi-layered machine-learning system applied to the battalion would also include an assessment of the commander’s own reliability, to maximize objectivity.
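
To make the concept concrete, here is a minimal sketch of such a believability weighting, assuming only a per-officer record of past judgements scored against observed outcomes. All names, numbers, and the scoring rule are illustrative, not an actual Bridgewater or Army algorithm:

```python
# Illustrative sketch only: weight staff assessments by each officer's
# historical accuracy ("believability"). All names, numbers, and the
# scoring rule are hypothetical, not an actual fielded system.

def believability(history):
    """Fraction of past judgements that matched observed outcomes."""
    return sum(history) / len(history) if history else 0.5  # 0.5 = no record yet

def weighted_estimate(assessments):
    """Combine probability estimates, weighted by each officer's believability."""
    total = sum(believability(history) for _, history in assessments)
    return sum(p * believability(history) for p, history in assessments) / total

# Each tuple: (officer's probability estimate, past judgements scored 1/0)
staff = [
    (0.80, [1, 1, 1, 0, 1]),  # S2: strong track record
    (0.30, [0, 1, 0, 0]),     # S3: weaker track record
    (0.60, []),               # new officer: no history yet
]
print(f"Believability-weighted estimate: {weighted_estimate(staff):.2f}")  # ~0.65
```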

Observation by the AI of multiple iterations of human behavioral patterns during simulations and real-world operations would improve its accuracy and enhance the trust between this type of AI system and its users. Commanders’ EI skills would be put front and center for scrutiny, and could improve drastically under the weight of knowing, with quantifiable evidence and at any given moment, the cognitive-bias shortcomings of the staff. This assisted decision-making AI framework would consequently also reinforce the commander’s intuition and decisions, as it elevates the level of objectivity in decision-making.

Human-Compatibility

The capacity to understand information broadly and to conduct unsupervised learning remains a distinctly human virtue for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander, since it supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer implies also providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.

Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and prevent conclusions that result in an AI turning on humanity. These three concepts are: 1) altruism towards the human race (and not itself), so that the machine’s only objective is the realization of human values; 2) uncertainty, making it pursue only human objectives without being told at the outset exactly what those are; and 3) learning by observation, exposing it to everything and all types of humans so that it can infer those objectives.ix

Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.

Making AI Work

The potential opportunities and pitfalls are abundant for the employment of AI in decision-making. Apart from the obvious danger of this type of system being hacked, the possibility of the AI’s machine-learning algorithms harboring biased coding inconsistent with the values of the unit employing it is real.

The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust and integrate AI assessments into their natural decision-making process and make them part of their intuitive calculus. In this way, they will have ready access to objective analyses of their units’ potential biases, enhancing their own EI, and be able to overcome them to accomplish their mission.

If you enjoyed this post, please also read:

An Appropriate Level of Trust…

Takeaways Learned about the Future of the AI Battlefield

Bias and Machine Learning

Man-Machine Rules

MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.


i United States, Department of the Army. “ADRP 5-0: The Operations Process.” Headquarters, Department of the Army, 2012, pp. 1-1.

ii Ibid. pp. 1-1 – 1-3.

iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries | English, Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.

iv Trent, Stoney, and Scott Lathrop. “A Primer on Artificial Intelligence for Military Leaders.” Small Wars Journal, 2018, smallwarsjournal.com/index.php/jrnl/art/primer-artificial-intelligence-military-leaders.

v Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2019.

vi Evans, Hayley. “Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings.” Lawfare, 2018, https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings.

vii Dalio, Ray. Principles. Simon and Schuster, 2017.

viii Trent and Lathrop.

ix Russell, Stuart. “Three Principles for Creating Safer AI.” TED: Ideas Worth Spreading, 2017, www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai.

100. Prediction Machines: The Simple Economics of Artificial Intelligence

[Editor’s Note: Mad Scientist Laboratory is pleased to review Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Harvard Business Review Press, 17 April 2018.  While economics is not a perfect analog to warfare, this book will enhance our readers’ understanding of narrow Artificial Intelligence (AI) and its tremendous potential to change the character of future warfare by disrupting human-centered battlefield rhythms and facilitating combat at machine speed.]

This insightful book by economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb penetrates the hype often associated with AI by describing its base functions and roles and providing the economic framework for its future applications. Of particular interest is their perspective on AI entities as prediction machines. In simplifying and demystifying our understanding of AI and Machine Learning (ML) as prediction tools, akin to computers being nothing more than extremely powerful mathematics machines, the authors effectively describe the economic impacts that these prediction machines will have in the future.

The book addresses the three categories of data underpinning AI / ML (a brief illustrative sketch tying all three together follows the list):

Training: This is the Big Data that trains the underlying AI algorithms in the first place. Generally, the bigger and more robust the data set is, the more effective the AI’s predictive capability will be. Activities such as driving (with millions of iterations every day) and online commerce (with similarly large numbers of transactions) in defined environments lend themselves to efficient AI applications.

Input: This is the data that the AI will be taking in, either from purposeful, active injects or passively from the environment around it. Again, defined environments are far easier to cope with in this regard.

Feedback: This data comes from either manual inputs by users and developers or from the AI understanding what effects took place from its previous applications. While often overlooked, this data is critical to iteratively enhancing and refining the AI’s performance, as well as identifying biases and skewed decision-making. AI is not a static, one-off product; much like software, it must be continually updated, either through injects or learning.
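
A minimal sketch of how the three categories interact in one loop, using a deliberately trivial model (an exponential moving average) and invented numbers:

```python
# Illustrative sketch: the three data categories in one tiny "prediction
# machine". The model (an exponential moving average) is deliberately
# trivial; the point is where each category of data enters the loop.

class PredictionMachine:
    def __init__(self, alpha=0.3):
        self.alpha = alpha     # how strongly feedback revises the model
        self.estimate = None

    def train(self, training_data):        # 1. Training data (historical)
        self.estimate = sum(training_data) / len(training_data)

    def predict(self, observed_input):     # 2. Input data (live)
        # A richer model would condition on the input; this one ignores it.
        return self.estimate

    def feedback(self, actual_outcome):    # 3. Feedback data (after the fact)
        self.estimate += self.alpha * (actual_outcome - self.estimate)

pm = PredictionMachine()
pm.train([10, 12, 11, 13])                   # train on historical demand
print(pm.predict(observed_input=None))       # -> 11.5
pm.feedback(actual_outcome=15)               # refine with what actually happened
print(pm.predict(observed_input=None))       # -> 12.55
```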

The authors explore narrow AI rather than a general, super, or “strong” AI.  Proclaimed Mad Scientist Paul Scharre and Michael Horowitz define narrow AI as follows:

“their expertise is confined to a single domain, as opposed to hypothetical future “general” AI systems that could apply expertise more broadly. Machines – at least for now – lack the general-purpose reasoning that humans use to flexibly perform a range of tasks: making coffee one minute, then taking a phone call from work, then putting on a toddler’s shoes and putting her in the car for school.”  – from Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security, 19 June 2018

These narrow AI applications could have significant implications for U.S. Armed Forces personnel, force structure, operations, and processes. While economics is not a direct analogy to warfare, there are a number of aspects that can be distilled into the following ramifications:

Internet of Battle Things (IOBT) / Source: Alexander Kott, ARL

1. The battlefield is dynamic and has innumerable variables with great potential to mischaracterize the ground truth through limited, purposely subverted, or “dirty” input data. Additionally, the relatively short duration of battles and battlefield activities means that AI would not receive the consistent, plentiful, and defined data it would receive in civilian transportation and economic applications.

2. The U.S. military will not be able to just “throw AI on it” and achieve effective results. The effective application of AI will require a disciplined and comprehensive review of all warfighting functions to determine where AI can best augment and enhance our current Soldier-centric capabilities (i.e., identify those workflows and processes – Intelligence and Targeting Cycles – that can be enhanced with the application of AI). Leaders will also have to assess where AI can replace Soldiers in workflows and organizational architecture, and whether AI necessitates the discarding or major restructuring of either. Note that Goldman Sachs is in the process of conducting this type of self-evaluation right now.

3. Due to its incredible “thirst” for Big Data, AI/ML will necessitate tradeoffs between security and privacy (the former likely being more important to the military) and quantity and quality of data.


4. In the near to mid-term future, AI/ML will not replace Leaders, Soldiers, and Analysts, but will allow them to focus on the big issues (i.e., “the fight”) by freeing them from the resource-intensive (i.e., time and manpower) mundane and rote tasks of data crunching, possibly facilitating the reallocation of manpower to growing need areas in data management, machine training, and AI translation.

This book is a must-read for those interested in obtaining a down-to-earth assessment on the state of narrow AI and its potential applications to both economics and warfare.

If you enjoyed this review, please also read the following Mad Scientist Laboratory blog posts:

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

… and watch the following presentations from the Mad Scientist Robotics, AI, and Autonomy – Visioning Multi-Domain Battle in 2030-2050 Conference, 7-8 March 2017, co-sponsored by Georgia Tech Research Institute:

“Artificial Intelligence and Machine Learning: Potential Application in Defense Today and Tomorrow,” presented by Mr. Louis Maziotta, Armament Research, Development, and Engineering Center (ARDEC).

Unmanned and Autonomous Systems, presented by Paul Scharre, CNAS.

99. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our October edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. Table of Disruptive Technologies, by Tech Foresight, Imperial College London, www.imperialtechforesight.com, January 2018.

This innovative Table of Disruptive Technologies, derived from Chemistry’s familiar Periodic Table, lists 100 technological innovations organized into a two-dimensional table, with the x-axis representing Time (Sooner to Later) and the y-axis representing the Potential for Socio-Economic Disruption (Low to High). These technologies are organized into three time horizons, with Current (Horizon 1 – Green) happening now, Near Future (Horizon 2 – Yellow) occurring in 10-20 years, and Distant Future (Horizon 3 – Fuchsia) occurring 20+ years out. The outermost band of Ghost Technologies (Grey) represents fringe science and technologies that, while highly improbable, still remain within the realm of the possible and thus are “worth watching.” In addition to the time horizons, each of these technologies has been assigned a number corresponding to an example listed to the right of the Table; and a two letter code corresponding to five broad themes: DE – Data Ecosystems, SP – Smart Planet, EA – Extreme Automation, HA – Human Augmentation, and MI – Human Machine Interactions. Regular readers of the Mad Scientist Laboratory will find many of these Potential Game Changers familiar, albeit assigned to far more conservative time horizons (e.g., our community of action believes Swarm Robotics [Sr, number 38], Quantum Safe Cryptography [Qs, number 77], and Battlefield Robots [Br, number 84] will all be upon us well before 2038). That said, we find this Table to be a useful tool in exploring future possibilities and will add it to our “basic load” of disruptive technology references, joining the annual Gartner Hype Cycle of Emerging Technologies.

2. The inventor of the web says the internet is broken — but he has a plan to fix it, by Elizabeth Schulze, CNBC.com, 5 November 2018.

Tim Berners-Lee, who created the World Wide Web in 1989, has said recently that he thinks his original vision is being distorted due to concerns about privacy, access, and fake news. Berners-Lee envisioned the web as a place that is free, open, and constructive, and for most of his invention’s life, he believed that to be true. However, he now feels that the web has undergone a change for the worse. He believes the World Wide Web should be a protected basic human right. In order to accomplish this, he has created the “Contract for the Web” which contains his principles to protect web access and privacy. Berners-Lee’s “World Wide Web Foundation estimates that 1.5 billion… people live in a country with no comprehensive law on personal data protection. The contract requires governments to treat privacy as a fundamental human right, an idea increasingly backed by big tech leaders like Apple CEO Tim Cook and Microsoft CEO Satya Nadella.” This idea for a free and open web stands in contrast to recent news about China and Russia potentially branching off from the main internet and forming their own filtered and censored Alternative Internet, or Alternet, with tightly controlled access. Berners-Lee’s contract aims at unifying all users under one over-arching rule of law, but without China and Russia, we will likely have a splintered and non-uniform Web that sees only an increase in fake news, manipulation, privacy concerns, and lack of access.

3. Chinese ‘gait recognition’ tech IDs people by how they walk, Associated Press News, 6 November 2018.

Source: AP

The Future Operational Environment’s “Era of Contested Equality” (i.e., 2035 through 2050) will be marked by significant breakthroughs in technology and convergences, resulting in revolutionary changes. Under President Xi Jinping‘s leadership, China is becoming a major engine of global innovation, second only to the United States. China’s national strategy of “innovation-driven development” places innovation at the forefront of economic and military development.

Early innovation successes in artificial intelligence, sensors, robotics, and biometrics are being fielded to better control the Chinese population. Many of these capabilities will be tech-inserted into Chinese command and control functions and intelligence, security, and reconnaissance networks, redefining the timeless competition of finders vs. hiders. These breakthroughs represent homegrown Chinese innovation and are taking place now.

A recent example is the employment of ‘gait recognition’ software capable of identifying people by how they walk. Watrix, a Chinese technology startup, is selling the software to police services in Beijing and Shanghai as a further push to develop an artificial intelligence and data-driven surveillance network. Watrix reports the capability can identify people up to 165 feet away without a view of their faces. This capability also fills the sensor gap left by facial recognition software, which requires high-resolution imagery.
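
As a rough illustration of the underlying principle (not the proprietary Watrix algorithm), gait recognition reduces a walking sequence to a numeric signature (cadence, stride length, body sway, and the like) and matches it against enrolled signatures:

```python
# Toy illustration of gait matching, not the Watrix system: each person is
# reduced to a feature vector (here cadence, stride length, torso sway) and
# an unknown track is matched to the nearest enrolled signature.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

enrolled = {
    "subject_A": [1.9, 0.74, 0.12],   # steps/sec, stride (m), sway (rad)
    "subject_B": [1.6, 0.81, 0.07],
}

def identify(track, threshold=0.15):
    name, d = min(((n, distance(track, sig)) for n, sig in enrolled.items()),
                  key=lambda pair: pair[1])
    return name if d < threshold else "unknown"

print(identify([1.88, 0.76, 0.11]))   # -> subject_A
print(identify([2.40, 0.50, 0.30]))   # -> unknown
```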

4. VR Boosts Workouts by Unexpectedly Reducing Pain During Exercise, by Emma Betuel, Inverse.com, 4 October 2018.

Tricking the brain can be fairly low tech, according to Dr. Alexis Mauger, senior lecturer at the University of Kent’s School of Sport and Exercise Sciences. Research has shown that students who participated in a Virtual Reality-based exercise were able to withstand pain a full minute longer on average than their control group counterparts. Dr. Mauger hypothesized that this may be due to a lack of visual cues normally associated with strenuous exercise. In the specific research, participants were asked to hold a dumbbell out in front of them for as long as they could. The VR group didn’t see their forearms shake with exhaustion or their hands flush with color as blood rushed to their aching biceps; that is, they didn’t see the stimuli that could be perceived as signals of pain and exertion. These results could have a significant and direct impact on Army training. While experiencing pain and learning through negative outcomes is essential in certain training scenarios, VR could be used to train Soldiers past the point where they would normally be physically able to train. This could not only save the Army time and money, but also boost exercises by capturing every bit of effectiveness normally left at the margins.

5. How Teaching AI to be Curious Helps Machines Learn for Themselves, by James Vincent, The Verge, 1 November 2018, Reviewed by Ms. Marie Murphy.

Presently, there are two predominant techniques for machine learning: machines analyzing large sets of data from which they extrapolate patterns and apply them to analogous scenarios; and giving the machine a dynamic environment in which it is rewarded for positive outcomes and penalized for negative ones, facilitating learning through trial and error.

In programmed curiosity, the machine is innately motivated to “explore for exploration’s sake.” The example used to illustrate the concept of learning through curiosity details a machine-learning project by the research lab OpenAI, in which an agent learns to win a video game where the reward is not only staying alive but also exploring all areas of the level. This method has yielded better results than the data-heavy and time-consuming traditional methods. Applying this methodology to machine learning in military training scenarios would reduce the human labor required to identify and program every possible outcome, because the computer finds new ones on its own, reducing the time between development and implementation of a program. This approach is also more “humanistic,” as it allows the computer leeway to explore its virtual surroundings and discover new avenues like people do. By training AI in this way, the military can more realistically model various scenarios for training and strategic purposes.
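
A minimal sketch of the underlying idea, assuming the common formulation in which curiosity is the error of the agent’s own forward model; the one-dimensional “level” and the visit-count stand-in are invented for illustration:

```python
# Illustrative curiosity-driven exploration: the intrinsic reward is high in
# states the agent cannot yet predict well. A simple visit count stands in
# for the learned forward model used in real curiosity-driven systems.

states = list(range(10))          # a one-dimensional "game level"
visits = {s: 0 for s in states}   # well-visited states are well-predicted

def intrinsic_reward(state):
    return 1.0 / (1 + visits[state])   # novelty pays; familiarity does not

position = 0
for _ in range(50):
    # Move to whichever neighbor currently promises more curiosity reward.
    options = [max(position - 1, 0), min(position + 1, len(states) - 1)]
    position = max(options, key=intrinsic_reward)
    visits[position] += 1

print(visits)   # visits spread across the level rather than clustering at 0
```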

6. EU digital tax plan flounders as states ready national moves, by Francesco Guarascio, Reuters.com, 6 November 2018.

A European Union plan to tax internet firms like Google and Facebook on their turnover is on the verge of collapsing. As the plan must be agreed to by all 28 EU countries (a tall order given that it is opposed by a number of them), the EU is announcing national initiatives instead. The proposal calls for EU states to charge a 3 percent levy on the digital revenues of large firms. The plan aims at changing tax rules that have let some of the world’s biggest companies pay unusually low rates of corporate tax on their earnings. These firms, mostly from the U.S., are accused of averting tax by routing their profits to the bloc’s low-tax states.

This is not just about taxation. This is about the issue of citizenship itself. What does it mean for virtual nations – cyber communities which have gained power, influence, or capital comparable to that of a nation-state – that fall outside the traditional rule of law? The legal framework of virtual citizenship turns upside down and globalizes the logic of the special economic zone — a geographical space of exception, where the usual rules of state and finance do not apply. How will these entities be taxed or declare revenue?

Currently, for the online world, geography and physical infrastructure remain crucial to control and management. What happens when it is democratized, virtualized, and then control and management change? Google and Facebook still build data centers in Scandinavia and the Pacific Northwest, which are close to cheap hydroelectric power and natural cooling. When looked at in terms of who the citizen is, population movement, and stateless populations, what will the “new normal” be?

7. Designer babies aren’t futuristic. They’re already here, by Laura Hercher, MIT Technology Review, 22 October 2018.

In this article, subtitled “Are we designing inequality into our genes?” Ms. Hercher echoes what proclaimed Mad Scientist Hank Greely briefed at the Bio Convergence and Soldier 2050 Conference last March – advances in human genetics will be applied initially in order to have healthier babies via the genetic sequencing and the testing of embryos. Embryo editing will enable us to tailor / modify embryos to design traits, initially to treat diseases, but this will also provide us with the tools to enhance humans genetically. Ms. Hercher warns us that “If the use of pre-implantation testing grows and we don’t address the disparities in who can access these treatments, we risk creating a society where some groups, because of culture or geography or poverty, bear a greater burden of genetic disease.” A valid concern, to be sure — but who will ensure fair access to these treatments? A new Government agency? And if so, how long after ceding this authority to the Government would we see politically-expedient changes enacted, justified for the betterment of society and potentially perverting its original intent? The possibilities need not be as horrific as Aldous Huxley’s Brave New World, populated with castes of Deltas and Epsilon-minus semi-morons. It is not inconceivable that enhanced combat performance via genetic manipulation could follow, resulting in a permanent caste of warfighters, distinct genetically from their fellow citizens, with the associated societal implications.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

83. A Primer on Humanity: Iron Man versus Terminator

[Editor’s Note: Mad Scientist Laboratory is pleased to present a post by guest blogger MAJ(P) Kelly McCoy, U.S. Army Training and Doctrine Command (TRADOC), with a theme familiar to anyone who has ever debated super powers in a schoolyard during recess. Yet despite its familiarity, it remains a serious question as we seek to modernize the U.S. Army in light of our pacing threat adversaries. The question of “human-in-the-loop” versus “human-out-of-the-loop” is an extremely timely and cogent question.]

Iron Man versus Terminator — who would win? It is a debate that challenges morality, firepower, ingenuity, and pop culture prowess. But when it comes down to brass tacks, who would really win and what does that say about us?

Mad Scientist maintains that:

  • Today: Mano a mano, Iron Man’s human ingenuity, grit, and irrationality would carry the day; however…
  • In the Future: Facing the entire Skynet distributed neural net, Iron Man’s human-in-the-loop would be overwhelmed by a coordinated, swarming attack of Terminators.
Soldier in Iron Man-like exoskeleton prototype suit

Iron Man is the super-empowered human utilizing Artificial Intelligence (AI) — Just A Rather Very Intelligent System or JARVIS — to augment the synthesizing of data and robotics to increase strength, speed, and lethality. Iron Man utilizes autonomous systems, but maintains a human-in-the-loop for lethality decisions. Conversely, the Terminator is pure machine – with AI at the helm for all decision-making. Terminators are built for specific purposes – and for this case let’s assume these robotic soldiers are designed specifically for urban warfare. Finally, strength, lethality, cyber vulnerabilities, and modularity of capabilities between Iron Man and Terminator are assumed to be relatively equal to each other.

Up front, Iron Man is constrained by individual human bias, retention and application of training, and physical and mental fatigue. Heading into the fight, the human behind a super powered robotic enhancing suit will make decisions based on their own biases. How does one respond to too much information or not enough? How do they react when needing to respond while wrestling with the details of what needs to be remembered at the right time and space? Compounding this is the retention and application of the individual human’s training leading up to this point. Have they successfully undergone enough repetitions to mitigate their biases and arrive at the best solution and response? Finally, our most human vulnerability is physical and mental fatigue. Without adding in psychoactive drugs, how would you respond to taking the Graduate Record Examinations (GRE) while simultaneously winning a combatives match? How long would you last before you are mentally and physically exhausted?

Terminator / Source: http://pngimg.com/download/29789

What the human faces is a Terminator who removes bias and optimizes responses through machine learning, access to a network of knowledge, options, and capabilities, and relentless speed in processing information. How much better would a Soldier be with their biases removed and the ability to apply the full library of lessons learned? To process the available information that contextualizes the environment, without cognitive overload? To arrive at the optimum decision, based on the outcomes of thousands of scenarios?

Iron Man arrives to this fight with irrationality and ingenuity; the ability to quickly adapt to complex problems and environments; tenacity; and morality that is uniquely human. Given this, the Terminator is faced with an adversary who can not only adapt, but also persevere with utter unpredictability. And here the Terminator’s weaknesses come to light. Their algorithms are matched to an environment – but environments can change and render algorithms obsolete. Their energy sources are finite – where humans can run on empty, Terminators power off. Finally, there are always glitches and vulnerabilities. Autonomous systems depend on the environment they are coded for – if you know how to corrupt the environment, you can corrupt the system.

Ultimately the question of Iron Man versus Terminator is a question of time and human value and worth. In time, it is likely that the Iron Man will fall in the first fight. However, the victor is never determined in the first fight, but the last. If you believe in human ingenuity, grit, irrationality, and consideration, the last fight is the true test of what it means to be human.

Note:  Nothing in this blog is intended as an implied or explicit endorsement of the “Iron Man” or “Terminator” franchises on the part of the Department of Defense, the U.S. Army, or TRADOC.

Kelly McCoy is a U.S. Army strategist officer and a member of the Military Leadership Circle. A blessed husband and proud father, when he has time he is either brewing beer, roasting coffee, or maintaining his blog (Drink Beer; Kill War at: https://medium.com/@DrnkBrKllWr). The views expressed in this article belong to the author alone and do not represent the Department of Defense.

82. Bias and Machine Learning

[Editor’s Note:  Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]

“We thought that we had the answers, it was the questions we had wrong” – Bono, U2

Source: www.vpnsrus.com via flickr

As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is exactly that: an ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi, Boston University, recently described algorithms as not being capable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
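
Dr. Bolukbasi’s published work on word embeddings illustrates the point: supposedly neutral occupation words, in embeddings trained on news text, carry a measurable gender direction. The sketch below reproduces the shape of that analysis with fabricated three-dimensional vectors; the real analysis used embeddings with hundreds of dimensions:

```python
# Toy demonstration of the embedding bias Bolukbasi et al. measured in real
# word2vec vectors. These 3-d vectors are fabricated for illustration.
vectors = {
    "he":       [ 1.0, 0.1, 0.3],
    "she":      [-1.0, 0.1, 0.3],
    "engineer": [ 0.6, 0.8, 0.1],
    "nurse":    [-0.7, 0.7, 0.2],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The gender direction: the difference between "he" and "she".
g = [h - s for h, s in zip(vectors["he"], vectors["she"])]

for word in ("engineer", "nurse"):
    score = dot(vectors[word], g)   # > 0 leans "he", < 0 leans "she"
    print(f"{word:9s} gender projection: {score:+.2f}")
```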

Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).

It is difficult to make predictions, particularly about the future.” This quote has been attributed to anyone ranging from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.

The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.

1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that is discriminatory towards any specific gender(s), ethnicity(ies), race(s), or even age(s)?

2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?

3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?

4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?

For additional information impacting on this important discussion, please see the following:

An Appropriate Level of Trust… blog post

Ethical Dilemmas of Future Warfare blog post

Ethics and the Future of War panel discussion video

30. Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

(Editor’s Note: The Mad Scientist Laboratory is pleased to present a companion piece to last Thursday’s post that addressed human-machine networks and their cross-domain effects. On 10 January 2018, CAPT George Galdorisi, (U.S. Navy-Ret.), presented his Mad Scientist Speaker Series topic entitled, Designing Unmanned Systems For the Multi-Domain Battle. CAPT Galdorisi has distilled the essence of this well-received presentation into the following guest blog post — enjoy!)

The U.S. military no longer enjoys technological superiority over a wide-range of potential adversaries. In the words of former Deputy Secretary of Defense Robert Work, “Our forces face the very real possibility of arriving in a future combat theater and finding themselves facing an arsenal of advanced, disruptive technologies that could turn our previous technological advantage on its head — where our armed forces no longer have uncontested theater access or unfettered operational freedom of maneuver.”

SILENT RUIN: Written by Brian David Johnson • Creative Direction: Sandy Winkelman
Illustration: Don Hudson & Kinsun Lo • Brought to you by Army Cyber Institute at West Point

The Army Cyber Institute’s graphic novel, Silent Ruin, posits one such scenario.

In order to regain this technological edge, the Department of Defense has crafted a Third Offset Strategy and a Defense Innovation Initiative, designed to help the U.S. military regain technological superiority. At the core of this effort are artificial intelligence, machine learning, and unmanned systems.

Much has been written about efforts to make U.S. military unmanned systems more autonomous in order to fully leverage their capabilities. But unlike some potential adversaries, the United States is not likely to deploy fully autonomous machines. An operator will be in the loop. If this is the case, how might the U.S. military best exploit the promise offered by unmanned systems?

One answer may well be to provide “augmented intelligence” to the warfighter. Fielding unmanned vehicles that enable operators to teach these systems how to perform desired tasks is the first important step in this effort. This will lead directly to the kind of human-machine collaboration that transitions the “artificial” nature of what the autonomous system does into an “augmented” capability for the military operator.

But this generalized explanation raises the question — what would augmented intelligence look like to the military operator? What tasks does the warfighter want the unmanned system to perform to enable the Soldier, Sailor, Airman, or Marine in the fight to make the right decision quickly in stressful situations where mission accomplishment must be balanced against unintended consequences?

Consider the case of an unmanned system conducting a surveillance mission. Today, an operator receives streaming video of what the unmanned system sees, and in the case of aerial unmanned systems, often in real-time. But this requires the operator to stare at this video for hours on end (the endurance of the U.S. Navy’s MQ-4C Triton is thirty hours). This concept of operations is an enormous drain on human resources, often with little to show for the effort.

Using basic augmented intelligence techniques, the MQ-4C can be trained to deliver only that which is interesting and useful to its human partner. For example, a Triton operating at cruise speed, flying between San Francisco and Tokyo, would cover the five-thousand-plus miles in approximately fifteen hours. Rather than send fifteen hours of generally uninteresting video as it flies over mostly empty ocean, the MQ-4C could be trained to only send the video of each ship it encounters, thereby greatly compressing human workload.

Taken to the next level, the Triton could do its own analysis of each contact to flag it for possible interest. For example, if a ship is operating in a known shipping lane, has filed a journey plan with the proper maritime authorities, and is providing an AIS (Automatic Identification System) signal; it is likely worthy of only passing attention by the operator, and the Triton will flag it accordingly. If, however, it does not meet these criteria (say, for example, the vessel makes an abrupt course change that takes it well outside normal shipping channels), the operator would be alerted immediately.
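
A hypothetical sketch of such triage logic, with all criteria, thresholds, and data fields invented for illustration:

```python
# Hypothetical sketch of the triage rules described above; the criteria,
# thresholds, and data fields are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Contact:
    in_shipping_lane: bool
    journey_plan_filed: bool
    ais_active: bool
    course_change_deg: float   # largest recent course change

def triage(c: Contact) -> str:
    if c.course_change_deg > 45 and not c.in_shipping_lane:
        return "ALERT: anomalous maneuver, notify the operator immediately"
    if c.in_shipping_lane and c.journey_plan_filed and c.ais_active:
        return "routine: flag for passing attention only"
    return "review: queue video clip for the operator"

print(triage(Contact(True, True, True, 5.0)))      # routine merchant traffic
print(triage(Contact(False, False, False, 80.0)))  # abrupt turn off the lanes
```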

For lethal military unmanned systems, the bar is higher for what the operator must know before authorizing the unmanned warfighting partner to fire a weapon — or as is often the case — recommending that higher authority authorize lethal action. Take the case of military operators managing an ongoing series of unmanned aerial system (UAS) flights that have been watching a terrorist and waiting for higher authority to give the authorization to take out the threat using an air-to-surface missile fired from that UAS.

Using augmented intelligence, the operator can train the unmanned aerial system to anticipate what questions higher authority will ask prior to giving the authorization to fire, and to provide, if not a point solution, at least a percentage probability or confidence level for questions such as the following (a minimal sketch of such a confidence rollup appears after the list):

• What is the level of confidence that this person is the intended target?

• What is this confidence based on?

– Facial recognition

– Voice recognition

– Pattern of behavior

– Association with certain individuals

– Proximity of known family members

– Proximity of known cohorts

• What is the potential for collateral damage to:

– Family members

– Known cohorts

– Unknown persons

• What are the potential impacts of waiting versus striking now?
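
A minimal sketch of rolling such evidence factors into a single confidence level; the factor names, weights, and scores are invented, and a fielded system would need calibrated probabilities rather than ad hoc weights:

```python
# Minimal sketch of rolling the evidence factors above into a confidence
# level for the operator. Factor names, weights, and scores are invented.
target_id_evidence = {
    "facial_recognition":       0.92,
    "voice_recognition":        0.75,
    "pattern_of_behavior":      0.80,
    "association_with_cohorts": 0.65,
}
weights = {
    "facial_recognition":       0.40,
    "voice_recognition":        0.20,
    "pattern_of_behavior":      0.25,
    "association_with_cohorts": 0.15,
}

confidence = sum(target_id_evidence[k] * weights[k] for k in weights)
print(f"Target identity confidence: {confidence:.0%}")        # -> 82%
for factor, score in target_id_evidence.items():
    print(f"  based on {factor}: {score:.0%}")
```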

These considerations represent only a subset of the kinds of issues operators must train their lethally armed unmanned systems to deal with. Far from ceding lethal authority to unmanned systems, providing these systems with augmented intelligence and leveraging their ability to operate inside the enemy’s OODA loop, as well as ours, enables these systems to free the human operator from having to make real-time (and often on-the-fly) decisions in the stress of combat.

Designing this kind of augmented intelligence into unmanned systems from the outset will ultimately enable them to be effective partners for their military operators.

If you enjoyed this post, please note the following Mad Scientist events:

– Our friends at Small Wars Journal are publishing the first five selected Soldier 2050 Call for Ideas papers during the week of 19-23 February 2018 (one each day) on their Mad Scientist page.

– Mark on your calendar the next Mad Scientist Speaker Series, entitled “A Mad Scientist’s Lab for Bio-Convergence Research,” presented by Drs. Cooke and Mezzacappa, from RDECOM-ARDEC Tactical Behavior Research Laboratory (TBRL), scheduled for 27 February 2018 at 1300-1400 EST.

– Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Bio Convergence and Soldier 2050 Conference with SRI International at Menlo Park, California, on 08-09 March 2018. This conference will be live-streamed; click here to watch the proceedings, starting at 0840 PST / 1140 EST on 08 March 2018.

CAPT George Galdorisi, (U.S. Navy–Ret.), is Director for Strategic Assessments and Technical Futures at SPAWAR Systems Center Pacific. Prior to joining SSC Pacific, he completed a 30-year career as a naval aviator, culminating in fourteen years of consecutive service as executive officer, commanding officer, commodore, and chief of staff.

27. Sine Pari

(Editor’s Note: Mad Scientist Laboratory is pleased to present the following guest blog post by Mr. Howard R. Simkin, envisioning Army recruiting, Mid-Twenty First Century. The Army must anticipate how (or if) it will recruit augmented humans into the Future Force. This post was originally submitted in response to our Soldier 2050 Call for Ideas, addressing how humanity’s next evolutionary leap, its co-evolution with Artificial Intelligence (AI) and becoming part of the network, will change the character of war. This is the theme for our Bio Convergence and Soldier 2050 Conference — learn more about this event at the bottom of this post.)

///////////Personal Blog, Master Sergeant Grant Robertson, Recruiting District Seven…

This morning I had an in-person interview with a prospective recruit – Roberto Preciado. For the benefit of those of you who haven’t had one yet, I offer the following.

Roberto arrived punctually, a good sign. Before he entered I said, “RECOM, activate full spectrum recording and analysis.”

The disembodied voice of the Recruiting Command AI replied, “Roger.”

“Let him in.” I stood up to better assess him as he stepped through the doorway. He had dark hair and eyes, and was of slender build and medium height. My corneal implants allowed me to assess his general medical condition. He was in surprisingly good shape for his age.

We went through the usual formalities before getting down to business.

Roberto sat down gingerly, “I..um..I wanted to check out becoming part of Special Operations.”

“You came to the right place,” I replied. “So why Special Operations?”

“My uncle was in Special Operations during ‘the Big One.’ Next to my dad, he is the coolest person I ever met, so…” He searched for words, “So I decided to come and check it out.”

“Okay.” I began. “This isn’t your uncle’s Special Operations. Since the Big One, we’ve made quite a few” – I caught myself before saying changes – “upgrades.” I paused, “Roberto, before we take the enhanced reality tour, I’d like to know what augments you have had – if any.”

“Sure.” Roberto paused for a moment, “Let’s see… I’ve got Daystrom Model 40B ER corneal implants, a Neuralink BCI jack, and a Samsung cognitive enhancement implant. That’s about it.”

“That’s fine. So you have no problems with augmentation then?”

“No, sir.”

“Don’t call me sir. I work for a living. Call me Sergeant.” I replied.

“Yes sir…I mean Sergeant.” Roberto replied somewhat nervously.

I smiled reassuringly, “Let’s continue with the most important question…do you like working with people?”

“Yes, Sergeant.”

My corneal implants registered a quick flash of green light. RECOM had monitored Roberto’s metabio signature for signs of deception and found none.

“In spite of all the gadgets we work with, we still believe that people are more important than hardware. If you don’t like working with people, then you are not who we want.” I said in a matter-of-fact tone. “So,” I continued, “What are your interests?”

“I like solving problems.” Roberto shifted in his chair slightly, “I’m pretty good in a hackathon, I can handle a 4D Printer, I like to tinker with bots, and I got all A’s in machine learning.”

“So you like working with AI?”

“Yeah,” Roberto grinned, “It is way cool.”

Reassured by another green flash, I asked, “How about sports?”

“Virtual or physical?”

“Both.”

“I like virtual rock climbing and…do MMORPGs count as a sport?”
[i.e., Massively Multi-Player Online Role Playing Games]

“Depends on the MMORPG.” I replied stifling a smile.

Roberto paused before answering, “‘Call of Duty – The Big One, Special Operations Edition’ and ‘Zombie Apocalypse’.”

I was beginning to like this kid. Apparently, so was RECOM who flashed another green light. “I’d say they count.” I nodded. “So how about physical sports?”

“I was on the track team and I still like distance running.” He smiled self-consciously, “Got a letter in track.” He thought for a moment, “I played a lot of soccer when I was a kid but never got really good at it. I think it was because when I was younger, I was really small.”

I nodded politely. “So Roberto, besides hackathons have you ever hacked devices?”

He looked a bit startled, then uncomfortable. “Well…I…yes…I have.”

“Don’t worry, this isn’t an interrogation.” I leaned forward a bit, “Son, we want people who can think, who can adapt commercial off-the-shelf technology for use in the field. We need innovative thinkers.”

“Okay.”

“So what devices did you hack?”

“I think the first one I hacked was a service bot when I was ten. You know, the house cleaning types?”

I nodded slightly.

“Well,” Roberto continued, “my parents wanted me to clean my room every day. They said it built character.” He smiled, “I guess they were right but I didn’t see it that way. So I hacked our service bot to clean my room whenever my parents were out of the house.”

“Did it work?”

“For a while. But you know smart houses…our AI realized that something wasn’t right and blabbed.” He shook his head, “Boy, did I get in trouble.”

“Was that the end of it?” I asked.

“For a while, then I figured out how to hack the whole house…AI and all. Machine learning is a nice skill to have.” He reflected for a moment, “It taught me a lesson – before you hack, you have to know the whole system.”

“Yes.” I nodded in agreement, “That’s a good point.”

My corneal implants flashed, “Probability of successful training completion – 95%.”

“So are you ready to jack into our training simulation? It’s not quite as good as what you are used to at home, but it will give you an idea of what your training will be like.”

“Yes sir…I mean Sergeant.”

For the next ten minutes, I guided him through a compressed experience of special operations training.

When we finished I asked, “So what do you think? Can you handle it?”

Roberto replied without hesitation, “Where do I sign?”

I smiled at the idea of signing a document. “Just read through the enlistment contract. If you agree, just place your right hand on the bio-scanner and look into the retinal scanner.”

Roberto slowly scrolled through the document while I sat quietly by. A few minutes later, the enlistment was complete.

That done, we set the date for his swearing in, as well as who would attend the ceremony. He departed, smiling. As for me, it was the beginning of a day without equal…but more of that in my next blog. ///////////End Personal Blog, Master Sergeant Grant Robertson, Recruiting District Seven


If you enjoyed this post, please note that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Bio Convergence and Soldier 2050 Conference with SRI International at Menlo Park, California, on 08-09 March 2018. This conference will be live-streamed; click here to watch the proceedings, starting at 0845 PST / 1145 EST on 08 March 2018. Stay tuned to the Mad Scientist Laboratory for more information regarding this conference.

Howard R. Simkin is a Senior Concept Developer in the DCS, G-9 Concepts, Experimentation and Analysis Directorate, U.S. Army Special Operations Command. He has over 40 years of combined military, law enforcement, defense contractor, and government experience. He is a retired Special Forces officer with a wide variety of special operations experience.