56. An Appropriate Level of Trust…

The Mad Scientist team participates in many thought exercises, tabletop exercises, and wargames exploring how we will live, work, and fight in the future. A consistent theme in these events is the idea that a major barrier to integrating robotic systems into Army formations is a lack of trust between humans and machines. This assumption rings true as media reports and opinion polls describe how society distrusts some disruptive technologies, such as driverless cars or the robots “coming for our jobs.”

In his recent book, Army of None, Paul Scharre describes an event that nearly led to a nuclear confrontation between the Soviet Union and the United States. On September 26, 1983, LTC Stanislav Petrov, a Soviet officer serving in a bunker outside Moscow, was alerted to a U.S. missile launch by a recently deployed space-based early warning system. Petrov trusted his “gut” – or experientially informed intuition – that this was a false alarm. His gut was right, and the world was spared an inadvertent nuclear exchange because this officer did not overtrust the system. But is this the rule or the exception in how humans interact with technology?

Trust between Soldiers, between Soldiers and Leaders, and between the Army and society is central to the idea of the Army as a profession. At the most tactical level, trust is essential to combat readiness, as Soldiers must trust each other in dangerous situations. Humans naturally learn to trust their peers and subordinates after working with them for a period of time. You learn people’s strengths and weaknesses, what they can handle, and under what conditions they will struggle. This human dynamic does not translate directly to human-machine interaction, and the tendency to anthropomorphize machines could be a significant barrier.

We recommend that the Army explore the possibility that Soldiers and Leaders could overtrust AI and robotic systems. Overtrust of these systems could blunt the human expertise, judgement, and intuition thought to be critical to winning in complex operational environments. It might also open additional adversarial vulnerabilities, such as deception and spoofing.

In 2016, a research team at the Georgia Institute of Technology published the results of a study entitled “Overtrust of Robots in Emergency Evacuation Scenarios.” The researchers put 42 test participants into a simulated fire emergency with a robot responsible for escorting them to an emergency exit. Even as the robot passed obvious exits and got lost, 37 participants continued to follow it, and 2 more stood with the robot rather than moving toward either exit. The study’s takeaway was that roboticists must think about programs that help humans establish an “appropriate level of trust” with robot teammates.
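One way to make an “appropriate level of trust” concrete is trust calibration: a teammate’s trust in an automated system should track that system’s demonstrated reliability, not its confidence or its anthropomorphic cues. The sketch below is a minimal illustration of that idea; the reliability model, the threshold, and the function names are our assumptions, not part of the Georgia Tech study.

```python
# Illustrative sketch of trust calibration (assumptions, not the study's model):
# trust in the robot guide should track its observed track record.

def update_reliability(successes: int, failures: int) -> float:
    """Posterior mean of the robot's reliability under a Beta-Bernoulli
    model, starting from a uniform Beta(1, 1) prior."""
    return (successes + 1) / (successes + failures + 2)

def should_follow(successes: int, failures: int, threshold: float = 0.7) -> bool:
    """Follow the robot only while its estimated reliability stays above a
    mission-dependent threshold; otherwise fall back on human judgement
    (e.g., head for a known exit)."""
    return update_reliability(successes, failures) >= threshold

# A guide robot that has succeeded once but visibly erred three times:
print(update_reliability(successes=1, failures=3))  # ~0.33
print(should_follow(successes=1, failures=3))       # False: stop following
```

By this yardstick, a guide robot that has erred in three of four observed attempts falls well below any reasonable follow threshold – yet most of the study’s participants kept following it, which is precisely the overtrust the researchers warn about.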

In Future Crimes, Marc Goodman writes of the idea of “In Screen We Trust” and the vulnerabilities this trust builds into our interactions with automation. His example of the cyber-attack against the Iranian uranium enrichment centrifuges highlights how experts kept believing their screens despite mounting evidence that something else was contributing to the centrifuges’ failures. These experts overtrusted their technology; they did not have an “appropriate level of trust.” What does this have to do with Soldiers on the future battlefield? Increasingly, we depend on our screens and, in the future, our heads-up displays to translate the world around us. That translation will only become more demanding on a future battlefield where war moves at machine speed.

So what should our assumptions be about trust and our robotic teammates on the future battlefield?

1) Soldiers and Leaders will react differently to technology integration.

2) Capability developers must account for trust-building factors in physical design, natural language processing, and voice communication.

3) Intuition and judgement remain critical components of human-machine teaming and of operating on the future battlefield. Speed becomes a major challenge as humans become the weak link.

4) Building an “appropriate level of trust” will need to be part of Leader Development and training. Mere expertise in a field does not prevent overtrust when interacting with our robotic teammates.

5) Lastly, lack of trust is not a barrier to AI and robotic integration on the future battlefield. These capabilities will exist in our formations as well as those of our adversaries. The formation that develops the best concepts for effective human-machine teaming, with trust being a major component, will have the advantage.

Interested in learning more on this topic? Watch Dr. Kimberly Jackson Ryan (Draper Labs).

[Editor’s Note:  A special word of thanks goes out to fellow Mad Scientist Mr. Paul Scharre for sharing his ideas with the Mad Scientist team regarding this topic.]

29. Engaging Human-Machine Networks for Cross-domain Effects

(Editor’s Note: While war will remain an enduring human endeavor for the foreseeable future, engaging human networks will require a greater understanding of robotics, artificial intelligence, autonomy, and the Internet of Everything. Future battlefield networks at the strategic, operational, and tactical levels will leverage these technologies to radically change the character of war, increasing the reach, speed, and lethality of conflict. Mad Scientist Laboratory is pleased to present the following guest blog post by Mr. Victor R. Morris, addressing the global implications of human-machine teaming.)

The character of war, strategy development, and operational-level challenges are changing; operational approaches must therefore change as well. Joint Publication 3-25, Countering Threat Networks, includes versatile lines of effort to identify, neutralize, disrupt, or destroy threat networks. These efforts correspond to engaging diverse networks to reach mission objectives within the overall Network Engagement strategy. Network Engagement consists of three components: partnering with friendly networks, engaging neutral networks, and Countering Threat Networks (CTN).

To successfully engage networks and achieve the desired effects, more advanced human-machine collaborative networks need to be understood and evaluated. Human-machine networks integrate autonomy and narrow artificial intelligence to accelerate processes, collective understanding, and effects. These networks exist in military operational systems and within interrelated diplomatic, information, and economic systems.

Photo Credit: RAND, Monitoring Social Media: Lessons for Future Department of Defense Social Media Analysis in Support of Information Operations

This post analyzes collaborative networks using Network Engagement’s Partnering, Engaging, and Countering (PEC) model. The intent is to outline a requirement for enhanced Network Engagement involving human-machine collaboration. An enhanced approach accelerates Joint and multinational engagement capabilities to achieve cross-domain effects in a convergent operational environment. Cross-domain effects are achieved through synchronized capabilities and overmatch across the interconnected physical domains, the information environment, and cyberspace.

PEC Model: Partnering with friendly networks, engaging neutral networks, and countering threat networks
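As a toy illustration of how the PEC model partitions engagement, the sketch below maps a network’s assessed affiliation to a line of effort. The data structure and names are our assumptions for illustration only; JP 3-25 and Network Engagement doctrine define the actual methodology.

```python
# Toy illustration of the PEC partition (an assumption for illustration,
# not doctrine): each assessed affiliation maps to one line of effort.
from enum import Enum

class Affiliation(Enum):
    FRIENDLY = "friendly"
    NEUTRAL = "neutral"
    THREAT = "threat"

LINES_OF_EFFORT = {
    Affiliation.FRIENDLY: "partner",  # build and enable friendly networks
    Affiliation.NEUTRAL: "engage",    # influence neutral networks
    Affiliation.THREAT: "counter",    # identify, neutralize, disrupt, destroy
}

def line_of_effort(affiliation: Affiliation) -> str:
    """Map an assessed affiliation to its Network Engagement line of effort."""
    return LINES_OF_EFFORT[affiliation]

print(line_of_effort(Affiliation.NEUTRAL))  # engage
```

The mapping itself is trivial; the hard problem is the assessment feeding it. Human-machine collaborative networks blur the friendly/neutral/threat classification on which the lines of effort depend, which is why enhanced Network Engagement is required.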

The Multi-Domain Battle concept addresses the extended battlefield and large-scale combat through Joint reconnaissance, offensive, and defensive operations to reach positions of relative advantage.

Collective defense treaties and Joint security cooperation include both foreign internal defense and security force assistance to deter conflict. Foreign internal defense, when approved, can involve combat operations during a state of war.

First, Joint Forces may be required to partner with host-nation forces and engage hostile elements with offensive operations to return the situation to a level the host nation can control. Additionally, defensive tasks may be required to counter the enemy’s offense and to engage the population and the interconnected “Internet of Things.” Protection determines which threats disrupt operations and the rule of law, and then counters or mitigates those threats. Examples of specific collaborative and networked threats include cyber attacks, electronic attack, explosive hazards, improvised weapons, unmanned aerial and ground systems, and weapons of mass destruction. Battle networks are technologically enhanced Anti-Access/Area Denial (A2/AD) human-machine combat capabilities that integrate defense systems for territorial defense and/or protected coercive activities.

Source:
http://globalbalita.com/wp-content/uploads/2014/03/A2AD-offensive-against-Japan.jpg

Furthermore, countering networks requires an understanding of great-power competition and political ends. Geopolitical competitors develop strategies across the continuum of conflict relative to rival advantages and national interests. These strategies emphasize both direct and indirect approaches across all domains to reach political ends. A mixed approach facilitates statecraft and unbounded policy to offset perceived disadvantages, deliver key narratives, and shape international norms.

Intergovernmental Military Alliances
Photo credit: Wikimedia

The collaborative networks that possess distinctive ways to achieve political objectives include:

1) Conventional Joint and irregular proxy forces with integrated air, ground, and sea defense capabilities

2) Emergent and disruptive technological networks

3) Super-empowered individuals and asymmetric proxy networks

Examples of emergent and disruptive technologies include artificial intelligence, advanced robotics, the Internet of Things built on low-cost sensors, and additive manufacturing (3D printing).

Client states and proxy networks present significant challenges for Joint and multinational alliances when used as a key component of a competitor’s grand strategy. Proxy networks, however, are not limited to non-state paramilitary or insurgent networks. These unattributable organizations also include convergent terrorist, transnational organized crime, and international hacker organizations.

Here the Syrian rebels are a proxy for the United States, and the Syrian government a proxy for Russia.
Image Credit: Thomas Leger

Multinational companies, political parties, and civic groups also act as proxy networks with access to high-end technologies and geo-economic capabilities. Geo-economics refers to the use of economic instruments to advance geopolitical objectives. These networks then either blend and cooperate with, or compete against, other proxy actors, based on various motivations and incentives.

Adversaries will also use artificial intelligence networks as proxies to deliver more deniable and innovative attacks. The efficacy of multi-domain networks with human-machine teaming depends on partnering, engaging, and countering activities designed to shape, deter, and win.

Source:
https://www.hackread.com/darpa-squad-x-help-troops-pinpoint-enemy-in-warfare/

Finally, operational approaches that drive critical-factors analysis, decision-making, and assessment are essential to understanding human and technologically enabled 21st-century competition and conflict. The Joint Operational Area must be assessed as one extended domain, with resilient strategic network configurations designed to partner with, engage, and counter diverse systems.

Mission command through human-machine teaming, networks, and systems integration is inevitable; it will leverage human adaptability alongside automated speed and precision. The global competition for machine-intelligence dominance is becoming both a key element of the changing character of war and a technical threat to strategic stability.

Modifying doctrine to account for advances in autonomy, narrow artificial intelligence, and quantum computing is inevitable, and human-machine teaming has global implications.

If you enjoyed this post, please note:

  • U.S. Army Training and Doctrine Command (TRADOC) G-2’s Red Diamond Threats Newsletter, Volume 8, Issue 10, October 2017, addresses Russian “Snow Dome” A2/AD human-machine combat capabilities on pages 7-12.

  • The transformative impact of AI, robotics, and autonomy on our Soldiers and networks in future conflicts is further addressed in Redefining the Role of Soldiers on the Future Battlefield.

  • Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Bio Convergence and Soldier 2050 Conference with SRI International in Menlo Park, California, on 08-09 March 2018. This conference will be live-streamed; click here to watch the proceedings, starting at 0840 PST / 1140 EST on 08 March 2018. Ms. Elsa Kania, Adjunct Fellow, Center for a New American Security (CNAS), will address “People’s Liberation Army (PLA) Human-Machine Integration” on Day 2 (09 March 2018) of the Conference.



Victor R. Morris is a civilian irregular warfare and threat mitigation instructor at the Joint Multinational Readiness Center (JMRC) in Germany.