48. Warfare at the Speed of Thought

(Editor’s Note: Mad Scientist Laboratory is pleased to present the second guest blog post by Dr. Richard Nabors, Associate Director for Strategic Planning and Deputy Director, Operations Division, U.S. Army Research, Development and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC), addressing how Augmented and Mixed Reality are the critical elements required for integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments.

Dr. Nabors’ previous guest post addressed how the proliferation of sensors, integrated via the Internet of Battlefield Things [IoBT], will provide Future Soldiers with the requisite situational awareness to fight and win in increasingly complex and advanced battlespaces.)

Speed has always been, and will remain, a critical component of military dominance. Historically, the military has sought to increase the speed of its jets, ships, tanks, and missiles. One of the greatest leaps, however, is yet to come: significantly increasing the speed of decision-making by the individual at the small unit level.

To maximize individual and small unit initiative to think and act flexibly, Soldiers must receive as much relevant information as possible, as quickly as possible. Integrated sensor technologies can provide situational awareness by collecting and sorting real-time data and sending a fusion of information to the point of need, but that information must be processed quickly in order to be operationally effective. Augmented Reality (AR) and Mixed Reality (MR) are two of the most promising solutions to this challenge facing the military and will eventually make it possible for Soldiers to instantaneously respond to an actively changing environment.

AR and MR function in real-time, bringing the elements of the digital world into a Soldier’s perceived real world, resulting in optimal, timely, and relevant decisions and actions. AR and MR allow for the overlay of information and sensor data into the physical space in a way that is intuitive, serves the point of need, and requires minimal training to interpret. AR and MR will enable the U.S. military to survive in complex environments by decentralizing decision-making from mission command and placing substantial capabilities in Soldiers’ hands in a manner that does not overwhelm them with information.
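The overlay concept described above can be illustrated with a minimal sketch, assuming a simple pinhole-camera model: given a geolocated point of interest and the Soldier's head position and compass yaw, the marker is projected to a pixel on a see-through display. All function and parameter names here are hypothetical, not drawn from any fielded system.

```python
import math

def project_marker(world_pt, cam_pos, yaw_deg,
                   fov_deg=90.0, screen_w=1280, screen_h=720):
    """Project a geolocated 3-D point (x east, y up, z north, metres)
    onto a see-through display using a pinhole model.
    Returns (px, py) pixel coordinates, or None if behind the viewer."""
    # Translate into a viewer-centred frame.
    dx = world_pt[0] - cam_pos[0]
    dy = world_pt[1] - cam_pos[1]
    dz = world_pt[2] - cam_pos[2]
    # Rotate about the vertical axis by the compass heading (clockwise
    # from north) so that "forward" is the Soldier's look direction.
    yaw = math.radians(yaw_deg)
    fwd = dx * math.sin(yaw) + dz * math.cos(yaw)    # forward distance
    right = dx * math.cos(yaw) - dz * math.sin(yaw)  # rightward offset
    if fwd <= 0:
        return None  # marker is behind the Soldier
    # Focal length in pixels, derived from the horizontal field of view.
    f = (screen_w / 2) / math.tan(math.radians(fov_deg) / 2)
    px = screen_w / 2 + f * right / fwd
    py = screen_h / 2 - f * dy / fwd
    return (round(px), round(py))
```

A fielded system would also account for head pitch and roll, lens distortion, and display latency; the sketch keeps only the core geometry of placing digital information into perceived physical space.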

On a Soldier’s display, AR can render useful battlefield data in the form of camera imaging and virtual maps, aiding a Soldier’s navigation and battlefield perspective. Special indicators can mark people and various objects to warn of potential dangers.
Soldier-borne, palm-size reconnaissance copters with sensors and video can be directed and tasked instantaneously on the battlefield. Information can be gathered by unattended ground sensors and transmitted to a command center, with AR and MR serving as a networked communication system between military leaders and the individual Soldier. Used in this way, AR and MR increase Soldier safety and lethality.
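As a rough sketch of the fusion step described above, the following prioritizes unattended ground sensor reports so that only the most urgent few reach a Soldier's display, keeping the information flow from becoming overwhelming. The scoring rule and all names are illustrative assumptions, not a fielded algorithm.

```python
from dataclasses import dataclass

@dataclass
class SensorReport:
    sensor_id: str
    kind: str        # e.g. "acoustic", "seismic", "video"
    threat: float    # 0.0 (benign) .. 1.0 (hostile)
    range_m: float   # distance from the Soldier, metres

def prioritize(reports, max_alerts=3, threat_floor=0.4):
    """Fuse raw sensor reports into a short, ranked alert list.
    Illustrative scoring only: nearer, higher-threat detections rank
    first, and low-threat chatter is filtered out entirely."""
    urgent = [r for r in reports if r.threat >= threat_floor]
    # Weight threat against proximity (closer -> higher score).
    urgent.sort(key=lambda r: r.threat / (1.0 + r.range_m / 100.0),
                reverse=True)
    return urgent[:max_alerts]
```

The design choice worth noting is the hard cap on alerts: the point-of-need display shows a bounded amount of information regardless of how many sensors are reporting.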

In the near-term, the Army Research and Development (R&D) community is investing in the following areas:

• Reliable position-tracking devices for head-worn sensors that self-calibrate for head orientation.

• Ultralight, ultrabright, ultra-transparent display eyewear with wide field of view.

• Three-dimensional viewers with battlefield terrain visualization, incorporating real-time data from unmanned aerial vehicles, etc.

In the mid-term, R&D activities are focusing on:

• Manned vehicles with sensors and processing capabilities for moving autonomously, tasked for Soldier protection.

• Robotic assets, whether tele-operated, semi-autonomous, or fully autonomous, imbued with intelligence and equipped with limbs, that can keep pace with Soldiers and act as teammates.

• Robotic systems that contain multiple sensors that respond to environmental factors affecting the mission, or have self-deploying camouflage capabilities that stay deployed while executing maneuvers.

• Enhanced reconnaissance through deep-penetration mapping of building layouts, cyber activity, and subterranean infrastructure.

Once AR and MR prototypes and systems have seen widespread use, the far-term focus will be on automation that can track and react to a Soldier's changing situation, tailoring the augmentation the Soldier receives and coordinating across the unit.

In addition, AR and MR will revolutionize training, empowering Soldiers to train as they fight. Soldiers will be able to use real-time sensor data from unmanned aerial vehicles to visualize battlefield terrain with geographic awareness of roads, buildings, and other structures before conducting their missions. They will be able to rehearse courses of action and analyze them before execution to improve situational awareness. AR and MR are increasingly valuable aids to tactical training in preparation for combat in complex and congested environments.

AR and MR are the critical elements required for integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments. Solving the challenge of how and where to use AR and MR will enable the military to get full value from its investments in complex integrated sensor systems.

For more information on how the convergence of technologies will enhance Soldiers on future battlefields, see:

– The discussion on advanced decision-making in An Advanced Engagement Battlespace: Tactical, Operational and Strategic Implications for the Future Operational Environment, published by our colleagues at Small Wars Journal.

– Dr. James Canton’s presentation from the Mad Scientist Robotics, Artificial Intelligence, & Autonomy Conference at Georgia Tech Research Institute last March.

– Dr. Rob Smith’s Mad Scientist Speaker Series presentation on Operationalizing Big Data, where he addresses the applicability of AR to sports and games training as an analogy to combat training (noting “Serious sport is war minus the shooting” — George Orwell).

Dr. Richard Nabors is Associate Director for Strategic Planning, US Army CERDEC Night Vision and Electronic Sensors Directorate.

22. Speed, Scope, and Convergence Trends

“Speed is the essence of war. Take advantage of the enemy’s unpreparedness; travel by unexpected routes and strike him where he has taken no precautions.” — Sun Tzu

This timeless observation from The Art of War resonates through the millennia and is of particular significance to the Future Operational Environment.

Mad Scientist Laboratory has addressed the impact of Autonomy, Artificial Intelligence (AI), and Robotic Trends in previous posts. These trends are consequential in their own right, particularly in the hands of our adversaries, but their impact is exacerbated by their collective speed, scope, and convergence, leading ultimately to man-machine co-evolution.

Speed. Some Mad Scientists posit that the rate of progress in these technologies will be “faster than Moore’s law.” As our adversaries close the technology gap and potentially overtake us in select areas, there is clearly a “need for speed,” as cited in the Defense Science Board (DSB) Report on Autonomy. The pace of actions and decisions will need to increase dramatically over time.

“… the study concluded that autonomy will deliver substantial operational value across an increasingly diverse array of DoD missions, but the DoD must move more rapidly to realize this value. Allies and adversaries alike also have access to rapid technological advances occurring globally. In short, speed matters—in two distinct dimensions. First, autonomy can increase decision speed, enabling the U.S. to act inside an adversary’s operations cycle. Secondly, ongoing rapid transition of autonomy into warfighting capabilities is vital if the U.S. is to sustain military advantage.” — DSB Summer Study on Autonomy, June 2016 (p. 3)

Scope. It may be necessary to increase not only the pace but also the scope of these decisions if these technologies generate the “extreme future” characterized by Mad Scientist Dr. James Canton as “hacking life” / “hacking matter” / “hacking the planet.” In short, no aspect of our current existence will remain untouched. Robotics, artificial intelligence, and autonomy – far from narrow topics – are closely linked to a broad range of enabling / adjunct technologies identified by Mad Scientists, to include:

• Computer Science, particularly algorithm design and software engineering
• Man-Machine Interface, to include Language / Speech and Vision
• Sensing Technologies
• Power and Energy
• Mobility and Manipulation
• Material Science to include revolutionary new materials
• Quantum Science
• Communications
• 3D (Additive) Manufacturing
• Positioning, Navigation and Timing beyond GPS
• Cyber

Science and Technological Convergence. Although 90% of the technology development will occur in the very fragmented, uncontrolled private sector, there is still a need to view robotics, artificial intelligence and autonomy as a holistic, seamless system. Technology convergence is a recurring theme among Mad Scientists. They project that we will alter our fundamental thinking about science because of the “exponential convergence” of key technologies, including:

• Nanoscience and nanotechnology
• Biotechnology and Biomedicine
• Information Technology
• Cognitive Science and Neuroscience
• Quantum Science

This convergence of technologies is already leading to revolutionary achievements with respect to sensing, data acquisition and retrieval, and computer processing hardware. These advances in turn enable machine learning to include reinforcement learning and artificial intelligence. They also facilitate advances in hardware and materials, 3D printing, robotics and autonomy, and open-sourced and reproducible computer code. Exponential convergence will generate “extremely complex futures” that include capability “building blocks” that afford strategic advantage to those who recognize and leverage them.

Co-Evolution. Clearly humans and these technologies are destined to co-evolve. Humans will be augmented in many ways: physically, via exoskeletons; perceptually, via direct sensor inputs; genetically, via AI-enabled gene-editing technologies such as CRISPR; and cognitively, via AI “COGs” and “Cogni-ceuticals.” Human reality will be a “blended” one, in which physical and digital environments, media, and interactions are woven together seamlessly. As daunting – and worrisome – as these technological developments might seem, an equally daunting challenge lies in the co-evolution between man and machine: the co-evolution of trust.

Trusted man-machine collaboration will require validation of system competence, a process that will take our legacy test and verification procedures far beyond their current limitations. Humans will nonetheless expect autonomy to be “directable,” and will expect autonomous systems to explain the logic behind their behavior, regardless of the complexity of the deep neural networks that drive it. These technologies in turn must be able to adapt to user abilities and preferences, and attain some level of awareness of the human (e.g., cognitive, physiological, and emotional state; situational knowledge; intent recognition).
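Two of these trust requirements, directability and explainable behavior, can be sketched in miniature. The class below is a toy illustration under assumed names, not a real autonomy architecture: an operator override always wins, and every decision is recorded with a human-readable reason.

```python
class DirectableAgent:
    """Toy illustration of two trust requirements: the system is
    'directable' (an operator override always takes precedence) and it
    can explain the logic behind each behaviour it chooses."""

    def __init__(self):
        self.override = None   # operator-directed action, if any
        self.log = []          # (action, reason) pairs

    def direct(self, action):
        """Operator override: force subsequent decisions to this action."""
        self.override = action

    def decide(self, threat_level):
        """Pick an action and record the reason it was chosen."""
        if self.override is not None:
            action, reason = self.override, "operator override in effect"
        elif threat_level > 0.7:
            action, reason = "withdraw", f"threat {threat_level:.2f} > 0.70"
        else:
            action, reason = "proceed", f"threat {threat_level:.2f} <= 0.70"
        self.log.append((action, reason))
        return action

    def explain_last(self):
        """Return a plain-language account of the most recent decision."""
        action, reason = self.log[-1]
        return f"chose '{action}' because {reason}"
```

Real explainability for deep neural networks is far harder than logging a threshold, but the interface obligation is the same: the system must be able to answer "why?" for every behavior, and a human direction must reliably take precedence.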

For additional information on The Convergence of Future Technology, see Dr. Canton’s presentation from the Mad Scientist Robotics, Artificial Intelligence, & Autonomy Conference at Georgia Tech Research Institute last March.