304. Insights from the Robotics and Autonomy Series of Virtual Events

[Editor’s Note: Mad Scientist launched its Robotics and Autonomy series of virtual events late last summer to explore how automation is revolutionizing warfighting across all domains. Hosting world-class subject matter experts across five webinars, we explored how our near-peer adversaries (Russia and China) were integrating autonomy on the battlefield and compared and contrasted their efforts with ongoing U.S. initiatives in the Ground, Air, Sea, and Space domains. We concluded the series with a DoD panel discussion on the frameworks (i.e., ethics and policy) for U.S. employment of autonomy on the future battlefield. Today’s post documents a number of insights gleaned from this series of events — Read on! (Please review this post via a non-DoD network in order to access all of the embedded links — Thank you!)]

The democratization and convergence of dual-use technologies, including Machine Learning (ML) / Artificial Intelligence (AI), robotics / autonomy, sensors, and edge computing, have the potential to revolutionize the Operational Environment and the character of warfare. Given the increased interest in and development of autonomous weapon systems around the globe, the U.S. Army’s Mad Scientist Initiative launched its Robotics and Autonomy series of virtual events to explore the opportunities and challenges that these systems will present for the U.S. Army. Here are some of the highlights of what we learned about Unmanned Systems (UxS):

Unmanned Ground Systems (UGS):

Countries recognize that integrating advanced, unmanned robotics on the battlefield will give them increased leverage over their enemies. Even nations with limited defense budgets have established strategic modernization plans to field robotic platforms over the next decade.

[Image caption: Russia’s Uran-9 underwent operational tests in Syria, revealing a number of system shortfalls. Source: http://vitalykuzmin.net via Creative Commons Attribution-Share Alike 4.0 International]

Based on its combat experience in Syria, Russia is working to develop UGS that can independently recognize targets, cooperate in swarms, employ weapons, complete tasks under ambiguous conditions using AI, and solve a wide range of problems in complex scenarios.

China’s People’s Liberation Army (PLA) recently conducted combined arms exercises incorporating Robotics and Autonomous Systems (RAS), using remotely operated mine-clearing systems to open routes while under simulated fire.  During these exercises, data from the various platforms’ sensors was exchanged with combat UGVs and with a swarm of quadcopters conducting both intelligence, surveillance, and reconnaissance (ISR) and close air support (CAS) missions against ground forces.

Three challenges will define the future of UGS:

      • They will need to achieve greater mobility and maneuvering capabilities. Self-driving competency will not be sufficient. Instead, this technology will need to navigate difficult terrain strategically, cooperating with other platforms and reacting to adversarial actions.
      • They will be required to coordinate and team with both manned and unmanned assets continuously.
      • They will need to rapidly exchange information with robotic and human teammates in extremely contested communications environments. Furthermore, they will need to withstand adversary attempts to jam communications in order to ensure mission success.

Unmanned Aerial Systems (UAS):

While noting that Sukhoi’s S-70 Охотник (Hunter) heavy attack drone is currently in development and Kronstadt Technologies’ Орион Э (Orion E) has been combat-tested in Syria, Russian media has recently lamented that Russia’s artillery-centric military still does not possess “attack drones in the required quantity.”

China’s PLA Daily envisions drone swarms as the “advance guard” and a force that “will likely become the ‘blade of victory’ in the hands of commanders at all levels on the future battlefield.” Drone swarm operations offer six “exceptional advantages”: greater autonomy, more functional capabilities, greater resilience, faster response times, lower cost, and less dependence on logistics and outside support. They can be used to implement multi-domain attacks and can even carry large numbers of individual drones into such an attack.

However, before either Russia or China’s aspirations regarding UAS autonomous capabilities can be fully realized, they will first need to develop effective processes to improve their UAS’ ability to team with both humans and other robotic systems. While humans will still be responsible for higher order thinking, UAS will eventually be better positioned to take risks on the battlefield.

Given DoD policy for human oversight of autonomous systems, increased use of these platforms will incentivize the targeting of the humans who control these systems. Battlefield strategy will also shift to electronic warfare and information control in an effort to disrupt these technologies.

In order to combat UAS threats, the United States will need to develop multi-spectral sensors that can be tuned for slow and fast threats, layered defenses that use multiple detection and defeat systems, active sensors that can detect and track non-emitting threats, and highly sensitive sensors for detecting and tracking extremely small targets.
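To make the layered-defense idea above a bit more concrete, here is a deliberately simplified sketch, not an actual counter-UAS architecture. The sensor names, thresholds, and defeat options are hypothetical placeholders; the point is only to illustrate how detections from multiple sensor layers might be binned (slow/small versus fast movers) and matched to an appropriate defeat option.

```python
# Illustrative sketch only: hypothetical sensor layers, thresholds, and defeat options.
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g., "radar", "eo_ir", "rf_scanner" (hypothetical layers)
    range_m: float     # estimated range to the candidate track
    speed_mps: float   # estimated speed
    emitting: bool     # whether the target radiates RF (non-emitters need other detection layers)

def classify_threat(d: Detection) -> str:
    """Rough binning of slow/small versus fast threats, per the tuning requirement above."""
    return "slow_small" if d.speed_mps < 30 else "fast"

def select_defeat_layer(d: Detection) -> str:
    """Match a detection to a (hypothetical) defeat option based on range and emissions."""
    if d.range_m > 5000:
        return "cue_long_range_interceptor"
    if d.emitting:
        return "rf_jam"
    return "kinetic_point_defense"

detections = [
    Detection("radar", 7000, 60, True),
    Detection("eo_ir", 1200, 15, False),   # small, non-emitting target caught by a different layer
]
for d in detections:
    print(d.sensor, classify_threat(d), "->", select_defeat_layer(d))
```

Even in this toy form, the value of layering is visible: a small, non-emitting quadcopter that slips past one detection layer can still be picked up and handed off by another.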

Unmanned Maritime Vehicles (UMVs):

China’s PLA Navy is interested in using UMVs as a force multiplier to enhance power projection in its regional waters, including the East and South China Seas. While these platforms will be used for data collection and reconnaissance, they will also allow the PRC to establish a consistent force presence in these contested waters, extending its ability to patrol without the expense of building capital ships.

China is also working on swarming technology for its UMVs, including automatic formation changes. Another notable development is an automated amphibious vehicle capable of lying dormant for several months.

Compared with Russian UAV and UGV technology, Russian UMVs remain untested in conflict. However, Russia hopes that UMVs can support mission success in the Arctic region. Russian Unmanned Underwater Vehicles (UUVs) are being designed for both military and civilian use, with capabilities under development including anti-submarine technologies, nuclear-capable vehicles, and swarming. Currently, Russian UMV technology can travel either at high speed or quietly, but not both.

Unmanned Space Systems:

Space has the potential to become the most strategically important domain in the Operational Environment. Today’s maneuver Brigade Combat Team (BCT) has over 2,500 pieces of equipment dependent on space-based assets for Positioning, Navigation, and Timing (PNT). This number will only increase as emerging technology on Earth demands increased bandwidth, new orbital infrastructure, niche satellite capabilities, and advanced robotics.  The Space Domain is vital to Multi-Domain Operations.  Our adversaries are well aware of this dependence and intend to disrupt and degrade these capabilities.

The U.S. military will need to determine how to operate in space amongst increasing traffic with a growing space community of over 90 spacefaring countries and companies such as Amazon, Google, and Alibaba racing to capitalize on a potential space boom in the 2020s.  The increased presence of autonomous systems in this environment will amplify the potential for competition and conflict among nation states and commercial enterprises.  Increased autonomy in space, however, will allow information to be available on demand, even within a congested environment. Space could become a data haven that automatically trades information, a process that currently occurs on Earth.

Autonomy will enable nations to expand human presence in space by serving as “Sherpas.” Unmanned systems will be placed in space before human arrival and will remain upon human departure, thus requiring systems capable of self-sustainment.  Data received from autonomous systems will be constrained by bandwidth, physical storage, and power.  Unmanned space systems will need to be able to interpret and prioritize information that is transmitted to Earth.  Advances in long-range data burst communications and space robotic extended autonomy, mobility, dexterity, endurance, and toughness will also enhance future Army robotics and autonomy applications in the land domain.
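As a rough illustration of the bandwidth-constrained prioritization described above, the sketch below greedily selects which data products an unmanned space system might downlink within a fixed budget. The product names, sizes, and priority values are invented for the example; a fielded system would involve far more sophisticated onboard triage and scheduling.

```python
# Illustrative sketch only: hypothetical data products and a toy downlink budget.
from dataclasses import dataclass

@dataclass
class DataProduct:
    name: str
    size_mb: float
    priority: int  # higher = more operationally relevant

def plan_downlink(products: list[DataProduct], budget_mb: float):
    """Greedy selection: transmit the highest-priority products that fit the available budget."""
    plan, used = [], 0.0
    for p in sorted(products, key=lambda x: x.priority, reverse=True):
        if used + p.size_mb <= budget_mb:
            plan.append(p.name)
            used += p.size_mb
    return plan, used

products = [
    DataProduct("pnt_integrity_alert", 2, 10),           # small but time-critical
    DataProduct("compressed_situation_report", 40, 8),
    DataProduct("full_sensor_dump", 800, 3),              # retained onboard until bandwidth allows
]
print(plan_downlink(products, budget_mb=100.0))
# -> (['pnt_integrity_alert', 'compressed_situation_report'], 42.0)
```

Even this toy example captures the core trade: limited downlink capacity forces the platform itself to decide what is worth transmitting now and what can wait, which is exactly the kind of onboard autonomy the paragraph above anticipates.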

Frameworks (Ethics & Policy) for Autonomy on the Future Battlefield

Per the Congressional Research Service’s Defense Primer: U.S. Policy on Lethal Autonomous Weapon Systems, “approximately 30 countries and 165 nongovernmental organizations have called for a preemptive ban on Lethal Autonomous Weapon Systems (LAWS) due to ethical concerns, including concerns about operational risk, accountability for use, and compliance with the proportionality and distinction requirements of the law of war.” The U.S. government opposes a ban on LAWS, noting that “automated target identification, tracking, selection, and engagement functions can allow weapons to strike military objectives more accurately and with less risk of collateral damage or civilian casualties.”

However, Department of Defense Directive (DoDD) 3000.09, Autonomy in Weapon Systems, does require senior review and approval of autonomous weapons platforms before their formal development. The goal of this process is to avoid unintended engagements and harm by autonomous weapons systems and to maintain appropriate levels of human judgment over the use of force. Additionally, DoD’s Ethical Principles for Artificial Intelligence outlines five principles that, when operationalized, will generate responsible AI and effective DoD infrastructure.

Engaging with ethical questions early and often in technology development will be essential to maximizing traceability of autonomous systems. Focusing on human agency and avoiding harm will also make it possible to pursue virtuous technology, as opposed to meeting a minimum ethical standard.  The definition of acceptable harm is not for technology developers to determine. However, they do need to be aware of such definitions in order to encode choices for different levels of acceptable harm and enable future operators to make such decisions.  Although legal questions of individual versus state responsibility for harm caused by autonomous weapons are still being explored, it has been determined that humans, and not machines, will ultimately be held responsible.

It should also be noted that an asymmetry in ethics exists vis-à-vis our adversaries (both state and non-state) and autonomous systems.  While our approach to developing UxS is sober, deliberate, and nuanced, our adversaries recognize the importance of AI, particularly in matching and overtaking U.S. conventional military dominance. Military thinkers within the PLA embrace AI’s prospects as a leapfrog technology that would allow China to skip technological development stages and rapidly overmatch U.S. capabilities.  Russia’s Vladimir Putin has proclaimed, “Artificial intelligence is the future not only of Russia but of all of mankind… Whoever becomes the leader in this sphere will become the ruler of the world.”

Conclusion

The development and proliferation of UxS brings with it a host of considerations for the U.S. Army:

      • As UxS extend the distance between the human-in-the-loop and the “bleeding edge” of battle, zones of conflict previously considered safe, including our homeland, could emerge as new targets for long-distance, precision kinetic strikes.
      • Widespread implementation of AI will require future operators to leverage critical thinking, communication, and continuous learning skills to constantly challenge assumptions. Operators will not necessarily need to be coders or developers.  Rather, they will need a general understanding of how a technology works and why it may malfunction.
      • Connectivity remains an Achilles’ heel for UxS, leading to a heightened focus on developing Electronic Warfare (EW) capabilities and achieving Electromagnetic (EM) spectrum dominance.  A new, spiraling arms race to simultaneously harden and disrupt networks could ensue.  Victory could ultimately go to the side that is able to operate unconnected.
      • The U.S. must remain vigilant and resist the siren’s call of complex, multi-role weapons systems that are expensive to procure and rely on force structures that cannot be readily reconstituted.  Mad Scientist has long advocated for a shift away from large and expensive systems toward cheap, scalable, and potentially even disposable UxS.  Increases in miniaturized computing power in cheaper systems, coupled with advances in machine learning, could enable massed precision rather than forcing us to sacrifice precision for mass or vice versa.

If you enjoyed this post, check out all of the associated webinar content (presenter biographies, slide decks, and notes) from our Mad Scientist Robotics and Autonomy series of virtual events and watch the associated videos [via a non-DoD network]

… read the following related posts:

Insights from the Nagorno-Karabakh Conflict in 2020

“Once More unto The Breach Dear Friends”: From English Longbows to Azerbaijani Drones, Army Modernization STILL Means More than Materiel, by Ian Sullivan

How Big of a Deal are Drone Swarms? by proclaimed Mad Scientist Zak Kallenborn

Jomini’s Revenge: Mass Strikes Back! by proclaimed Mad Scientist Zachery Tyson Brown

“Own the Night” and the associated Modern War Institute podcast with proclaimed Mad Scientist Mr. Bob Work

… check out the following posts on our 2+3 Threats:

Russia: Our Current Pacing Threat

China: Our Emergent Pacing Threat

The Iranian Pursuit of Military Advantage: A Forecast for the Next Seven Years, The Hermit Kingdom in the Digital Era: Implications of the North Korean Problem for the SOF Community, and Extremism on the Horizon: The Challenges of VEO Innovation by Colonel Montgomery Erfourth and Dr. Aaron Bazin

… and see what we’ve learned about the Character of Warfare 2035

>>> REMINDER 1: Mad Scientist is pleased to announce Competition and Conflict in the Next Decade, the next webinar in our continuing series of monthly virtual events – Are We Doing Enough, Fast Enough? – exploring our adversaries’ views on Competition, Crisis, Conflict, and Change on 23 February 2021 (starting at 1030 EST). Join our panelists:  Dr. George Friedman, Founder and Chairman of Geopolitical Futures; John Edwards, U.S. Secret Service’s Deputy Special Agent in Charge, Office of Strategic Planning and Policy; Dr. Eleonora Mattiacci, Assistant Professor for Political Science, Amherst College; Dr. Zack Cooper, Research Fellow, American Enterprise Institute, Lecturer, Princeton University, and Adjunct Assistant Professor, Georgetown University; and Collin Meisel, Program Lead, Diplometrics, Frederick S. Pardee Center for International Futures, University of Denver, as they present their diverse perspectives on this vital topic and then take questions from registered participants.

Register here [via a non-DoD network] to participate in this informative event!

>>> REMINDER 2: Our Mad Scientist Writing Contest on Competition, Crisis, Conflict, and Change seeks to crowdsource the intellect of the Nation (i.e., You!) regarding:

How will our competitors deny the U.S. Joint Force’s tactical, operational, and strategic advantages to achieve their objectives (i.e., win without fighting) in the Competition and Crisis Phases?

How will our adversaries seek to overmatch or counter U.S. Joint Force strengths in future Large Scale Combat Operations?

Review the submission guidelines on our contest flyer, then get cracking on brainstorming and crafting your innovative and insightful visions! The deadline for submission is 15 March 2021.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

 
