179. A New Age of Terror: New Mass Casualty Terrorism Threats

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s post by returning guest blogger Zachary Kallenborn, continuing his New Age of Terror series.  The democratization of unmanned air, ground, sea, and subsea systems and the proliferation of cyber-physical systems (e.g., automated plants) provide lesser states, non-state actors, and super-empowered individuals with new capabilities to conduct long-range precision fires and generate global non-kinetic effects resulting in mass casualty events. The potential weaponization of these otherwise benign capabilities poses new vulnerabilities to those who fail to remain vigilant and imagine the unthinkable — beware!]

A loud buzz pierced the quiet night air. A group of drones descended on a chemical plant near New York City. The drones dispersed throughout the installation in search of storage tanks. A few minutes later, the buzz of the drone propellers was drowned out by loud explosions. A surge of fire leapt to the sky. A plume of gas followed, floating towards the nearby city. The gas killed thousands; thousands more were hospitalized with severe injuries.

The rapid proliferation of unmanned systems and cyber-physical systems offers terrorists new, easier means of carrying out mass casualty attacks. Drones allow terrorists to reduce their operational risk and acquire relatively low-cost platforms. Cyber attacks require few resources and could cause significant harm, though a lack of expertise limits terrorists’ ability to inflict it. Terrorists may prefer these methods to difficult-to-acquire and risky chemical, biological, radiological, and nuclear (CBRN) weapons.

Drones

Drones offer terrorists low-cost methods of delivering harm with lower risk to attackers’ lives. Drone attacks can be launched from afar, from a hidden position, close to an escape route. Simple unmanned systems can be acquired easily: Amazon.com offers hundreds of drone models for as little as $25. Of course, low-cost drones also carry lower payloads, which limits the harm caused, often significantly. Improvements to drone autonomy will allow terrorists to deploy more drones at once, including in true drone swarms.1 Terrorists can mount drone attacks across air, land, and sea.

Aerial drones allow attackers to evade ground-based defenses and could be highly effective in striking airports, chemical facilities, and other critical infrastructure. Houthi rebels in Yemen have repeatedly launched drone strikes on Saudi oil pipelines and refineries.2  Recent drone attacks eliminated half of Saudi oil production capacity.3  Attacks on chemical facilities are likely to be particularly effective. A chemical release would not require large amounts of explosives and could cause massive harm, as in the Bhopal gas accident that killed thousands. Current Department of Homeland Security Chemical Facility Anti-Terrorism Standards do not require any meaningful defenses against aerial attack.4  Even small drones can cause major damage to airplane wings or engines, potentially bringing a plane down.5  In December 2018, that risk alone was enough to ground hundreds of flights at Gatwick airport south of London when drones were spotted close to the runway.

Self-driving cars also provide a means of mass casualty attack. Waymo, Uber, and several other companies seek to launch a self-driving taxi service, open to the public. Terrorists could request multiple taxis, load them with explosives or remotely operated weapons, and send them out to multiple targets. Alternatively, terrorists could launch multi-stage attacks on the same target: a first strike causes first responders to mass and subsequent attacks hit the responders. In fact, ISIS has reportedly considered this option.6

For a few hundred dollars, anyone can rent a semi-autonomous surface vessel that can carry up to 35 lbs.7  No license or registration is necessary.8  Although a surface attack limits terrorists to maritime targets, the potential for significant harm remains. Terrorists could strike popular tourist sites like the Statue of Liberty or San Francisco’s Fisherman’s Wharf. U.S. military vessels are also attractive targets, as the October 2000 bombing of the USS Cole demonstrated.9  But drones are not the only new method of attack.

Cyber-physical systems

Like drone attacks, cyber attacks are low cost and reduce operational risks. Cyber attacks can be launched from secure locations, even on the other side of the world. Terrorists also gain a high level of anonymity that will inhibit law enforcement responses.10  Although cyberterrorism requires significant technical know-how, terrorists require few resources other than a computer to carry out an attack.

Cyber attacks could target chemical facilities, airplanes, and other critical infrastructure targets. In 2000, Vitek Boden infiltrated computers controlling the sewage system of Maroochy Shire, Australia, and released hundreds of thousands of gallons of raw sewage into the surrounding area.11  Boden could have caused even more harm if he wished.12  Although Boden’s attack primarily harmed the environment, other attacks could threaten human life. Cyber attacks could disable safety systems at chemical facilities, risking an accidental toxic gas release or explosions. A cyber assault on a Saudi petrochemical facility in August 2017 reportedly had that exact goal.13

However, cyber expertise and specific target knowledge are likely to be significant inhibitors. Although attacks on critical infrastructure may require specialist knowledge of the control system and administrative operations, protective measures are not always implemented, leaving targets vulnerable.14  Boden was successful in large part because he had worked closely with the sewage system’s control systems. Although terrorists have defaced websites and conducted denial-of-service attacks, known terrorist organizations do not currently possess the capabilities to mount a major destructive cyber attack.15  The availability of the necessary human capital is a strong factor in whether terrorists pursue cyber attacks.16  Nonetheless, the risk is likely to grow as terrorists develop greater cyber capabilities, increased connectivity creates new opportunities for attack, and the black market for cybercrime tools grows.17

The Future Operational Environment

Hot-zone team members from Hawaii’s Chemical, Biological, Radiological, Nuclear, and High-Yield Explosive Enhanced Response Force Package (CERFP) team process simulated casualties through a decontamination zone during an exercise this spring. / Source: U.S. Air National Guard photo by Senior Airman John Linzmeier

If terrorists have new avenues of mass casualty attack, U.S. forces must devote more resources to force protection and emergency response. U.S. forces may be called upon to aid local, state, and federal emergency responders in the event of a mass casualty attack. Likewise, U.S. troops may face risks themselves: cyber and drone attacks could certainly target U.S. military installations. Even attacks that do not kill can cause significant harm: disrupting airport operations, as in the 2018 Gatwick drone incident, may delay troop resupply, troop deployment, or close air support to Soldiers in the field. The U.S. military and the broader national security community must rethink their approach to mass casualty terrorism to respond to these threats. Terrorist groups have typically required CBRN weapons to cause mass harm. But if you can kill thousands in a drone attack, why bother with risky, difficult-to-acquire CBRN weapons?

For more information on this threat trend, see Non-State Actors and Their Uses of Emerging Technology, presented by Dr. Gary Ackerman, National Consortium for the Study of Terrorism and Responses to Terrorism, University of Maryland, at the Mad Scientist Robotics, Artificial Intelligence & Autonomy Conference at the Georgia Tech Research Institute, Atlanta, Georgia, 7-8 March 2017…

… as well as the following related Mad Scientist Laboratory posts:

– Zachary Kallenborn‘s previous post, A New Age of Terror: The Future of CBRN Terrorism.

– Marie Murphy‘s post, Trouble in Paradise: The Technological Upheaval of Modern Political and Economic Systems

– The Democratization of Dual Use Technology

– Autonomy Threat Trends

– The Future of the Cyber Domain

– Emergent Threat Posed by Super-Empowered Individuals

… and crank up Love and Terror by The Cinematics!

Zachary Kallenborn is a freelance researcher and analyst, specializing in Chemical, Biological, Radiological, and Nuclear (CBRN) weapons, CBRN terrorism, drone swarms, and emerging technologies writ large. His research has appeared in the Nonproliferation Review, Studies in Conflict and Terrorism, Defense One, the Modern War Institute at West Point, and other outlets. His most recent study, Swarming Destruction: Drone Swarms and CBRN Weapons, examines the threats and opportunities of drone swarms for the full scope of CBRN weapons.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).


1 Amy Hocraffer and Chang S. Nam, “A Meta-analysis of Human–System Interfaces in Unmanned Aerial Vehicle (UAV) Swarm Management,” Applied Ergonomics, Vol. 58 (2017), pp. 66–80, http://www.researchgate.net/profile/Chang_Nam5/publication/303782432_A_meta-analysis_of_human-system_interfaces_in_unmanned_aerial_vehicle_UAV_swarm_management/links/5767f71f08ae1658e2f8b435.pdf

2 Natasha Turak, “Oil Prices Jump as Saudi Energy Minister Reports Drone ‘Terrorism’ Against Pipeline Infrastructure,” CNBC, May 14, 2019, https://www.cnbc.com/2019/05/14/oil-jumps-as-saudi-energy-minister-reports-drone-terrorism-against-pipeline.html

3 John Defterios and Victoria Cavaliere, “Coordinated Strikes Knock Out Half of Saudi Oil Capacity, More Than 5 Million Barrels a Day,” CNN, September 15, 2019, https://www.cnn.com/2019/09/14/business/saudi-oil-output-impacted-drone-attack/index.html

4 Department of Homeland Security, “Risk-Based Performance Standards Guidance: Chemical Facility Anti-Terrorism Standards,” May 2009, 15, 85.

5 Peter Dockrill, “Here’s What it Looks Like When a Drone Smashes into a Plane Wing at 238 MPH,” ScienceAlert, October 22, 2018, https://www.sciencealert.com/this-is-what-it-looks-like-drone-smashes-into-plane-s-wing-238-mph-mid-air-collision-aircraft-impact

6 Lia Eustachewich, “Terrorist Wannabes Plotted Self-Driving Car Bomb Attack: Authorities,” New York Post, September 4, 2018, https://nypost.com/2018/09/04/terrorist-wannabes-plotted-self-driving-car-bomb-attack-authorities/

7 AllTerra, “AllTerra Rental Rates,” May 3, 2019, https://allterracentral.com/pub/media/wysiwyg/AllTerra_Rental_Rates-5.3.19.pdf

8 Phone conversation with USV retailer.

9 CNN Library, “USS Cole Bombing Fast Facts,” CNN, March 27, 2019, https://www.cnn.com/2013/09/18/world/meast/uss-cole-bombing-fast-facts/index.html

10 Steve S. Sin, Laura A. Blackerby, Elvis Asiamah, and Rhyner Washburn, “Determining Extremist Organisations’ Likelihood of Conducting Cyber Attacks,” 2016 8th International Conference on Cyber Conflict, May 31 to June 3, 2016, http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=7529428&tag=1

11 Marshall Abrams and Joe Weiss, “Malicious Control System Cyber Security Attack Case Study – Maroochy Water Services, Australia,” MITRE, July 23, 2008, https://www.mitre.org/sites/default/files/pdf/08_1145.pdf

12 Nabil Sayfayn and Stuart Madnick, “Cybersafety Analysis of the Maroochy Shire Sewage Spill (Preliminary Draft),” Cybersecurity Interdisciplinary Systems Laboratory, May 2017, http://web.mit.edu/smadnick/www/wp/2017-09.pdf

13 Nicole Perlroth and Clifford Krauss, “A Cyberattack in Saudi Arabia had a Deadly Goal. Experts Fear Another Try,” New York Times, March 15, 2018, https://www.nytimes.com/2018/03/15/technology/saudi-arabia-hacks-cyberattacks.html

14 Noguchi Mutsuo and Ueda Hirofumi, “An Analysis of the Actual Status of Recent Cyberattacks on Critical Infrastructure,” NEC Technical Journal, Vol. 12, No. 2, January 2018, https://www.nec.com/en/global/techrep/journal/g17/n02/pdf/170204.pdf

15 Tamara Evan, Eireann Leverett, Simon Ruffle, Andrew Coburn, James Bourdeau, Rohan Gunaratna, and Daniel Ralph, “Cyber Terrorism: Assessment of the Threat to Insurance,” Cambridge Centre for Risk Studies – Cyber Terrorism Insurance Futures 2017, November 2017, https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/centres/risk/downloads/pool-re-cyber-terrorism.pdf

16 Steve S. Sin, et al, “Determining Extremist Organisations’ Likelihood of Conducting Cyber Attacks.”

17 Lillian Ablon, Martin C. Libicki, and Andrea A. Golay, “Markets for Cybercrime Tools and Stolen Data: Hacker’s Bazaar,” RAND, 2014, https://www.rand.org/content/dam/rand/pubs/research_reports/RR600/RR610/RAND_RR610.pdf

178. Space: Challenges and Opportunities

[Editor’s Note:  The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report (see link at the end of this post), addressing how the Space Domain is becoming increasingly crowded, given that the community of spacefaring entities now comprises more than 90 nations, as well as companies such as Amazon, Google, and Alibaba.  This is particularly significant to the Army as it increasingly relies on space-based assets to support long-range precision fires and mission command.  Read on to learn how this space boom will create operational challenges for the Army, while simultaneously yielding advances in autonomy that will ultimately benefit military applications in the other operational domains. (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]

Everybody wants to launch satellites

Space has the potential to become the most strategically important domain in the Operational Environment. Today’s maneuver Brigade Combat Team (BCT) has over 2,500 pieces of equipment dependent on space-based assets for Positioning, Navigation, and Timing (PNT).1 This number is only going to increase as emerging technology on Earth demands increased bandwidth, new orbital infrastructure, niche satellite capabilities, and advanced robotics.

Image made from models used to track debris in Low Earth Orbit / Source: NASA Earth Observatory; Wikimedia Commons

Low Earth Orbit is cluttered with hundreds of thousands of objects, such as satellites, debris, and other refuse that can pose a hazard to space operations, and only one percent of these objects are tracked.2  This complexity is further exacerbated by the fact that there are no universally recognized “space traffic rules” and no standard operating procedures. Additionally, there is a space “gold rush” with companies and countries racing to launch assets into orbit at a blistering pace. The FCC has granted over 7,500 satellite licenses for SpaceX alone over the next five years, and the U.S. has the potential to double the number of tracked space objects in that same timeframe.3 This has the potential to cause episodes of Kessler syndrome – where cascading damage produced by collisions increases debris by orders of magnitude.4  This excess debris could also be used as cover by an adversary for a hostile act, thereby making attribution difficult.
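The runaway character of Kessler syndrome comes from collision probability scaling with the square of the object population: every new object raises the odds of a collision, and every collision adds many new objects. A purely illustrative toy model can make this compounding visible. All constants below are invented for illustration and are not drawn from NASA, FCC, or tracking data.

```python
# Toy discrete-time sketch of Kessler-style debris growth. Expected collisions
# per year scale with the square of the object population (pairwise encounters);
# each collision adds many trackable fragments on top of steady launch activity.
# All parameter values are illustrative, not empirical.

def simulate_debris(initial_objects: int, years: int,
                    collision_rate: float = 1e-9,
                    fragments_per_collision: int = 100,
                    annual_launches: int = 500) -> list[int]:
    """Return the tracked-object count for year 0 through `years`."""
    population = initial_objects
    history = [population]
    for _ in range(years):
        # Quadratic term: more objects -> disproportionately more collisions.
        expected_collisions = collision_rate * population * population
        population += int(expected_collisions * fragments_per_collision)
        population += annual_launches  # new satellites placed in orbit
        history.append(population)
    return history

if __name__ == "__main__":
    history = simulate_debris(initial_objects=20_000, years=50)
    print(f"Year 0:  {history[0]:,} objects")
    print(f"Year 50: {history[-1]:,} objects")
```

Because the fragment term grows with the square of the population, the model's growth accelerates over time even though launch activity stays constant, which is the qualitative point of the Kessler argument.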

There are efforts, such as University of Texas-Austin’s tool ASTRIAGraph, to mitigate this problem through crowdsourcing the location of orbital objects. A key benefit of these tools is their ability to analyze all sources of information simultaneously, maximizing the mutual information obtained on desired space domain awareness criteria and enabling analysts to move from data to discovery.5  Another benefit is that the system layers the analyses of other organizations and governments to reveal gaps, inconsistencies, and data overlaps. This information is of vital importance to avoid collisions, to determine what is debris and what is active, and to properly plan flight paths. For the military, a collision with a mission-critical asset could disable warfighter capabilities, cause unintentional escalation, or result in loss of life.
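The core idea of combining many simultaneous sources can be illustrated with a deliberately simplified sketch. This is a hypothetical stand-in, not ASTRIAGraph's actual algorithm: independent position estimates of the same object, each with its own reported uncertainty, are fused by inverse-variance weighting, so every additional source shrinks the combined uncertainty.

```python
# Minimal multi-source fusion illustration: combine independent (value, variance)
# estimates of the same quantity via inverse-variance weighting. The fused
# variance is always smaller than any single contributor's variance.

def fuse_estimates(estimates: list[tuple[float, float]]) -> tuple[float, float]:
    """Fuse (value, variance) pairs; return the fused value and variance."""
    weights = [1.0 / var for _, var in estimates]
    fused_value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    fused_variance = 1.0 / sum(weights)
    return fused_value, fused_variance

# Three hypothetical trackers report the same satellite's along-track
# position in km, with differing confidence:
reports = [(7000.2, 4.0), (6999.8, 1.0), (7000.5, 9.0)]
value, variance = fuse_estimates(reports)
```

The fused estimate leans toward the most confident tracker while still using the others, which is the same intuition behind layering analyses from multiple organizations to tighten space domain awareness.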

As astronauts return to Earth via the Orion spacecraft, autonomous caretaking systems will maintain Gateway. / Source: NASA

Autonomy will be critical for future space activities because physical human presence in space will be limited. Autonomous robots with human-like mechanical skills performing maintenance and hardware survivability tasks will be vital. For example, NASA’s Gateway program relies upon fully autonomous systems to function, as the station will be uncrewed for 11 months out of the year.

An autonomous caretaking capability will facilitate spacecraft maintenance when Gateway is unmanned / Source: NASA; Dr. Julia Badger

Fixing mechanical and hardware problems on the space station requires a dexterous robot on board that takes direction from a self-diagnosing program, thus creating a self-healing system of systems.6 The military can leverage this technology already developed for austere environments to perform tasks requiring fine motor skills in environments that are inhospitable or too dangerous for human life. Similar dual-use autonomous capabilities employed by our near-peer competitors could also serve as a threat capability against U.S. space assets.  As the military continues to expand its mission sets in space, and its assets become more complex systems of systems, it will increasingly rely on autonomous or semi-autonomous robots for maintenance, debris collection, and defense.

The Space Domain is vital to Land Domain operations.  Our adversaries are well aware of this dependence and intend to disrupt and degrade these capabilities.  NASA is at the forefront of long-range operations with robotic systems responsible for self-healing, collection of information, and communications.  What lessons are being learned and applied by the Army from NASA’s experience with autonomous operations in Space?

If you enjoyed this post, please also see:

The entire Mad Scientist Disruption and the Operational Environment Conference Final Report, dated 25 July 2019.

– Dr. Moriba K. Jah and Dr. Diane Howard‘s presentation from the aforementioned conference on Space Traffic Management and Situational Awareness

Dr. Julia Badger‘s presentation from the same conference on Robotics in Space.

– Dr. Jah‘s Modern War Institute podcast on What Does the Future Hold for the US Military in Space? hosted by our colleagues at Modern War Institute.

The following Mad Scientist Laboratory blog posts on space:


1 Houck, Caroline, “The Army’s Space Force Has Doubled in Six Years, and Demand Is Still Going Up,” DefenseOne, 23 Aug. 2017. https://www.defenseone.com/technology/2017/08/armys-space-force-has-doubled-six-years-and-demand-still-going/140467/

2 Jah, Moriba, Mad Scientist Conference: Disruption and the Future Operational Environment, University of Texas at Austin, 25 April 2019.

3 Seemangal, Robin, “Watch SpaceX Launch the First of its Global Internet Satellites,” Wired, 18 Feb. 2018. https://www.wired.com/story/watch-spacex-launch-the-first-of-its-global-internet-satellites/

4 “Micrometeoriods and Orbital Debris (MMOD),” NASA, 14 June 2016. https://www.nasa.gov/centers/wstf/site_tour/remote_hypervelocity_test_laboratory/micrometeoroid_and_orbital_debris.html

5 https://sites.utexas.edu/moriba/astriagraph/

6 Badger, Julia, Mad Scientist Conference: Disruption and the Future Operational Environment, University of Texas at Austin, 25 April 2019.

127. “Maddest” Guest Blogger!

[Editor’s Note: Since its inception in November 2017, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued 106K views by over 57K visitors from around the world!

Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. To date, 53% of the blog posts published have been submitted by guest bloggers! We challenge you all to contribute your ideas about warfare and the Future Operational Environment!

In particular, we would like to recognize proclaimed Mad Scientist Dr. Alexander Kott by re-posting our review of his paper, Ground Warfare in 2050: How It Might Look, originally published by the US Army Research Laboratory in August 2018.  This paper provides a technological forecast of autonomous intelligent agents and robots and their potential for employment on future battlefields in the year 2050.

Our review of Dr. Kott’s paper generated a record number of visits and views during the past six-month period. Consequently, we hereby declare Dr. Kott to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the first and second quarters of FY19. In recognition of this achievement, Dr. Kott will receive much-coveted Mad Scientist swag!

Enjoy today’s post as we revisit Dr. Kott’s conclusions with links to our previously published posts supporting his findings.]

Ground Warfare in 2050:  How It Might Look

In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future. They are:

The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight between autonomous sensors and countermeasures – “a robot-on-robot affair” – will feature prominently on future battlefields.

See our related discussions regarding Concealment in the Fundamental Questions Affecting Army Modernization post and Finders vs Hiders in our Timeless Competitions post.

The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.

See our descriptions of the future recon / strike complex in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post, and Robotics and Swarms / Semi Autonomous capabilities in our Potential Game Changers post.

These two trends will, in turn, drive the following forecasted developments:

Increasing reliance on unmanned systems, “with humans becoming a minority within the overall force, being further dispersed across the battlefield.”

See Mr. Jeff Becker’s post on The Multi-Domain “Dragoon” Squad: A Hyper-enabled Combat System, and Mr. Mike Matson’s Demons in the Tall Grass, both of which envision future tactical units employing greater numbers of autonomous combat systems; as well as Mr. Sam Bendett’s post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward, addressing the contemporary hurdles that one of our strategic competitors must address in operationalizing Unmanned Ground Vehicles.

Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”

See our discussion of what warfare at machine-speed looks like in our Advanced Engagement Battlespace and the “Hyperactive Battlefield”.

Source: Fausto De Martini / Kill Command

Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.

 

See our Megacities: Future Challenges and Responses and Integrated Sensors: The Critical Element in Future Complex Environment Warfare posts that address future complex operational environments.

Source: www.defenceimages.mod.uk

The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”

See our discussion of future Power capabilities on our Potential Game Changers handout.

“To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”

See our discussion of Multi-Domain Swarming in our Black Swans and Pink Flamingos post.

All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”

See MAJ Chris Telley’s post addressing Artificial Intelligence (AI) as an Information Operations tool in his Influence at Machine Speed: The Coming of AI-Powered Propaganda.

The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.

See Mr. Ian Sullivan’s discussion of AI vs. AI and how the decisive edge accrues to the combatant with more autonomous decision-action concurrency in his Lessons Learned in Assessing the Operational Environment post.

If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.

… and crank up Mr. Roboto by Styx!

Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role he provides leadership in developing ARL’s technical strategy, maintaining the technical quality of ARL research, and representing ARL to the external technical community. He has published over 80 technical papers and served as the initiator, co-author, and primary editor of over ten books, including most recently Cyber Defense and Situational Awareness (2015) and Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).

 

124. Mad Scientist Science Fiction Writing Contest 2019

[Editor’s Note:  Just a quick reminder that Mad Scientist is seeking your visions of future combat with our Science Fiction Writing Contest 2019.  Our deadline for submission is now one month out — 1 APRIL 2019 — so please review the contest details below, get those creative writing juices flowing, and send us your visions of combat in 2030!]

Background: The U.S. Army finds itself at a historical inflection point, where disparate, yet related elements of an increasingly complex Operational Environment (OE) are converging, creating a situation where fast-moving trends are rapidly transforming the nature of all aspects of society and human life – including the character of warfare. It is important to take a creative approach to projecting and anticipating both transformational and enduring trends that will lend themselves to the depiction of the future. In this vein, the U.S. Army Mad Scientist Initiative is seeking your creativity and unique ideas to describe a battlefield that does not yet exist.

Task: Write about the following scenario – On March 17th, 2030, the country of Donovia, after months of strained relations and covert hostilities, invades neighboring country Otso. Donovia is a wealthy nation that is a near-peer competitor to the United States. Like the United States, Donovia has invested heavily in disruptive technologies such as robotics, AI, autonomy, quantum information sciences, bio enhancements and gene editing, space-based weapons and communications, drones, nanotechnology, and directed energy weapons. The United States is a close ally of Otso and is compelled to intervene due to treaty obligations and historical ties. The United States is about to engage Donovia in its first battle with a near-peer competitor in over 80 years…

Three ways to approach:
1) Forecasting – Description of the timeline and events leading up to the battle.
2) Describing – Account of the battle while it’s happening.
3) Backcasting – Retrospective look after the battle has ended (i.e., After Action Review or lessons learned).

Three questions to consider while writing (U.S., adversaries, and others):
1) What will forces and Soldiers look like in 2030?
2) What technologies will enable them or be prevalent on the battlefield?
3) What do Multi-Domain Operations look like in 2030?

Submission Guidelines:
– No more than 5000 words in length
– Provide your submission in .doc or .docx format
– Please use conventional text formatting (e.g., no columns) and have images “in line” with text
– Submissions from Government and DoD employees must be cleared through their respective PAOs prior to submission
MUST include completed release form (on the back of contest flyer)
CANNOT have been previously published

Selected submissions may be chosen for publication or a possible future speaking opportunity.

Contact: Send your submissions to: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil

For additional story telling inspiration, please see the following blog posts:

… and Dr. Lydia Kostopoulos‘ short story entitled The Most Eventful Night in the White House Situation Room: Year 2051, published by our colleagues at Small Wars Journal.

 

114. Mad Scientist Science Fiction Writing Contest 2019

Futuristic tank rendering  / Source: U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC)

[Editor’s Note:  Story Telling is a powerful tool that allows us to envision how innovative technologies could be employed and operationalized in the Future Operational Environment.  Mad Scientist is seeking your visions of future combat with our Science Fiction Writing Contest 2019.  Our deadline for submission is 1 APRIL 2019, so please review the contest details below, get those creative writing juices flowing, and send us your visions of combat in 2030!] 

Still from “The Future of the Soldier” video / Source:  U.S. Army Natick Soldier Research Development and Engineering Center

Background: The U.S. Army finds itself at a historical inflection point, where disparate, yet related elements of an increasingly complex Operational Environment (OE) are converging, creating a situation where fast-moving trends are rapidly transforming the nature of all aspects of society and human life – including the character of warfare. It is important to take a creative approach to projecting and anticipating both transformational and enduring trends that will lend themselves to the depiction of the future. In this vein, the U.S. Army Mad Scientist Initiative is seeking your creativity and unique ideas to describe a battlefield that does not yet exist.

Illustration from “Silent Ruin” by Don Hudson & Kinsun Lo / Source:   U.S.  Army Cyber Institute at West Point

Task: Write about the following scenario – On March 17th, 2030, the country of Donovia, after months of strained relations and covert hostilities, invades neighboring country Otso. Donovia is a wealthy nation that is a near-peer competitor to the United States. Like the United States, Donovia has invested heavily in disruptive technologies such as robotics, AI, autonomy, quantum information sciences, bio enhancements and gene editing, space-based weapons and communications, drones, nanotechnology, and directed energy weapons. The United States is a close ally of Otso and is compelled to intervene due to treaty obligations and historical ties. The United States is about to engage Donovia in its first battle with a near-peer competitor in over 80 years…

Three ways to approach:
1) Forecasting – Description of the timeline and events leading up to the battle.
2) Describing – Account of the battle while it’s happening.
3) Backcasting – Retrospective look after the battle has ended (i.e., After Action Review or lessons learned).

Three questions to consider while writing (U.S., adversaries, and others):
1) What will forces and Soldiers look like in 2030?
2) What technologies will enable them or be prevalent on the battlefield?
3) What do Multi-Domain Operations look like in 2030?

Submission Guidelines:
– No more than 5000 words in length
– Provide your submission in .doc or .docx format
– Please use conventional text formatting (e.g., no columns) and have images “in line” with text
– Submissions from Government and DoD employees must be cleared through their respective PAOs prior to submission
– MUST include a completed release form (on the back of the contest flyer)
– CANNOT have been previously published

Selected submissions may be chosen for publication or a possible future speaking opportunity.

Contact: Send your submissions to: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil

For additional storytelling inspiration, please see the following blog posts:

 

110. Future Jobs and Skillsets

[Editor’s Note:  On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC.  Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces.  Today’s post is extracted from this conference’s final report (more of which is addressed at the bottom of this post).]

The U.S. Army currently has more than 150 Military Occupational Specialties (MOSs), each requiring a Soldier to learn unique tasks, skills, and knowledges. The emergence of a number of new technologies – drones, Artificial Intelligence (AI), autonomy, immersive mixed reality, big data storage and analytics, etc. – coupled with the changing character of future warfare means that many of these MOSs will need to change, while others will need to be created. This already has been seen in the wider U.S. and global economy, where the growth of internet services, smartphones, social media, and cloud technology over the last ten years has introduced a host of new occupations that previously did not exist. The future will further define and compel the creation of new jobs and skillsets that have not yet been articulated or even imagined. Today’s hobbies (e.g., drones) and recreational activities (e.g., Minecraft/Fortnite) that potential recruits engage in every day could become MOSs or Additional Skill Identifiers (ASIs) of the future.

Training eighty thousand new Recruits a year on existing MOSs is a colossal undertaking.  A great expansion in the jobs and skillsets needed to field a highly capable future Army, replete with modified or new MOSs, adds a considerable burden to the Army’s learning systems and institutions. These new requirements, however, will almost certainly present an opportunity for the Army to capitalize on intelligent tutors, personalized learning, and immersive learning to lessen costs and save time in Soldier and Leader development.

The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before them. Marc Prensky, the educational writer and speaker who coined the term digital native, asserts this “New Human” will stand in stark contrast to the “Old Human” in the ways they learn and approach learning.1 Where humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, each of these elements is generally external to the human. By 2032, these technologies likely will have converged and will be embedded or integrated into the individual, with connectivity literally on the tips of their fingers.

Some of the newly required skills may be inherent within the next generation(s) of these Recruits. Many of the games, drones, and other everyday technologies that are already or soon to be very common – narrow AI, app development and general programming, and smart devices – will yield a variety of intrinsic skills that Recruits will have prior to entering the Army. Just like we no longer train Soldiers on how to use a computer, games like Fortnite, with no formal relationship with the military, will provide players with militarily-useful skills such as communications, resource management, foraging, force structure management, and fortification and structure building, all while attempting to survive against persistent attack.  Due to these trends, Recruits may come into the Army with fundamental technical skills and baseline military thinking attributes that flatten the learning curve for Initial Entry Training (IET).2

While these new Recruits may have a set of some required skills, there will still be a premium placed on premier skillsets in fields such as AI and machine learning, robotics, big data management, and quantum information sciences. Due to the high demand for these skillsets, the Army will have to compete for talent with private industry, battling them on compensation, benefits, perks, and a less restrictive work environment – limited to no dress code, flexible schedule, and freedom of action. In light of this, the Army may have to consider adjusting or relaxing its current recruitment processes, business practices, and force structuring to ensure it is able to attract and retain expertise. It also may have to reconsider how it adapts and utilizes its civilian workforce to undertake these types of tasks in new and creative ways.

The Recruit of 2050 will need to be engaged much differently than today. Potential Recruits may not want to be contacted by traditional methods3 – phone calls, in person, job fairs – but instead likely will prefer to “meet” digitally first. Recruiters already are seeing this today. In order to improve recruiting efforts, the Army may need to look for Recruits in non-traditional areas such as competitive online gaming. There is an opportunity for the Army to use AI to identify Recruit commonalities and improve its targeted advertisements in the digital realm to entice specific groups who have otherwise been overlooked. The Army is already exploring this avenue of approach through the formation of an eSports team that will engage young potential Recruits and attempt to normalize their view of Soldiers and the Army, making them both more relatable and enticing.4 This presents a broader opportunity to close the chasm that exists between civilians and the military.

The overall dynamic landscape of the future economy, the evolving labor market, and the changing character of future warfare will create an inflection point for the Army to re-evaluate longstanding recruitment strategies, workplace standards, and learning institutions and programs. This will bring about an opportunity for the Army to expand, refine, and realign its collection of skillsets and MOSs, making Soldiers more adapted for future battles, while at the same time challenging the Army to remain prominent in attracting premier talent in a highly competitive environment.

If you enjoyed this extract, please read the comprehensive Learning in 2050 Conference Final Report

… and see our TRADOC 2028 blog post.


1 Prensky, Marc, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018.

2 Schatz, Sarah, Mad Scientist Conference: Learning in 2050, Georgetown University, 8 August 2018.

3 Davies, Hans, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018.

4 Garland, Chad, Uncle Sam wants you — to play video games for the US Army, Stars and Stripes, 9 November 2018, https://www.stripes.com/news/uncle-sam-wants-you-to-play-video-games-for-the-us-army-1.555885.

92. Ground Warfare in 2050: How It Might Look

[Editor’s Note: Mad Scientist Laboratory is pleased to review proclaimed Mad Scientist Dr. Alexander Kott’s paper, Ground Warfare in 2050: How It Might Look, published by the US Army Research Laboratory in August 2018. This paper offers readers a technological forecast of autonomous intelligent agents and robots and their potential for employment on future battlefields in the year 2050. In this post, Mad Scientist reviews Dr. Kott’s conclusions and provides links to our previously published posts that support his findings.]

In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future. They are:

•  The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight will feature prominently on future battlefields between autonomous sensors and countermeasures – “a robot-on-robot affair.”

See our related discussions regarding Concealment in the Fundamental Questions Affecting Army Modernization post and Finders vs Hiders in our Timeless Competitions post.

•  The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.

See our descriptions of the future recon / strike complex in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post, and Robotics and Swarms / Semi Autonomous capabilities in our Potential Game Changers post.

These two trends will, in turn, drive the following forecasted developments:

•  Increasing reliance on unmanned systems, “with humans becoming a minority within the overall force, being further dispersed across the battlefield.”

See Mr. Jeff Becker’s post on The Multi-Domain “Dragoon” Squad: A Hyper-enabled Combat System, and Mr. Mike Matson’s Demons in the Tall Grass, both of which envision future tactical units employing greater numbers of autonomous combat systems; as well as Mr. Sam Bendett’s post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward, addressing the contemporary hurdles that one of our strategic competitors must address in operationalizing Unmanned Ground Vehicles.

•  Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”

See our discussion of what warfare at machine-speed looks like in our Advanced Engagement Battlespace and the “Hyperactive Battlefield”.

Source: Fausto De Martini / Kill Command

•  Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.

 

See our Megacities: Future Challenges and Responses and Integrated Sensors: The Critical Element in Future Complex Environment Warfare posts that address future complex operational environments.

Source: www.defenceimages.mod.uk

•  The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”

See our discussion of future Power capabilities on our Potential Game Changers handout.

•  “To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”

See our discussion of Multi-Domain Swarming in our Black Swans and Pink Flamingos post.

•  All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”

See MAJ Chris Telley’s post addressing Artificial Intelligence (AI) as an Information Operations tool in his Influence at Machine Speed: The Coming of AI-Powered Propaganda.

•  The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.

See Mr. Ian Sullivan’s discussion of AI vs. AI and how the decisive edge accrues to the combatant with more autonomous decision-action concurrency in his Lessons Learned in Assessing the Operational Environment post.

If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.

Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role, he provides leadership in developing ARL’s technical strategy, maintaining the technical quality of ARL research, and representing ARL to the external technical community. He has published over 80 technical papers and served as the initiator, co-author, and primary editor of over ten books, including most recently Cyber Defense and Situational Awareness (2015), Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).

82. Bias and Machine Learning

[Editor’s Note:  Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]

“We thought that we had the answers, it was the questions we had wrong” – Bono, U2

Source: www.vpnsrus.com via flickr

As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is exactly that: an ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi, Boston University, recently described algorithms as incapable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
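The point that data is the ultimate determining factor can be made concrete with a minimal, purely illustrative sketch (the regions, labels, and counts below are invented and do not model any real system): a trivial majority-vote model trained on a skewed sample faithfully reproduces the skew as if it were ground truth.

```python
from collections import Counter, defaultdict

def train(examples):
    """Majority-vote 'model': predict the most frequent label seen per feature value."""
    counts = defaultdict(Counter)
    for feature, label in examples:
        counts[feature][label] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Hypothetical, deliberately skewed training set: reports from region "A"
# were over-collected during past incidents; region "B" was sampled more evenly.
training = ([("A", "threat")] * 9 + [("A", "benign")] * 1 +
            [("B", "threat")] * 4 + [("B", "benign")] * 6)

model = train(training)
print(model)  # {'A': 'threat', 'B': 'benign'}
```

Note that the model contains no explicit rule about region “A”; the sampling imbalance alone drives the prediction, which is the sense in which bias can enter an algorithm without any conscious intent by its programmers.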

Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).

“It is difficult to make predictions, particularly about the future.” This quote has been attributed to anyone from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.

The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.

1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that is discriminative towards any specific gender(s), ethnicity(ies), race(s), or even age(s)?

2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?

3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?

4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?

For additional information impacting on this important discussion, please see the following:

An Appropriate Level of Trust… blog post

Ethical Dilemmas of Future Warfare blog post

Ethics and the Future of War panel discussion video

81. “Maddest” Guest Blogger!

[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!

Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!

In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the latter half of FY18. In recognition of his achievement, Sam will receive much coveted Mad Scientist swag.

While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped,  developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation.  Russia is learning how to integrate robotic lethal ground combat systems….

Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]

Russia’s Forpost UAV (licensed copy of IAI Searcher II) in Khmeimim, Syria; Source: https://t.co/PcNgJ811O8

Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.

Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”

Uran-9 combat UGV at Victory Day 2018 Parade in Red Square; Source: independent.co.uk

But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine if they can correspond to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs. The MOD is also running conferences, such as the annual “Robotization of the Armed Forces,” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.

Uran-9 Combat UGV, Source: nationalinterest.org

This brings us to one of the more interesting developments in Russian UGVs. Then Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.

Just as importantly, it was designed to operate at a distance of up to three kilometers (3000 meters or about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled and it was definitely designed to survive battlefield realities….

Uran-9; Source: Defence-Blog.com

However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:

• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.

• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.

• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.

• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.

• The OCH-4 optical system did not allow for adequate detection of the adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.

Uran-9 undergoing testing; Source: YouTube

• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which largely defeated its purpose as a combat vehicle.

• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.

On one hand, these many failures are a sign that this much-discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it gets put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.

However, the MOD has been testing the Uran-9 for a while now, showing videos of this machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria was the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly released testing video probably foretold some of the Syrian glitches – in it, the Uran-9 is shown firing its machine guns while moving, but its cannon was fired only when the vehicle was stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles around it, not the kind of complex terrain and dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions – unless, of course, the lessons of its Syrian tests effectively limit its use in future combat.

Russian Soratnik UGV

On another hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological aspects that are ubiquitous worldwide at this point in the global development of similar unmanned systems:

• Low level of current UGV autonomy;

• Low level of automation of command and control processes of UGV management, including repairs and maintenance;

• Low communication range, and;

• Problems associated with “friend or foe” target identification.

Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:

• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.

• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with the Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).

• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.

• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.

The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:

• Creating secure communication channels;

• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;

• Developing miniaturized and effective ISR components;

• Integrating automated command and control systems, and;

• Better optics, electronics, and data processing systems.

According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operating autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.
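The “mobile networks” concept can be illustrated with a toy sketch (the positions, radio range, and robot names below are invented for illustration and do not model any actual system): robots within radio range relay a message hop by hop, so a distant node is reachable only as long as the intermediate relays survive.

```python
# Toy ad hoc "mobile network": robots relay a message hop-by-hop.
RANGE = 5.0  # assumed radio range, in arbitrary units
robots = {"A": (0, 0), "B": (4, 0), "C": (8, 0), "D": (12, 0)}  # D is far beyond A's direct range

def neighbors(name):
    """All robots within radio range of the named robot."""
    x, y = robots[name]
    return [n for n, (nx, ny) in robots.items()
            if n != name and ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5 <= RANGE]

def flood(source):
    """Breadth-first relay: every robot rebroadcasts the message once."""
    reached, frontier = {source}, [source]
    while frontier:
        frontier = [n for r in frontier for n in neighbors(r) if n not in reached]
        reached.update(frontier)
    return reached

print(sorted(flood("A")))  # ['A', 'B', 'C', 'D'] -- D is reached only via B and C
```

Deleting robot “C” from the dictionary leaves “D” unreachable from “A”, which illustrates why the loss of a single relay can fragment such a network and why communication range and autonomy loom so large in Anisimov’s list of deficiencies.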

Source: rusmilitary.wordpress.com

Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a sign for developers of other Russian unmanned ground vehicles – like Soratnik, Vihr, and Nerehta – since it displays the full range of deficiencies that take place outside of well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment – this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during Uran-9 tests exposed a serious combat weakness.

It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, if it can perform its mission autonomously in a rapidly changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards achieving such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in one-time attack mode against fortified adversary positions or firing points track closely with how Western military analysts think such weapons could be used in combat.

Source: Nikolai Novichkov / Orbis Defense

The Uran-9 is still a test bed, and much has to take place before it could be successfully integrated into the current Russian concept of operations. We can expect more eye-opening “lessons learned” from the potential deployment of it and other UGVs in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.

To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.

Source: Block13 by djahal / DeviantArt.com

For another perspective on Russian military innovation, please read Mr. Ray Finch’s guest post “The Tenth Man” — Russia’s Era Military Innovation Technopark.

Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference.  The views expressed here are his own.

80. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Gartner Hype Cycle / Source:  Nicole Saraco Loddo, Gartner

1. “5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies,” by Kasey Panetta, Gartner, 16 August 2018.

Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Eco-Systems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.

[Gartner disclaimer:  Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]

Artificial Intelligence by GLAS-8 / Source: Flickr

2. “Should Evil AI Research Be Published? Five Experts Weigh In,” by Dan Robitzski, Futurism, 27 August 2018.

The following rhetorical (for now) question was posed to the “AI Race and Societal Impacts” panel during last month’s Joint Multi-Conference on Human-Level Artificial Intelligence in Prague, Czech Republic:

“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”

The panel’s responses ranged from the controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial — “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”

While no consensus was reached, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game-changing technologies. Unlike nuclear, biological, and chemical weapons, no internationally agreed and implemented control protocols can be applied to AI: there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted-access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to consist of nothing more than diffuse code: a digital will-o’-the-wisp that can slip across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operations (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC’s Military-Civil Fusion Centers, integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!

Scales Sword Of Justice / Source: https://www.maxpixel.net/

3. “Can Justice be blind when it comes to machine learning? Researchers present findings at ICML 2018,” The Alan Turing Institute, 11 July 2018.

Can justice really be blind? The International Conference on Machine Learning (ICML), held in Stockholm, Sweden, in July 2018, explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and practitioners develop fair and privacy-preserving data analyses. As with the ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commanders’ decisions) – especially lethal ones. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet machine learning bias is a significant issue because humans, with their host of cognitive biases, write the very programming that enables machines to learn and make decisions. Making the best, unbiased decisions will be critical in AI-assisted warfighting, so we must ensure that machine learning outputs are verified and understood, to preclude the inadvertent introduction of human biases. Read the full report here.
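To make the fairness concern concrete, here is a minimal sketch of one simple check discussed in the machine learning fairness literature: “demographic parity,” which compares a model’s positive-decision rate across subgroups. The function names and all data below are invented for illustration and are not drawn from the ICML proceedings.

```python
def positive_rate(decisions):
    """Fraction of decisions that are positive (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in positive-decision rates between two groups.

    A gap near 0 suggests the model treats the two groups similarly on
    this (very coarse) measure; a large gap flags possible bias that
    merits closer human review.
    """
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Invented example: approval decisions (1 = approved) for two subgroups
# of, say, insurance applicants.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved = 0.375

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

A gap this large would not prove discrimination — the groups may differ on legitimate factors — but it is exactly the kind of auditable, verifiable output the article argues regulators and commanders will need before trusting machine-assisted decisions.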

Robot PNG / Source: pngimg.com

4. “Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans,” by Katyanna Quach, The Register, 3 August 2018.

In a study published in PLOS ONE, researchers found that a robot’s personality affected humans’ decision-making. Participants were asked to dialogue with a robot that was either sociable (chatty) or functional (task-focused). At the end of the study, the researchers told the participants that they could switch the robot off if they wanted to; at that moment, the robot would make an impassioned plea to resist being shut down. The participants’ actions were then recorded. Unexpectedly, more participants resisted shutting down the functional robots after their plea than the sociable ones. This is significant: beyond the unexpected result, it shows that decision-making is affected by robot personality. Humans will form an emotional connection to artificial entities that mimic and emulate human behavior, despite knowing they are robotic. If the Army believes its Soldiers will be heavily accompanied and augmented by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.

IoT / Source: Pixabay

5. “Spending on Internet of Things May More Than Double to Over Half a Trillion Dollars,” by Aaron Pressman, Fortune, 8 August 2018.

While the advent of the Internet brought computing and communication deeper into global households, the smartphone revolution brought about constant personal interconnectivity. Today and into the future, not only are humans connected to the global commons via their smart devices, but a multitude of devices, vehicles, and accessories are being integrated into the Internet of Things (IoT). We have previously addressed the IoT as a game-changing technology: composed of trillions of internet-linked items, it creates both opportunities and vulnerabilities. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).

Large companies are expected to quickly grow their spending on Internet-connected devices (i.e., appliances, home devices [such as Google Home, Alexa, etc.], various sensors) to approximately $520 billion. This is a massive investment into what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it is likely that it will expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):

– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data; allowing service members to use personal devices; and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising critical questions about supply chain security, system vulnerabilities, and reliance on micro-sensors and microelectronics.

– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy — a veritable needle in a stack of needles.

6. “Battlefield Internet: A Plan for Securing Cyberspace,” by Michèle Flournoy and Michael Sulmeyer, Foreign Affairs, September/October 2018. Review submitted by Ms. Marie Murphy.

With a “cyber Pearl Harbor” looking increasingly possible, intelligence officials warn of the rising danger of cyber attacks, whose effects have already been felt around the world. These attacks have the power to break the trust people place in institutions, companies, and governments as they act in the undefined gray zone between peace and all-out war. The military implications are clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence communications and materiel and personnel networks. Beyond the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have already felt the effects of cyber attacks, while others remain reluctant to invest in cyber protection measures. In this way, civilians become affected by acts of cyber warfare, and attacks on a country may be directed not at the opposing military but at the civilian population of a state, as in the power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to combat cyber threats most effectively.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!