191. Competition in 2035: Anticipating Chinese Exploitation of Operational Environments

[Editor’s Note:  In today’s post, Mad Scientist Laboratory explores China’s whole-of-nation approach to exploiting operational environments, synchronizing government, military, and industry activities to change geostrategic power paradigms via competition in 2035. Excerpted from products previously developed and published by the TRADOC G-2’s Operational Environment and Threat Analysis Directorate (see links below), this post describes China’s approach to exploitation and identifies the implications for the U.S. Army — Enjoy!]

The Operational Environment is envisioned as a continuum, divided into two eras: the Era of Accelerated Human Progress (now through 2035) and the Era of Contested Equality (2035 through 2050). This latter era is marked by significant breakthroughs in technology and convergences in capabilities, which lead to profound changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes which, at the end of this timeframe, may even challenge the very nature of warfare itself. In this era, no one actor is likely to hold a long-term strategic or technological advantage: aggregate power between the U.S. and its strategic competitors will be equivalent, though not necessarily symmetric. Prevailing in this period will depend on an ability to synchronize multi-domain capabilities against an artificial intelligence-enhanced adversary with an overarching capability to visualize and understand the battlespace at even greater ranges and velocities. Equally important will be controlling information and the narrative surrounding the conflict. Adversaries will adopt sophisticated information operations and narrative strategies to change the context of the conflict and thus defeat U.S. political will.

The future strategic environment will be characterized by a persistent state of competition where global competitors seek to exploit the conditions of operational environments to gain advantage. Adversaries understand that the application of any or all elements of national power in competition just below the threshold of armed conflict is an effective strategy against the U.S.

Chinese DF-17 carrying the DF-ZF Hypersonic Glide Vehicle / Source: Bill Bostock, Business Insider Australia, via Wikimedia Commons

China is rapidly modernizing its armed forces and developing new approaches to warfare. Beijing has invested significant resources into research and development of a wide array of advanced technologies. Coupled with its time-honored practice of reverse engineering technologies or systems it purchases or acquires through espionage, this effort likely will allow China to surpass Russia as our most capable threat sometime around 2030.

China’s Approach to Exploitation

China’s whole-of-nation approach, synchronizing actions across government, military, and industry, will facilitate its exploitation of operational environments and enable it to gain global influence through economic leverage.

China will leverage the international system to advance its own interests while attempting to constrain others, including the U.S.

Preferred Conditions and Methods

The following conditions and methods are conducive to exploitation by China, enabling it to shape the strategic environment in 2035:

    • Infrastructure Capacity Challenges:  China targets undeveloped and fragile environments where its capital investments, technology, and human capital can produce financial gains and generate political influence.
    • Interconnected Economies:  China looks for partners and opportunities to become a significant stakeholder in a wide variety of economies in order to capitalize on its investments as well as generate political influence.
    • Specialized Economies:  China looks for opportunities to partner with specialized markets and leverage their vulnerabilities for gain.
    • Technology Access Gaps:  China targets areas where its capital investments in technology provide partners with key resources and competitive advantages by filling technology gaps.

Implications for the U.S. Army:

The Chinese People’s Liberation Army (PLA) deployed armored medical vehicles and personnel to Germany for the Combined Aid 2019 Joint Exercise with the Bundeswehr this past summer.

    • Traditional Army threat paradigms may not be sufficient for competition.

    • The Army could be drawn into unanticipated escalation as a result of China’s activities during the competition phase.
    • Army military partnerships will likely be undermined by China in 2035.
    • Army operations and engagements will be increasingly impacted by the pervasiveness of Chinese goods, technology, infrastructure, and systems.

If you enjoyed this post, please see the original paper and associated infographic of the same title, both by the TRADOC G-2’s Operational Environment and Threat Analysis Directorate and hosted on their All Partners Access Network (APAN) site…

… and read the following MadSci Laboratory blog posts:

A View of the Future: 2035-2050

China’s Drive for Innovation Dominance and Quantum Surprise on the Battlefield?, by Elsa Kania

A Closer Look at China’s Strategies for Innovation: Questioning True Intent, by Cindy Hurst

Critical Projection: Insights from China’s Science Fiction, by Lt Col Dave Calder

188. “Tenth Man” — Challenging our Assumptions about the Future Force

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. We offer it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). Today’s post examines a foundational assumption about the Future Force by challenging it, reviewing the associated implications, and identifying potential signals and/or indicators of change. Read on!]

Assumption: The United States will maintain sufficient Defense spending as a percentage of its GDP to modernize the Multi-Domain Operations (MDO) force. [Related MDO Baseline Assumption – “b. The Army will adjust to fiscal constraints and have resources sufficient to preserve the balance of readiness, force structure, and modernization necessary to meet the demands of the national defense strategy in the mid-to far-term (2020-2040),” TRADOC Pam 525-3-1, The U.S. Army in Multi-Domain Operations 2028, p. A-1.]

Source: U.S. Census Bureau

Over the past decades, the defense budget has varied but remained sufficient to accomplish the missions of the U.S. military. However, a graying population with fewer workers and longer life spans will put new demands on the non-discretionary and discretionary federal budget. These stressors on the federal budget may indicate that the U.S. is following the same path as Europe and Japan. By 2038, it is projected that 21% of Americans will be 65 years old or older.1 Budget demand tied to an aging population will threaten planned DoD funding levels.

In the near term (2019-2023), total costs in 2019 dollars are projected to hold steady. In recent years, however, the DoD has underestimated the costs of acquiring weapons systems and maintaining compensation levels; accounting for these factors implies a budget roughly 3% above the FY 2019 DoD budget in this timeframe. Similarly, the Congressional Budget Office (CBO) estimates that costs will climb steadily after 2023, with the base budget projected to reach approximately $735 billion in 2033, an 11% increase over ten years (a back-of-envelope check follows the list below). This growth is due to rising compensation rates, growing costs of operations and maintenance, and the purchasing of new weapons systems.2 These budgetary pressures are connected to several stated and hidden assumptions:

    • An all-volunteer force will remain viable [Related MDO Baseline Assumption – “a. The U.S. Army will remain a professional, all volunteer force, relying on all components of the Army to meet future commitments.”],
    • Materiel solutions’ associated technologies will have matured to the requisite Technology Readiness Levels (TRLs), and
    • The U.S. will have the industrial ability to reconstitute the MDO force following “America’s First Battle.”
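Taking the CBO projection above at face value, a quick back-of-envelope check (using only the figures quoted in this post, in constant 2019 dollars) shows what those two numbers jointly imply:

```latex
% If the ~$735B base budget in 2033 represents an 11% increase over ten
% years, the implied starting base (circa FY 2023) is:
\[
  B_{2023} \approx \frac{B_{2033}}{1.11} = \frac{735}{1.11} \approx 662
  \quad \text{(billions of 2019 dollars)},
\]
% i.e., roughly $73 billion of projected real growth across 2023--2033.
```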

Implications: If these assumptions prove false, the manned and equipped force of the future will look significantly different than the envisioned MDO force. A smaller DoD budget could mean a smaller fielded Army and equipping decisions favoring less exquisite weapons systems. A smaller active force might also drive changes to Multi-Domain Operations and how the Army describes the way it will fight in the future.

Signposts / Indicators of Change:

    • 2008-type “Great Recession”
    • Return of budget control and sequestration
    • Increased domestic funding for:
      • Universal Healthcare
      • Universal College
      • Social Security Fix
    • Change in International Monetary Environment (higher interest rates for borrowing)

If you enjoyed this alternative view on force modernization, please also see the following posts:

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

1 “The long-term impact of aging on the federal budget,” by Louise Sheiner, Brookings, 11 January 2018. https://www.brookings.edu/research/the-long-term-impact-of-aging-on-the-federal-budget/

2 “Long-Term Implications of the 2019 Future Years Defense Program,” Congressional Budget Office, 13 February 2019. https://www.cbo.gov/publication/54948

183. Ethics, Morals, and Legal Implications

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report and addresses how the speed of technological innovation and convergence continues to outpace human governance. The U.S. Army must not only consider how best to employ these advances in modernizing the force, but also the concomitant ethical, moral, and legal implications their use may present in the Operational Environment (see links to the newly published TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, and the complete Mad Scientist Disruption and the Operational Environment Conference Final Report at the bottom of this post).]

Technological advancement and subsequent employment often outpace moral, ethical, and legal standards. Governmental and regulatory bodies are then caught between technological progress and the evolution of social thinking. The Disruption and the Operational Environment Conference uncovered and explored several tension points that may challenge the Army in the future.

Space

CubeSats in LEO / Source: NASA

Space is one of the least explored domains in which the Army will operate; as such, we may encounter a host of associated ethical and legal dilemmas. In the course of warfare, if the Army or an adversary intentionally or inadvertently destroys commercial communications infrastructure – GPS satellites – the ramifications to the economy, transportation, and emergency services would be dire and deadly. The Army will be challenged to consider how and where National Defense measures in space affect non-combatants and American civilians on the ground.

Per proclaimed Mad Scientists Dr. Moriba Jah and Dr. Diane Howard, there are ~500,000 objects orbiting the Earth posing potential hazards to our space-based services. We can currently track less than one percent of them: those the size of a smartphone / softball or larger. / Source: NASA Orbital Debris Office

International governing bodies may have to consider what responsibility space-faring entities – countries, universities, private companies – will have for mitigating orbital congestion caused by excessive launching and the aggressive exploitation of space. If the Army is judicious with its own footprint in space, it could reduce the risk of accidental collisions and unnecessary clutter and congestion. Cleaning up space debris is extremely expensive, and deconflicting active operations is essential. With each entity acting in its own self-interest, with limited binding law or governance and no enforcement, overuse of space could lead to a “tragedy of the commons” effect.1  The Army has the opportunity to more closely align itself with international partners to develop guidelines and protocols for space operations to avoid potential conflicts and to influence and shape future policy. Without this early intervention, the Army may face ethical and moral challenges in the future regarding its addition of orbital objects to an already dangerously cluttered Low Earth Orbit. What will the Army be responsible for in democratized space? Will there be a moral or ethical limit on space launches?

Autonomy in Robotics

AFC’s Future Force Modernization Enterprise of Cross-Functional Teams, Acquisition Programs of Record, and Research and Development centers executed a radio rodeo with Industry throughout June 2019 to inform the Army of the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments. / Source: Army.mil

Robotic systems have become pervasive and normalized in military operations in the post-9/11 Operational Environment. However, the burgeoning field of autonomy in robotics, with the potential to supplant humans in time-critical decision-making, will bring about significant ethical, moral, and legal challenges that the Army, and the larger DoD, are already beginning to confront. This issue will be exacerbated in the Operational Environment by an increased utilization of and reliance on autonomy.

The increasing prevalence of autonomy will raise a number of important questions. At what point is it more ethical to allow a machine to make a decision that may save the lives of either combatants or civilians? Where does fault, responsibility, or attribution lie when an autonomous system takes lives? Will defensive autonomous operations (air defense systems, active protection systems) be more ethically acceptable than offensive ones (airstrikes, fire missions)? Can Artificial Intelligence/Machine Learning (AI/ML) make decisions in line with Army core values?

Deepfakes and AI-Generated Identities, Personas, and Content

Source: U.S. Air Force

A new era of Information Operations (IO) is emerging due to disruptive technologies such as deepfakes – videos constructed to make a person appear to say or do something they never said or did – and AI Generative Adversarial Networks (GANs) that produce fully original faces, bodies, personas, and robust identities.2  Deepfakes and GANs alarm national security experts because they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. This is amplified by news, sports coverage, and creative writing similarly being generated by AI/ML applications.
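For readers curious about the mechanics behind such synthetic content, below is a minimal sketch of the adversarial training loop that gives GANs their name: a generator learns to produce fakes while a discriminator learns to flag them. This is a generic toy example in PyTorch on one-dimensional data, not any system referenced in this post; every layer size and hyperparameter is an illustrative choice.

```python
# A minimal GAN sketch on toy 1-D data. Purely illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: noise -> fake sample. Discriminator: sample -> real/fake logit.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator update: push real toward label 1, fake toward label 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator update: produce fakes the discriminator labels as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster near the real mean (3.0).
print(G(torch.randn(256, 8)).mean().item())
```

The same two-network contest, scaled up from this toy distribution to image data, is what yields photorealistic faces of people who do not exist.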

This new era of IO has many ethical and moral implications for the Army. In the past, the Army has utilized industrial and early information age IO tools such as leaflets, open-air messaging, and cyber influence mechanisms to shape perceptions around the world. Today and moving forward in the Operational Environment, advances in technology create ethical questions such as: is it ethical or legal to use cyber or digital manipulations against populations of both U.S. allies and strategic competitors? Under what title or authority does the use of deepfakes and AI-generated images fall? How will the Army need to supplement existing policy to include technologies that didn’t exist when it was written?

AI in Formations

With the introduction of decision-making AI, the Army will be faced with questions about trust, man-machine relationships, and transparency. Does AI in cyber require the same moral benchmark as lethal decision-making? Does transparency equal ethical AI? What allowance for error in AI is acceptable compared to humans? Where does the Army allow AI to make decisions – only in non-combat or non-lethal situations?

Commanders, stakeholders, and decision-makers will need to develop comfort with and trust in AI entities as part of a true man-machine relationship. The full integration of AI into training and combat exercises provides an opportunity to build trust early in the process, before decision-making becomes critical and life-threatening. AI often includes unintentional or implicit bias in its programming. Is bias-free AI possible? How can bias be checked within the programming? How can bias be managed once it is discovered, and how much will be allowed? Finally, does the bias-checking software itself contain bias? Bias can also be used in a positive way. Through ML – using data from previous exercises, missions, doctrine, and the law of war – the Army could inculcate core values, ethos, and historically successful decision-making into AI.
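As one concrete illustration of what “checking bias within the programming” can mean in practice, the sketch below computes a demographic-parity gap, one of several competing fairness metrics, over a model’s binary decisions. The data, scores, threshold, and group labels are all synthetic and purely illustrative.

```python
# Toy bias check: measure the demographic-parity gap (difference in
# selection rates between two groups) of a model's binary decisions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)      # protected attribute: group 0 or 1
scores = rng.random(1000) + 0.05 * group   # model scores, mildly skewed by group
decisions = scores > 0.5                   # the model's binary decisions

rate0 = decisions[group == 0].mean()       # selection rate, group 0
rate1 = decisions[group == 1].mean()       # selection rate, group 1
gap = abs(rate0 - rate1)                   # demographic-parity gap

print(f"group 0 rate: {rate0:.3f}, group 1 rate: {rate1:.3f}, gap: {gap:.3f}")
```

A gap near zero satisfies only this one definition of fairness; other criteria, such as equalized odds or calibration, can be mutually incompatible with it, which is why “how much bias will be allowed” is as much a policy question as a technical one.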

If existential threats to the United States increase, so does pressure to use artificial and autonomous systems to gain or maintain overmatch and domain superiority. As the Army explores shifting additional authority to AI and autonomous systems, how will it address the second and third order ethical and legal ramifications? How does the Army rectify its traditional values and ethical norms with disruptive technology that rapidly evolves?

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.
    • Ethics and the Future of War panel, facilitated by LTG Dubik (USA-Ret.) at the Mad Scientist Visualizing Multi Domain Battle 2030-2050 Conference, held at Georgetown University on 25-26 July 2017.

Just Published! TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, 7 October 2019, describes the conditions Army forces will face and establishes two distinct timeframes characterizing near-term advantages adversaries may have, as well as breakthroughs in technology and convergences in capabilities in the far term that will change the character of warfare. This pamphlet describes both timeframes in detail, accounting for all aspects across the Diplomatic, Information, Military, and Economic (DIME) spheres to allow Army forces to train to an accurate and realistic Operational Environment.


1 Munoz-Patchen, Chelsea, “Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty,” Chicago Journal of International Law, Vol. 19, No. 1, Art. 7, 1 Aug. 2018. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1741&context=cjil

2 Robitzski, Dan, “Amazing AI Generates Entire Bodies of People Who Don’t Exist,” Futurism.com, 30 Apr. 2019. https://futurism.com/ai-generates-entire-bodies-people-dont-exist

182. “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare (Part 2)

[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). We continue our series of “Tenth Man” posts examining the foundational assumptions of The Operational Environment and the Changing Character of Future Warfare, challenging them, reviewing the associated implications, and identifying potential signals and/or indicators of change. Enjoy!]

Assumption:  The character of warfare will change but the nature of war will remain human-centric.

The character of warfare will change in the future OE as it inexorably has since the advent of flint hand axes; iron blades; stirrups; longbows; gunpowder; breech-loading, rifled, and automatic guns; mechanized armor; precision-guided munitions; and the Internet of Things. Speed, automation, extended ranges, broad and narrow weapons effects, and increasingly integrated multi-domain conduct, in addition to the complexity of the terrain and social structures in which it occurs, will make mid-Twenty-first Century warfare both familiar and utterly alien.

The nature of warfare, however, is assumed to remain human-centric in the future. While humans will increasingly be removed from processes, cycles, and perhaps even decision-making, nearly all content regarding the future OE assumes that humans will remain central to the rationale for war and its most essential elements of execution. The nature of war has remained relatively constant from Thucydides through Clausewitz, and forward to the present. War is still waged because of fear, honor, and interest, and remains an expression of politics by other means. While machines are becoming ever more prevalent across the battlefield – C5ISR, maneuver, and logistics – we cling to the belief that parties will still go to war over human interests; that war will be decided, executed, and controlled by humans.

Implications:  If these assumptions prove false, then the Army’s fundamental understanding of war in the future may be inherently flawed, calling into question established strategies, force structuring, and decision-making models. A changed or changing nature of war brings about a number of implications:

– Humans may not be aware of the outset of war. As algorithmic warfare evolves, might wars be fought unintentionally, with humans not recognizing what has occurred until effects are felt?

– Wars may be fought due to AI-calculated opportunities or threats – economic, political, or even ideological – that are largely imperceptible to human judgement. Imagine that a machine recognizes a strategic opportunity or impetus to engage a nation-state actor that is conventionally (read: humanly) viewed as weak or in a presumed disadvantaged state. The machine launches offensive operations to achieve a favorable outcome or objective that it deemed too advantageous to pass up.

– Infliction of human loss, suffering, and disruption to induce coercion and influence may not be conducive to victory. Victory may be simply a calculated or algorithmic outcome that causes an adversary’s machine to decide their own victory is unattainable.

– The actor (nation-state or otherwise) with the most robust kairosthenic power and/or most talented humans may not achieve victory. Even powers enjoying the greatest materiel advantages could see this once reliable measure of dominion mitigated. Winning may be achieved by the actor with the best algorithms or machines.

These implications in turn raise several questions for the Army:

– How much human talent should the Army recruit, and how should it cultivate that talent, if war is no longer human-centric?

– How should forces be structured – what is the “right” mix of humans to machines if war is no longer human-centric?

– Will current ethical considerations in kinetic operations be weighed more or less heavily if humans are further removed from the equation? And what even constitutes kinetic operations in such a future?

– Should the U.S. military divest from platforms and materiel solutions (hardware) and re-focus on becoming algorithmically and digitally-centric (software)?

– What is the role for the armed forces in such a world? Will competition and armed conflict increasingly fall within the sphere of cyber forces in the Departments of the Treasury, State, and other non-DoD organizations?

– Will warfare become the default condition if fewer humans get hurt?

– Could an adversary (human or machine) trick us (or our machines) into miscalculating our response?

Signposts / Indicators of Change:

– Proliferation of AI use in the OE, with less and less human involvement in autonomous or semi-autonomous systems’ critical functions and decision-making; the development of human-out-of-the-loop systems

– Technology advances to the point of near or actual machine sentience, with commensurate machine speed accelerating the potential for escalated competition and armed conflict beyond transparency and human comprehension.

– Nation-state governments approve the use of lethal autonomy, and this capability is democratized to non-state actors.

– Cyber operations have the same political and economic effects as traditional kinetic warfare, reducing or eliminating the need for physical combat.

– Smaller, less-capable states or actors begin achieving surprising or unexpected victories in warfare.

– Kinetic war becomes less lethal as robots replace human tasks.

– Other departments or agencies stand up quasi-military capabilities, have more active military-liaison organizations, or begin actively engaging in competition and conflict.

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.

… as well as our previous “Tenth Man” blog posts:

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

179. A New Age of Terror: New Mass Casualty Terrorism Threats

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s post by returning guest blogger Zachary Kallenborn, continuing his New Age of Terror series.  The democratization of unmanned air, ground, sea, and subsea systems and the proliferation of cyber-physical systems (e.g., automated plants) provide lesser states, non-state actors, and super-empowered individuals with new capabilities to conduct long-range precision fires and generate global non-kinetic effects resulting in mass casualty events. The potential weaponization of these otherwise benign capabilities pose new vulnerabilities to those who fail to remain vigilant and imagine the unthinkable — beware!]

A loud buzz pierced the quiet night air. A group of drones descended on a chemical plant near New York City and dispersed throughout the installation in search of storage tanks. A few minutes later, the buzz of the drone propellers was drowned out by loud explosions. A surge of fire leapt to the sky. A plume of gas followed, floating towards the nearby city. The gas killed thousands; thousands more were hospitalized with severe injuries.

The rapid proliferation of unmanned systems and cyber-physical systems offers terrorists new, easier means of carrying out mass casualty attacks. Drones allow terrorists to reduce their operational risk and acquire relatively low-cost platforms. Cyber attacks require few resources and could cause significant harm, though a lack of expertise currently limits terrorists’ ability to inflict it. Terrorists may prefer these methods to difficult-to-acquire and risky chemical, biological, radiological, and nuclear (CBRN) weapons.

Drones

Drones offer terrorists low-cost methods of delivering harm with lower risk to the attackers’ lives. Drone attacks can be launched from afar, from a hidden position, or close to an escape route. Simple unmanned systems can be acquired easily: Amazon.com offers seemingly hundreds of drones for as little as $25. Of course, low-cost drones also mean lower payloads, which limit the harm caused, often significantly. Improvements to drone autonomy will allow terrorists to deploy more drones at once, including in true drone swarms.1 Terrorists can mount drone attacks across air, land, and sea.

Aerial drones allow attackers to evade ground-based defenses and could be highly effective in striking airports, chemical facilities, and other critical infrastructure. Houthi rebels in Yemen have repeatedly launched drone strikes on Saudi oil pipelines and refineries.2  Recent drone attacks eliminated half of Saudi oil production capacity.3  Attacks on chemical facilities are likely to be particularly effective: a chemical release would not require large amounts of explosives and could cause massive harm, as in the Bhopal gas accident that killed thousands. Current Department of Homeland Security Chemical Facility Anti-Terrorism Standards do not require any meaningful defenses against aerial attack.4  Alternatively, even small drones can cause major damage to airplane wings or engines, potentially bringing a plane down.5  In December 2018, that risk alone was enough to ground hundreds of flights at Gatwick airport, south of London, when drones were spotted close to the runway.

Self-driving cars also provide a means of mass casualty attack. Waymo, Uber, and several other companies seek to launch a self-driving taxi service, open to the public. Terrorists could request multiple taxis, load them with explosives or remotely operated weapons, and send them out to multiple targets. Alternatively, terrorists could launch multi-stage attacks on the same target: a first strike causes first responders to mass and subsequent attacks hit the responders. In fact, ISIS has reportedly considered this option.6

For a few hundred dollars, anyone can rent a semi-autonomous surface vessel that can carry up to 35 lbs.7  No license or registration is necessary.8  Although a surface attack limits terrorists to maritime targets, the potential for significant harm remains. Terrorists could strike popular tourist sites like the Statue of Liberty or San Francisco’s Fisherman’s Wharf. U.S. military vessels are ideal targets too, as the bombing of the USS Cole in October 2000 demonstrated.9  But drones are not the only new method of attack.

Cyber-physical systems

Like drones, cyber attacks are low cost and reduce operational risks. They can be launched from secure locations, even on the other side of the world. Terrorists also gain high levels of anonymity that will inhibit law enforcement responses.10  Although cyberterrorism requires significant technical know-how, terrorists require few resources other than a computer to carry out an attack.

Cyber attacks could target chemical facilities, airplanes, and other critical infrastructure targets. In 2000, Vitek Boden infiltrated computers controlling the sewage system of Maroochy Shire, Australia, and released hundreds of thousands of gallons of raw sewage into the surrounding area.11  Boden could have caused even more harm if he wished.12  Although Boden’s attack primarily harmed the environment, other attacks could threaten human life. Cyber attacks could disable safety systems at chemical facilities, risking an accidental toxic gas release or explosions. A cyber assault on a Saudi petrochemical facility in August 2017 reportedly had that exact goal.13

However, cyber expertise and specific target knowledge are likely to be significant inhibitors. Although attacks on critical infrastructure may require specialist knowledge of the control system and administrative operations, protective measures are not always implemented, leaving targets vulnerable.14  Boden was successful in large part because he had worked closely with the sewage system’s control systems. Although terrorists have defaced websites and conducted denial of service attacks, known terrorist organizations do not currently possess the capabilities to mount a major destructive cyber attack.15  The availability of the necessary human capital is a strong factor in whether terrorists pursue cyber attacks.16  Nonetheless, the risk is likely to grow as terrorists develop greater cyber capabilities, increased connectivity creates new opportunities for attack, and the black market for cybercrime tools grows.17

The Future Operational Environment

Hot-zone team members from Hawaii’s Chemical, Biological, Radiological, Nuclear, and High-Yield Explosive, Enhanced-Response-Force-Package Team (CERFP) process simulated casualties through a decontamination zone during an exercise this spring. /  Source: U.S. Air National Guard photo by Senior Airman John Linzmeier

If terrorists have new avenues of mass casualty attack, U.S. forces must devote more resources to force protection and emergency response. U.S. forces may be called upon to aid local, state, and federal emergency responders in the event of a mass casualty attack. Likewise, U.S. troops may face risks themselves: cyber and drone attacks could certainly target U.S. military installations. Even attacks that do not kill can cause significant harm: disrupting airport operations, as in the 2018 Gatwick drone incident, may delay troop resupply, troop deployment, or close air support to Soldiers in the field. The U.S. military and the broader national security community must rethink their approach to mass casualty terrorism to respond to these threats. Terrorist groups have typically required CBRN weapons to cause mass harm. But if you can kill thousands in a drone attack, why bother with risky, difficult-to-acquire CBRN weapons?

For more information on this threat trend, see Non-State Actors and Their Uses of Emerging Technology, presented by Dr. Gary Ackerman, National Consortium for the Study of Terrorism and Responses to Terrorism, University of Maryland, at the Mad Scientist Robotics, Artificial Intelligence & Autonomy Conference at the Georgia Tech Research Institute, Atlanta, Georgia, 7-8 March 2017…

… as well as the following related Mad Scientist Laboratory posts:

– Zachary Kallenborn‘s previous post, A New Age of Terror: The Future of CBRN Terrorism.

– Marie Murphy‘s post, Trouble in Paradise: The Technological Upheaval of Modern Political and Economic Systems

The Democratization of Dual Use Technology

Autonomy Threat Trends

The Future of the Cyber Domain

Emergent Threat Posed by Super-Empowered Individuals

… and crank up Love and Terror by The Cinematics!

Zachary Kallenborn is a freelance researcher and analyst, specializing in Chemical, Biological, Radiological, and Nuclear (CBRN) weapons, CBRN terrorism, drone swarms, and emerging technologies writ large. His research has appeared in the Nonproliferation Review, Studies in Conflict and Terrorism, Defense One, the Modern War Institute at West Point, and other outlets. His most recent study, Swarming Destruction: Drone Swarms and CBRN Weapons, examines the threats and opportunities of drone swarms for the full scope of CBRN weapons.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).


1 Amy Hocraffer and Chang S. Nam, “A Meta-analysis of Human–System Interfaces in Unmanned Aerial Vehicle (UAV) Swarm Management,” Applied Ergonomics, Vol. 58 (2017), pp. 66–80, http://www.researchgate.net/profile/Chang_Nam5/publication/303782432_A_meta-analysis_of_human-system_interfaces_in_unmanned_aerial_vehicle_UAV_swarm_management/links/5767f71f08ae1658e2f8b435.pdf

2 Natasha Turak, “Oil Prices Jump as Saudi Energy Minister Reports Drone ‘Terrorism’ Against Pipeline Infrastructure,” CNBC, May 14, 2019, https://www.cnbc.com/2019/05/14/oil-jumps-as-saudi-energy-minister-reports-drone-terrorism-against-pipeline.html

3 John Defterios and Victoria Cavaliere, “Coordinated Strikes Knock Out Half of Saudi Oil Capacity, More Than 5 Million Barrels a Day,” CNN, September 15, 2019, https://www.cnn.com/2019/09/14/business/saudi-oil-output-impacted-drone-attack/index.html

4 Department of Homeland Security, “Risk-Based Performance Standards Guidance: Chemical Facility Anti-Terrorism Standards,” May 2009, 15, 85.

5 Peter Dockrill, “Here’s What it Looks Like When a Drone Smashes into a Plane Wing at 238 MPH,” ScienceAlert, October 22, 2018, https://www.sciencealert.com/this-is-what-it-looks-like-drone-smashes-into-plane-s-wing-238-mph-mid-air-collision-aircraft-impact

6 Lia Eustachewich, “Terrorist Wannabes Plotted Self-Driving Car Bomb Attack: Authorities,” New York Post, September 4, 2018, https://nypost.com/2018/09/04/terrorist-wannabes-plotted-self-driving-car-bomb-attack-authorities/

7 AllTerra, “AllTerra Rental Rates,” May 3, 2019, https://allterracentral.com/pub/media/wysiwyg/AllTerra_Rental_Rates-5.3.19.pdf

8 Phone conversation with USV retailer.

9 CNN Library, “USS Cole Bombing Fast Facts,” CNN, March 27, 2019, https://www.cnn.com/2013/09/18/world/meast/uss-cole-bombing-fast-facts/index.html

10 Steve S. Sin, Laura A. Blackerby, Elvis Asiamah, and Rhyner Washburn, “Determining Extremist Organisations’ Likelihood of Conducting Cyber Attacks,” 2016 8th International Conference on Cyber Conflict, May 31 to June 3, 2016, http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=7529428&tag=1

11 Marshall Abrams and Joe Weiss, “Malicious Control System Cyber Security Attack Case Study – Maroochy Water Services, Australia,” MITRE, July 23, 2008, https://www.mitre.org/sites/default/files/pdf/08_1145.pdf

12 Nabil Sayfayn and Stuart Madnick, “Cybersafety Analysis of the Maroochy Shire Sewage Spill (Preliminary Draft),” Cybersecurity Interdisciplinary Systems Laboratory, May 2017, http://web.mit.edu/smadnick/www/wp/2017-09.pdf

13 Nicole Perlroth and Clifford Krauss, “A Cyberattack in Saudi Arabia had a Deadly Goal. Experts Fear Another Try,” New York Times, March 15, 2018, https://www.nytimes.com/2018/03/15/technology/saudi-arabia-hacks-cyberattacks.html

14 Noguchi Mutsuo and Ueda Hirofumi, “An Analysis of the Actual Status of Recent Cyberattacks on Critical Infrastructure,” NEC Technical Journal, Vol. 12, No. 2, January 2018, https://www.nec.com/en/global/techrep/journal/g17/n02/pdf/170204.pdf

15 Tamara Evan, Eireann Leverett, Simon Ruffle, Andrew Coburn, James Bourdeau, Rohan Gunaratna, and Daniel Ralph, “Cyber Terrorism: Assessment of the Threat to Insurance,” Cambridge Centre for Risk Studies – Cyber Terrorism Insurance Futures 2017, November 2017, https://www.jbs.cam.ac.uk/fileadmin/user_upload/research/centres/risk/downloads/pool-re-cyber-terrorism.pdf

16 Steve S. Sin, et al, “Determining Extremist Organisations’ Likelihood of Conducting Cyber Attacks.”

17 Lillian Ablon, Martin C. Libicki, and Andrea A. Golay, “Markets for Cybercrime Tools and Stolen Data: Hacker’s Bazaar,” RAND, 2014, https://www.rand.org/content/dam/rand/pubs/research_reports/RR600/RR610/RAND_RR610.pdf

175. “I Know the Sound it Makes When It Lies”: AI-Powered Tech to Improve Engagement in the Human Domain

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s post by guest bloggers LTC Arnel P. David, LTC (Ret) Patrick James Christian, PhD, and Dr. Aleksandra Nesic, who use storytelling to illustrate how the convergence of Artificial Intelligence (AI), cloud computing, big data, augmented and enhanced reality, and deception detection algorithms could complement decision-making in future specialized engagements.  Enjoy this first in a series of three posts exploring how game changing tech will enhance operations in the Human Domain!]

RAF A400 Atlas / Source:  Flickr, UK MoD, by Andrew Linnett

It is 2028. Lt Col Archie Burton steps off the British A400M Atlas onto the hardpan desert runway of Banku Airfield, Nigeria. This is his third visit to Nigeria, but this time he is the commander of the Engagement Operations Group – Bravo (EOG-B). This group of bespoke, specialized capabilities is the British Army’s agile and highly-trained force for specialized engagement. It operates amongst the people and builds indigenous mass with host nation security forces. Members of this outfit operate in civilian clothes and speak multiple languages, with academic degrees ranging from anthropology to computational science.

Source:  Flickr, Com Salud

Archie donned his Viz glasses on the drive to a meeting with the local leadership of the town of Banku. Speaking to his AI assistant, “Jarvis,” Archie cycled through past engagement data to prep for the meeting and learn the latest about the town and its leaders. Jarvis is connected to a cloud-computing environment, referred to as “HDM” for “Human Domain Matrix,” where scientifically collected and curated population data is stored, maintained, and integrated with a host of applications to support operations in the human domain in both training and deployed settings.

Several private organizations that utilize integrated interdisciplinary social science have helped NATO, the U.K. MoD, and the U.S. DoD develop CGI-enabled virtual reality experiences to accelerate learning for operators who work in challenging conflict settings laden with complex psycho-social and emotional dynamics that drive the behaviour and interactions of the populations on the ground. Together with NGOs and civil society groups, they collected ethnographic data and combined it with phenomenological qualitative inquiry using psychology and sociology to curate anthropological stories that reflect specific cultural audiences.

EOG-Bravo’s mission letter from Field Army Headquarters states that they must leverage the extensive and complex human network dynamic to aid in the recovery of 11 females kidnapped by the Islamic Revolutionary Brotherhood (IRB) terrorist group. Two of the females are British citizens, who were supporting a humanitarian mission with the ‘Save the Kids’ NGO prior to being abducted.

At the meeting in Banku, the mayor, the police chief, and a representative from Save the Kids were present. Archie was welcomed with handshakes and hugs by the police chief, a former student at Sandhurst who knows Archie from past deployments. The discussion leaped immediately into the kidnapping situation.

“The girls were last seen transiting a jungle area north of Oyero. Our organization is in contact by email with one of the IRB facilitators. He is asking for £2 million and we are ready to make that payment,” said Simon Moore of Save the Kids.

Archie’s Viz glasses scanned the facial expressions of those present and Jarvis cautioned him regarding the behaviour of the police chief whose micro facial expressions and eyes revealed a biological response of excitement at the mention of the £2M.

Archie asked, “Chief Adesola, what do you think? Should we facilitate payment?”

“Hmmm, I’m not sure. We don’t know what the IRB will do. We should definitely consider it though,” said Police Chief Adesola.

The Viz glasses continued to feed the facial expressions into HDM, where the recurrent AI neural network recognition algorithm, HOMINID-AI, detected a lie. The AI system and human analysts at the Land Information Manoeuvre Centre (LIMOC) back in the U.K. estimated with a high level of confidence that Chief Adesola was lying.
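HOMINID-AI is fictional, but the underlying technique this vignette invokes, classifying a time series of per-frame facial-expression features with a recurrent network, is real and can be sketched in a few lines. Below is a toy, untrained example in PyTorch; every name, dimension, and feature choice is a hypothetical stand-in.

```python
# Toy sketch of a recurrent classifier over facial-expression sequences.
import torch
import torch.nn as nn

class ExpressionSequenceClassifier(nn.Module):
    def __init__(self, n_features: int = 17, hidden: int = 64):
        super().__init__()
        # n_features could be per-frame facial action-unit intensities.
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # single logit: deceptive vs. truthful

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, features); classify from the final hidden state.
        _, last_hidden = self.gru(frames)
        return self.head(last_hidden[-1])

model = ExpressionSequenceClassifier()
clip = torch.randn(1, 120, 17)            # 120 frames of dummy features
p = torch.sigmoid(model(clip)).item()     # untrained, so p is near chance
print(f"p(deceptive) = {p:.2f}")
```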

At the LIMOC, a 24-hour operation under 77th Brigade, Sgt Richards determines that the Police Chief warrants surveillance by EOG-Alpha, Archie’s sister battlegroup. EOG-Alpha informs local teams in Lagos to deploy unmanned ground sensors and collection assets to monitor the police chief.

Small teams of 3-4 soldiers depart from Lagos in the middle of the night to link up with host nation counterparts. Together, the team of operators and Nigerian national-level security forces deploy sensors to monitor the police chief’s movements and conversations around his office and home.

The next morning, Chief Adesola is picked up by a sensor meeting with an unknown associate. The sensor scans this associate, and the LIMOC processes an immediate hit: he is a leader of the IRB, number three in their chain of command. EOG-A’s operational element is alerted and ordered to work with local security forces to detain this terrorist leader. Intelligence collected from him and the Chief will hopefully lead them to the missing females…

If you enjoyed this post, stay tuned for Part 2 on the Human Domain Matrix, Part 3 on Emotional Warfare in Yemen, and check out the following links to other works by today’s blog post authors:

Operationalizing the Science of the Human Domain by Aleks Nesic and Arnel P. David

A Psycho-Emotional Human Security Analytical Framework by Patrick J. Christian, Aleksandra Nesic, David Sniffen, Tasneem Aljehani, Khaled Al Sumairi, Narayan B. Khadka, Basimah Hallawy, and Binamin Konlan

Military Strategy in the 21st Century:  People, Connectivity, and Competition by Charles T. Cleveland, Benjamin Jensen, Susan Bryant, and Arnel P. David

… and see the following MadSci Lab blog posts on how AI can augment our Leaders’ decision-making on the battlefield:

Takeaways Learned about the Future of the AI Battlefield

The Guy Behind the Guy: AI as the Indispensable Marshal, by Mr. Brady Moore and Mr. Chris Sauceda

LTC Arnel P. David is an Army Strategist serving in the United Kingdom as the U.S. Special Assistant for the Chief of the General Staff. He recently completed an Artificial Intelligence Program from the Saïd Business School at the University of Oxford.

LTC (Ret) Patrick James Christian, PhD is co-founder of Valka-Mir and a Psychoanalytical Anthropologist focused on the psychopathology of violent ethnic and cultural conflict. He is a retired Special Forces officer who serves as a social scientist for the Psychological Operations Task Forces in the Arabian Peninsula and Afghanistan, where he constructs psychological profiles of designated target audiences.

Aleksandra Nesic, PhD is co-founder of Valka-Mir and Visiting Faculty for the Countering Violent Extremism and Countering Terrorism Fellowship Program at the Joint Special Operations University (JSOU), USSOCOM. She is also Visiting Faculty, U.S. Army JFK Special Warfare Center and School, and a Co-Founder and Senior Researcher of Complex Communal Conflicts at Valka-Mir Human Security, LLC.

Acknowledgements:  Special Thanks to the British Army Future Force Development Team for their help in creating the British characters depicted in this first story.

Disclaimer:  The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

174. A New Age of Terror: The Future of CBRN Terrorism

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s post by guest blogger Zachary Kallenborn.  In the first of a series of posts, Mr. Kallenborn addresses how the convergence of emerging technologies is eroding barriers to terrorist organizations acquiring the requisite equipment, materiel, and expertise to develop and deliver chemical, biological, radiological, and nuclear (CBRN) agents in an attack.  Learn about the challenges that (thankfully) remain and the ramifications for the operational environment.  (Note:  Some of the embedded links in this post are best accessed using non-DoD networks.)]

Unidentified drones spotted over the Thayer Monument at West Point.

On the evening of July 15, 2034, 264 West Point cadets reported to the hospital with a severe but unknown illness. West Point Military Police (MP) investigated the incident and discovered video footage of two men launching several autonomous drones from a pickup truck near the base, then driving off. A suspicious fire the same night at a local apartment complex revealed remnants of 3D printers and synthetic biology kits. The investigation remains ongoing…

Such a scenario is fantasy, but increasingly plausible.

Various emerging technologies reduce the barriers to chemical, biological, radiological, and nuclear (CBRN) terrorism — bioterrorism in particular. The convergence of these technologies may allow terrorists to acquire CBRN weapons with minimal identifiable signatures. Although these technologies exist today, their sophistication, availability, and terrorist interest in their use are likely to grow over the coming decades. For example, the first powered model airplane was flown in 1937; however, terrorists did not attempt to use drones until 1994.1  Thankfully, major challenges will still inhibit truly catastrophic CBRN terror.

Acquisition

Kasumigaseki Station, one of the many stations affected during the Tokyo subway sarin attack by Aum Shinrikyo / Source:  Wikimedia Commons

CBRN weapon acquisition is a difficult task for terrorist organizations. Terrorists must acquire significant specialized equipment, materiel, and expertise, along with the organizational capabilities to support the acquisition of such weapons and a physical location to assemble them. Even supposed successes like Aum Shinrikyo’s attack on the Tokyo subway were not nearly as impactful as they could have been. Aum’s biological weapons program was also a notable failure. In one instance, a member of the cult fell into a vat of Clostridium botulinum (the bacterium that produces botulinum toxin) and emerged unharmed.2  As a result, only 1-2% of terrorist organizations pursue or use CBRN weapons.3  But these barriers are eroding.

3D printing may ease the acquisition of some equipment and materiel. 3D printers can be used to create equipment components at reduced cost and have been used to create bioreactors, microscopes, and other key elements.4  Bioprinters can also create tissue samples to test weapons agents.5  The digital build-files for 3D printed items can also be sent and received online, perhaps from black market sellers or individuals sympathetic to the terrorists’ ideology.6

Synthetic biology offers improved access to biological weapons agents, especially otherwise highly controlled agents. Synthetic biology can be used to create new organisms or modify existing ones.7 According to the World Health Organization, synthetic biology techniques could plausibly allow recreation of the variola virus (smallpox).8  That is especially significant because the virus only exists in two highly secure laboratories.9

Delivery

Delivery of a CBRN agent can also be a challenge. CBRN agents useful for mass casualty attacks rely on the air to carry the agent to an adversary (nuclear weapons are an obvious exception, but the likelihood of a terrorist organization acquiring a nuclear weapon is extremely low). Poor wind conditions, physical barriers, rain, and other environmental conditions can inhibit delivery. Biological weapons also require spray systems that can create droplets of an appropriate size, so that the agent is light enough to float in the air, but heavy enough to enter the lungs (approximately 1-10 microns).

Drones also make CBRN agent delivery easier. Drones offer terrorists access to the air. Terrorists can use them to fly over physical barriers, such as fencing or walls to carry out an attack. Drones also give terrorists more control over where they launch an attack: they can choose a well-defended position or one proximate to an escape route. Although small drone payload sizes limit the amount of agent that can be delivered, terrorists can acquire multiple drones.

Advances in drone autonomy allow terrorists to control more drones at once.10  Autonomy also allows terrorists to launch more complex attacks, perhaps directing autonomous drones to multiple targets or along a path through multiple well-populated areas. Greater autonomy also reduces the risks to the terrorists, because they can flee the area more readily.

3D printing can also help with CBRN agent delivery. Spray tanks and nozzles subject to export controls can be 3D printed,11 as can drone components.12  The customizability of 3D printed parts also makes it easier to adapt these systems for CBRN agent delivery.

Remaining Challenges

CBRN weapons acquisition also requires significant technical expertise. Terrorist organizations must correctly perform complex scientific procedures, know which procedures to use, know which equipment and materials are needed, and operate the equipment. They must do all of that without harming themselves or others (harming innocents may not seem like a concern for an organization intent on mass harm; however, it would risk exposure of the larger plot.) Much of this knowledge is tacit, meaning that it is based on experience and cannot be easily transferred to other individuals.

Emerging technologies do not drastically reduce this barrier, though experts disagree. For example, genome synthesis requires significant tacit knowledge that terrorists cannot easily acquire without relevant experience.13  Likewise, 3D printers are unlikely to spit out a completely assembled piece of equipment; rather, they produce parts that must be assembled into a final product. However, some experts argue that as technologies become more ubiquitous, they will be commercialized and made easier to use.14  While this technology is likely to become more accessible, physical limitations will place an upper bound on how accessible it can become.

The Future Operational Environment

If CBRN terrorism is becoming easier, U.S. forces can expect to be at greater risk of CBRN attack and to face such attacks more frequently. An attack with infectious biological weapons from afar would likely not be discovered until well after the attack took place. Although still quite unlikely, a major biological attack could cause massive harm. Timed correctly, a CBRN terror attack could delay deployment of troops to a combat zone, inhibit launch of close-air support assets, or harm morale by delaying delivery of delicious pizza MREs.15  Off the battlefield, troops may have less access to protective gear and be at greater risk of harm. Even a poorly made agent can harm military operations: quarantines must still be established and operations limited until the risk is neutralized or at least determined to be non-harmful.

However, counter-intuitively, terrorist demand for CBRN weapons may actually decrease, because emerging technologies also offer easier pathways to mass casualties. These risks will be explored in the next article in this series.

If you enjoyed this post, please read:

The Democratization of Dual Use Technology

Dead Deer, and Mad Cows, and Humans (?) … Oh My! by proclaimed Mad Scientists LtCol Jennifer Snow and Dr. James Giordano, and returning guest blogger Joseph DeFranco

– Mad Scientist Bio Convergence and Soldier 2050 Conference blog post and Final Report

Emergent Threat Posed by Super-Empowered Individuals

Zachary Kallenborn is a freelance researcher and analyst, specializing in Chemical, Biological, Radiological, and Nuclear (CBRN) weapons, CBRN terrorism, drone swarms, and emerging technologies writ large. His research has appeared in the Nonproliferation Review, Studies in Conflict and Terrorism, Defense One, the Modern War Institute at West Point, and other outlets. His most recent study, Swarming Destruction: Drone Swarms and CBRN Weapons, examines the threats and opportunities of drone swarms for the full scope of CBRN weapons.

Disclaimer:  The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).


1 Walter A. Good, “The AMA History Project Presents Autobiography of Dr. Walter (Walt) A. Good,” Academy of Model Aeronautics, August 2009, https://www.modelaircraft.org/sites/default/files/files/GoodDrWalterAWalt.pdf; Robert J. Bunker, “Terrorist and Insurgent Unmanned Aerial Vehicles: Use, Potentials, and Military Implications,” United States Army War College Press, August 2015.

2 Richard Danzig et al., Aum Shinrikyo: Insights Into How Terrorists Develop Biological and Chemical Weapons, 2nd ed. (December 2012), https://s3.amazonaws.com/files.cnas.org/documents/CNAS_AumShinrikyo_SecondEdition_English.pdf (accessed 6 June 2017).

3 Gary Ackerman, Jonathan Wilkenfeld, R. Karl Rethemeyer, and Victor Asal, “Terrorist Groups and Weapons of Mass Destruction,” National Consortium for the Study of Terrorism and Responses to Terrorism, (START), https://www.start.umd.edu/research-projects/terrorist-groups-and-weapons-mass-destruction

4 Clare Scott, “Experiment Tests the Suitability of 3D Printing Materials for Creating Lab Equipment,” 3DPrint.com, August 3, 2018, https://3dprint.com/221403/3d-printing-materials-lab/

5 Kolja Brockmann, “Advances in 3D Printing Technology: Increasing Biological Weapons Proliferation Risks?” Stockholm International Peace Research Institute (SIPRI), July 29, 2019, https://www.sipri.org/commentary/blog/2019/advances-3d-printing-technology-increasing-biological-weapon-proliferation-risks

6 Natasha Bajema, “3D Printing: Enabler of Mass Destruction,” Medium, October 20, 2018, https://medium.com/@natashabajema/3d-printing-enabler-of-mass-destruction-74d2a684a13

7 Committee on Strategies for Identifying and Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology, “Biodefense in the Age of Synthetic Biology” (Washington, DC: The National Academies Press, 2018), 9.

8 “The Independent Advisory Group on Public Health Implications of Synthetic Biology Technology Related to Smallpox,” World Health Organization, June 29-30, 2015, available at https://www.who.int/csr/resources/publications/smallpox/synthetic-biology-technology-smallpox/en/

9 “Smallpox,” National Institute of Allergy and Infectious Diseases, available at www.niaid.nih.gov/diseases-conditions/smallpox

10 Amy Hocraffer and Chang S. Nam, “A Meta-analysis of Human–System Interfaces in Unmanned Aerial Vehicle (UAV) Swarm Management,” Applied Ergonomics, Vol. 58 (2017), pp. 66–80, http://www.researchgate.net/profile/Chang_Nam5/publication/303782432_A_meta-analysis_of_human-system_interfaces_in_unmanned_aerial_vehicle_UAV_swarm_management/links/5767f71f08ae1658e2f8b435.pdf

11 Kolja Brockmann, “Advances in 3D Printing Technology: Increasing Biological Weapons Proliferation Risks?” Stockholm International Peace Research Institute (SIPRI), July 29, 2019, https://www.sipri.org/commentary/blog/2019/advances-3d-printing-technology-increasing-biological-weapon-proliferation-risks

12 Franklin Houser, “3D Printed Drone Parts – All You Need to Know in 2019,” All3DP, February 12, 2019, https://all3dp.com/3d-print-drone-parts/

13 Kathleen M. Vogel, “Framing Biosecurity: An Alternative to the Biotech Revolution Model?,” Science and Public Policy, Vol. 35 No. 1, 2008.

14 Jonathan B. Tucker, “Could Terrorists Exploit Synthetic Biology?” The New Atlantis, Spring 2011, https://www.thenewatlantis.com/publications/could-terrorists-exploit-synthetic-biology#_ftn8

15 Steve1989MREInfo, “2018 MRE Pepperoni Pizza MRE Review Meal Ready to Eat Ration Taste Testing,” YouTube, July 28, 2018, https://www.youtube.com/watch?v=u_sY-nJ179U

140. A Closer Look at China’s Strategies for Innovation: Questioning True Intent

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s guest blog post by Ms. Cindy Hurst, addressing China’s continued drive for dominance in innovative technologies.  The ethical asymmetry between China’s benign, altruistic publicly stated policies and its whole-of-government commitment to modernization and the development of disruptive technologies will remain a key component of multi-domain competition.]

One of China’s most important initiatives is to become an innovative society — but at what cost? In February, the Center for a New American Security published a paper entitled Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security. Its author, Gregory Allen, explains that the Chinese government sees Artificial Intelligence (AI) as a “high strategic priority” and is therefore devoting resources “to cultivate AI expertise and strategic thinking among its national security community.” He further urges careful tracking of China’s progress in AI.

Indeed, it would behoove the West to stay abreast of what China is doing in AI, and not just militarily but in all areas, since civilian and military applications clearly overlap. According to countless official statements, publications, and strategic plans, such as the 13th Five-Year National Science and Technology Innovation Plan, China has placed great emphasis on developing AI, along with other cutting-edge technologies, which it views as “majorly influential disruptive technologies” capable of altering “the structure of science and technology, the economy, society, and the ecology, to win a competitive advantage in the new round of industry transformation.” 1

“Know your enemy and know yourself and in 100 battles you will not be in peril” is one of the key principles of Sun Tzu. The compelling reasons behind China’s goal of becoming a strong global force can easily be understood through its history and the ancient strategies still studied today. The Middle Kingdom was once a seafaring power that contributed world-class innovation at different points over its 5,000-year history. More recently, during the 19th and 20th centuries, China endured what it refers to as the “century of humiliation” — a period in which it was carved up by Western forces during the Opium Wars and then pummeled by Japanese forces in the 1930s.

After the Communist Party’s defeat of the Kuomintang, who retreated to Taiwan, Communist Party Chairman Mao Zedong proclaimed the establishment of the People’s Republic of China in 1949. Since then, the country has vowed never again to be vulnerable to outside forces. It would press forward, making its own path, suffering bumps and bruises along the way. However, it was the United States’ crushing defeat of Iraqi forces during the Persian Gulf War in 1991 that served as the real wakeup call that China lagged far behind Western forces in military capabilities. Since then, generals at the Academy of Military Science in Beijing and others have studied every aspect of the U.S. revolution in military affairs, including advances in microprocessors, sensors, communications, and Joint operations.2

In its efforts to make headway in technology, China has been accused of stealing massive amounts of foreign intellectual property over the past few decades. Its methods have included acquiring and reverse engineering technology, participating in joint ventures to share research and development, spying, and hacking into government and corporate computer systems. According to a report by CNBC, one in five North American-based corporations on the CNBC Global CFO Council claimed that Chinese companies had stolen their intellectual property within the last year.3 Such thefts and acquisitions make it easier for China to catch up on technology at low cost: while the United States spends billions of dollars on research and development, China benefits without having to expend similar amounts of capital.

Artificial intelligence, quantum information, and the Internet of Things are three examples of disruptive technologies that are shaping the future and in which China aspires to one day have a large or controlling stake. In his speech delivered at the 19th National Congress of the Communist Party of China in October 2017, President Xi Jinping stated that “innovation is the primary driving force behind development” and “it is the strategic underpinning for building a modernized economy.”4

However, while Xi and other Chinese officials outwardly push for international cooperation in AI technology, their efforts and methods have raised concern among some analysts. China openly promotes international cooperation in research and development, but one might consider possible alternative intentions behind that push. For example, Allen explains that Fu Ying, Vice-Chair of the Foreign Affairs Committee of the National People’s Congress, had stated that “we should cooperate to preemptively prevent the threat of AI.” Fu further said that China was interested in “playing a leading role in creating norms to mitigate” the risks. A PLA think-tank scholar reportedly expressed support for “mechanisms that are similar to arms control.”5 How sincere are the Chinese in this sentiment? Should China join forces with foreign states to devise control mechanisms, would it abide by them, or act in secret, continuing its forward momentum to gain the edge? After all, if China and the United States, for example, ended up on an even playing field, it would run counter to China’s objectives, if one subscribes to the concept outlined by Michael Pillsbury in his book, The Hundred-Year Marathon: China’s Secret Strategy to Replace America as the Global Superpower.

While China’s spoken objectives might be sincere, it is prudent to continually review a few of the ancient strategies and stratagems developed during the Warring States period, which are still studied and applied in China today. Some examples include:

1. Cross the sea without the emperor’s knowledge: Hide your true intentions by using the ruse of fake intentions… until you achieve your real intentions.

2. Kill with a borrowed sword: Use the enemy’s strength against them or the strength of another to conquer your enemy.

3. Hide a dagger behind a smile: Charm and ingratiate your enemy until you have gained his trust… and then move against him in secret.

In his article, Allen cites a recent Artificial Intelligence Security White Paper, written by “an influential Chinese government think tank,” calling upon China’s government to “avoid Artificial Intelligence arms races among countries” and adding that China will “deepen international cooperation on AI laws and regulations, international rules, and so on…” However, as Allen points out, “China’s behavior of aggressively developing, utilizing, and exporting increasingly autonomous robotic weapons and surveillance AI technology runs counter to the country’s stated goals of avoiding an AI arms race.” China may have good intentions, but its opaque nature breeds skepticism.

Another interesting point Allen touches upon in his article is the effect of disruptive technologies on societies. According to a Chinese think tank scholar, “China believes that the United States is likely to spend too much to maintain and upgrade mature systems and underinvest in disruptive new systems that make America’s existing sources of advantage vulnerable and obsolete…” When considering the Chinese stratagem “Sacrifice the plum tree to preserve the peach tree,” it is easy to argue that China will not be easily swayed from developing disruptive technologies, despite possible repercussions and damaging effects. The development of autonomous systems, for example, brings unemployment and a steep learning curve. But it is inherent in Chinese culture to sacrifice short-term objectives in order to obtain long-term goals, and absorbing these initial, short-term repercussions is seen as necessary for China to achieve some of its long-term production goals. Allen explains, “modernization is a top priority, and there is a general understanding that many of its current platforms and approaches are obsolete and must be replaced regardless.”

Particularly intriguing in Allen’s article is his discussion of SenseTime, a “world leader in computer vision AI.” The author states that “China’s government and leadership is enthusiastic about using AI for surveillance.” He goes on to say that one Chinese scholar told him that he “looks forward to a world in AI” in which it will be “impossible to commit a crime without being caught.” While this may seem like an ideal scenario as long as the technology is in the hands of a level-headed and fair law enforcement agency, in the hands of an authoritarian dictatorship such a technology could prove disastrous for private citizens. Government control and scare tactics could further suppress citizens’ basic rights and freedoms.

In conclusion, while China openly promotes its modernization efforts as a win-win, peaceful development strategy, a careful study of Chinese strategies that have endured for millennia may point to a different scenario, bringing skepticism into the equation. It would be easy to fall prey to an ideology that preaches peace, mutual development, and mutual respect. However, it is important to ask two questions: “Is this real?” and “What, if anything, are their ulterior motives?”

If you enjoyed this post, please see:

China’s Drive for Innovation Dominance

Quantum Surprise on the Battlefield?

Cindy Hurst is a research analyst under contract for the Foreign Military Studies Office, Fort Leavenworth, Kansas. Her focus has been primarily on China, with a recent emphasis on research and development, China’s global expansion efforts, and Chinese military strategy. She has published nearly three dozen major papers and countless articles in a variety of journals, magazines, and online venues.

Disclaimer:  The views expressed in this article are Ms. Hurst’s alone and do not imply endorsement by the U.S. Army Training and Doctrine Command, the U.S. Army, the Department of Defense, or the U.S. Government.  This piece is meant to be thought-provoking and does not reflect the current position of the U.S. Army.


1 “Notice of the State Council Regarding the Issuance of the 13th Five-Year National Science and Technology Innovation Plan,” State Council Issuance (2016) No. 43, 28 March 2017, http://www.gov.cn/zhengce/content/2016-08/08/content_5098072.htm.

2 “Neither War Nor Peace,” The Economist, 25 January 2018, https://www.economist.com/special-report/2018/01/25/neither-war-nor-peace.

3 Eric Rosenbaum, “1 in 5 Corporations Say China Has Stolen Their IP within the Last Year: CNBC CFO Survey,” CNBC, 1 March 2019, https://www.cnbc.com/2019/02/28/1-in-5-companies-say-china-stole-their-ip-within-the-last-year-cnbc.html.

4 Xi Jinping, “Secure a Decisive Victory in Building a Moderately Prosperous Society in All Respects and Strive for the Great Success of Socialism with Chinese Characteristics for a New Era,” transcript of speech delivered at the 19th National Congress of the Communist Party of China, 18 October 2017.

5 Gregory Allen, “Understanding China’s AI Strategy,” Center for a New American Security, 6 February 2019, https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy.

138. “The Monolith”

The Monolith set from the dawn of man sequence, 2001: A Space Odyssey, Metro-Goldwyn-Mayer (1968) / Source: Wikimedia Commons

[Editor’s Note: Mad Scientist Laboratory is pleased to introduce a new, quarterly feature, entitled “The Monolith.” Arthur C. Clarke and Stanley Kubrick fans alike will recognize and appreciate our allusion to the alien artifact responsible for “uplifting” mankind from primitive, defenseless hominids into tool using killers — destined for the stars — from their respective short story, “The Sentinel,” and movie, “2001: A Space Odyssey.” We hope that you will similarly benefit from this post (although perhaps in not quite so evolutionary a manner!), reflecting the Mad Scientist Teams’ collective book and movie recommendations — Enjoy!]

Originally published by PublicAffairs on 5 October 2017

The Future of War by Sir Lawrence Freedman. The evolution of warfare has taken some turns that were quite unexpected and were heavily influenced by disruptive technologies of the day. Sir Lawrence examines the changing character of warfare over the last several centuries, how it has been influenced by society and technology, the ways in which science fiction got it wrong and right, and how it might take shape in the future. This overarching look at warfare causes one to pause and consider whether we may be asking the right questions about future warfare.

 

Royal Scots Guardsmen engaging the enemy with a Lewis Machine Gun / Source:  Flickr

They Shall Not Grow Old directed by Sir Peter Jackson. This lauded 2018 documentary utilizes original film footage from World War I (much of it unseen for the past century) that has been digitized, colorized, upscaled, and overlaid with audio recordings from British servicemen who fought in the war. The divide between civilians untouched by the war and service members, the destructive impact of new disruptive technologies, and the change they wrought on the character of war resonate to this day and provide an excellent historical analogy from which to explore future warfare.

Gene Simmons plays a nefarious super empowered individual in Runaway

Runaway directed by Michael Crichton. This film, released in 1984, is set in the near future, where a police officer (Tom Selleck) and his partner (Cynthia Rhodes) specialize in neutralizing malfunctioning robots. A rogue killer robot – programmed to kill by the bad guy (Gene Simmons) – goes on a homicidal rampage. Alas, the savvy officers begin to uncover a wider, nefarious plan to proliferate killer robots. This offbeat Sci-Fi thriller illustrates how dual-use technologies in the hands of super-empowered individuals could be employed innovatively in the Future Operational Environment. Personalized warfare is also featured, as a software developer’s family is targeted by the ‘bad guy,’ using a corrupted version of the very software he helped create. This movie illustrates the potential for everyday commercial products to be adapted maliciously by adversaries who, unconstrained ethically, can out-innovate us with convergent, game-changing technologies (robotics, CRISPR, etc.).

Originally published by Macmillan on 1 May 2018

The Military Science of Star Wars by George Beahm. Storytelling is a powerful tool used to visualize the future, and Science Fiction often offers the best trove of ideas. The Military Science of Star Wars by George Beahm dissects and analyzes the entirety of the Star Wars Universe to mine for information that reflects the real world and the future of armed conflict. Beahm tackles the personnel, weapons, technology, tactics, strategy, resources, and lessons learned from key battles and authoritatively links them to past, current, and future Army challenges. Beahm proves that storytelling, and even fantasy (Star Wars is more a fantasy story than a Science Fiction story), can teach us about the real world and help evolve our thinking to confront problems in new and novel ways. He connects the story to the past, present, and future Army and asks important questions, like “What makes Han Solo a great military Leader?”, “How can a military use robots (Droids) effectively?”, and most importantly, “What, in the universe, qualified Jar Jar Binks to be promoted to Bombad General?”

Ex Machina, Universal Pictures (2014) / Source: Vimeo

Ex Machina directed by Alex Garland. This film, released in 2014, moves beyond the traditional questions surrounding the feasibility of Artificial Intelligence (AI) and the Turing test to explore the darker side of synthetic beings, knowing that it is achievable and that the test can be passed. The film is a cautionary tale of what might be possible at the extreme edge of AI computing and innovation where control may be fleeting or even an illusion. The Army may never face the same consequences that the characters in the film face, but it can learn from their lessons. AI is a hotly debated topic with some saying it will bring about the end of days, and others saying generalized AI will never exist. With a future this muddy, one must be cautious of exploring new and undefined technology spaces that carry so much risk. As more robotic entities are operationalized, and AI further permeates the battlefield, future Soldiers and Leaders would do well to stay abreast of the potential for volatility in an already chaotic environment. If Military AI progresses substantially, what will happen when we try to turn it off?

Astronaut and Lunar Module pilot Buzz Aldrin is pictured during the Apollo 11 extravehicular activity on the moon / Source: NASA

Apollo 11 directed by Todd Douglas Miller. As the United States prepares to celebrate the fiftieth anniversary of the first manned mission to the lunar surface later this summer, this inspiring documentary reminds audiences of just how audacious an achievement this was. Using restored archival audio recordings and video footage (complemented by simple line animations illustrating each of the spacecrafts’ maneuver sequences), Todd Miller skillfully re-captures the momentousness of this historic event, successfully weaving together a comprehensive point-of-view of the mission. Watching NASA and its legion of aerospace contractors realize the dream envisioned by President Kennedy eight years before serves to remind contemporary America that we once dared and dreamed big, and that we can do so again, harnessing the energy of insightful and focused leadership with the innovation of private enterprise. This uniquely American attribute may well tip the balance in our favor, given current competition and potential future conflicts with our near-peer adversaries in the Future Operational Environment.

Originally published by Penguin Random House on 3 July 2018

Artemis by Andy Weir. In his latest novel, following on the heels of his wildly successful The Martian, Andy Weir envisions an established lunar city in 2080 through the eyes of Jasmine “Jazz” Bashara, one of its citizen-hustlers, who becomes enmeshed in a conspiracy to control the tremendous wealth generated from the space and lunar mineral resources refined in the Moon’s low-G environment. His suspenseful plot, replete with descriptions of the science and technologies necessary to survive (and thrive!) in the hostile lunar environment, posits a late 21st century rush to exploit space commodities. The resultant economic boom has empowered non-state actors as new competitors on the global — er, extraterrestrial stage — from the Kenya Space Corporation (blessed by its equatorial location and reduced earth to orbit launch costs) to the Sanchez Aluminum mining and refining conglomerate, controlled by a Brazilian crime syndicate scheming to take control of the lunar city. Readers are reminded that the economic hegemony currently enjoyed by the U.S., China, and the E.U. may well be eclipsed by visionary non-state actors who dare and dream big enough to exploit the wealth that lies beyond the Earth’s gravity well.

137. What’s in a Touch? Lessons from the Edge of Electronic Interface

[Editor’s Note:  Mad Scientist Laboratory is pleased to present today’s guest blog post by Dr. Brian Holmes, exploring the threats associated with adaptive technologies and how nefarious actors can morph benign technological innovations into new, more sinister applications.  The three technological trends of democratization, convergence, and asymmetrical ethics portend a plethora of dystopian scenarios for the Future Operational Environment.  Dr. Holmes imagines how advances in prosthetic R&D could be manipulated to augment advances in artificial intelligence and robotics, providing a sense of touch to realize more lifelike lethal autonomous weapons systems — Enjoy!]

Somewhere in a near parallel, fictional universe –

Parallel Universes / Source:  Max Pixel

Dr. Sandy Votel is an Associate Professor and researcher at a military defense school in the U.S.  She has had a diverse career, including experience in defense and private laboratories researching bleeding-edge biological science. For eight years, she served as an intelligence officer in the military reserves. Ten years ago, she joined a defense school as a graduate research professor.

Dr. Mark Smith is a new Assistant Professor at her school. He completed his Ph.D. just before accepting his academic position. Sandy, Mark’s mentor, is explaining the finer details of her team’s research during Mark’s first week on the job.

Sandy began by explaining to Mark what her post-doc was investigating –

“He’s researching the fundamental materials required for electronic skin,” she said.

“Cyborg” / Source: R.E. Barber Photography via Flickr

After a pause, Sandy followed up by posing this hackneyed question, “Is it wrong that I am helping to create one small slice of a yet to be made front line cyborg, or, a bioengineered replicant spy of the kind played out in popular Hollywood movies?” Her smirk quickly followed. Westerners were practically conditioned to make comments like that.

 

The Modular Prosthetic Limb (MPL) / Source: U.S. Navy via Flickr

Her colleague Mark immediately replied, “It’s more likely this kind of technology could someday help battlefield soldiers or civilians who have lost fingers, toes, or limbs. They might be able to touch or feel again in some new manner through the interface. The material could be embedded into some sort of artificial prosthetic and electronically connected to receptors feeding the information to and from your brain. Imagine the possibilities! Any interest in collaborating? We should push the boundaries here!”

Sandy knew that the early-stage research was intended for the most benevolent of reasons – personalized health care and disposable electronic sensors, to name a few – but the creative futurist in her, heavily influenced by years of evaluating the more disturbing side of humanity as an intelligence officer, suddenly gave her pause. After all, she saw the realized threat from adaptive technologies each drill weekend when she logged into her computer system.

A drawing of the character Deckard by Canosard, from the film Blade Runner (Warner Bros., 1982) / Source: DeviantArt

She’d also seen wildly creative science fiction writers’ ideas drafted into reality. Sandy loved reading science fiction novels and watched every movie or show that resulted. As a child, she was amazed when Rick Deckard, in the movie Blade Runner, inserted a photograph into a machine that scanned it and allowed him to enhance the resolution enough to observe fine details embedded in thousands of pixels. Like most of the general public, she used to think that was impossible! Oh, how times have changed.

Sandy walked back into her office, scanned her email and focused on an article her department chair had sent to the entire workforce to evaluate. She suddenly stood back in shock, and immediately connected the disturbing news with elements she recalled from history.

Dr. Josef Mengele / Source:  Wikimedia Commons

Decades before Blade Runner came out in the cinema, the modern boundaries of science and human subject experimentation were torn asunder by the likes of Dr. Josef Mengele in the 1940s. The “Angel of Death” was a German anthropologist and medical doctor who studied genetics in school and conducted horrific experiments on humans at Auschwitz as an SS officer.

Dr. He Jiankui / Source:  Wikimedia Commons

According to the article she had just read, China’s Dr. He Jiankui, a biophysicist educated in China and the United States, shocked the world by pushing the limits of ethical genetic research in editing the genes of human embryos.

In each case, conflict or culture induced them to perform world-changing science, resulting not only in global condemnation but also in the rebirth of knowledge with dual purpose. Sandy knew that history dictates that bad activities like these will be repeated, in unpredictable scenarios set in a deep, dark, dystopian future.

Sandy’s realization hastened further reflection.

Cyborgs / Source: Pixabay

A significant number of studies have documented the emotional and physical benefits derived from touch. The research suggests that touch is fundamental to human communication, health, and bonding. If this is true, not only will advanced levels of artificial intelligence, or “AI,” require coding that enables learning and empathy, but the bioengineered system the AI is directing will necessitate a sense of touch to mimic a more lifelike cyborg. Passive sensors are only as good as physics allows them to be, or as good as the signal-to-noise levels dictate in a dirty environment. Touch, however, conveys something different… something far more real.

AI mimicking human visage / Source: Max Pixel

Sandy knew that most futuristic battlefield articles now center on today’s technology du jour, artificial intelligence. There’s no question that AI will serve as the brain center for individual or centralized networks of future machines; but to make them more human and adaptable to the battlefield of tomorrow as indistinguishable soldiers or undetectable HUMINT assets — subtler pieces are required to complete the puzzle.

“Imagine hundreds or thousands of manufactured assets programmed for clandestine military operations, or covert activities that look, act, and feel like us?” she thought.

Weapons can be embedded into robotic systems, and coding and software improved to the point of winning challenging board games, but it is the bioengineers with duplicitous purposes and far too much imagination who hold the real key to the soldier of the future; specifically, the soldiers that replace, infiltrate, or battle us.

Nefarious actors adapting benign technological innovations into new, more sinister applications…

“It’s happened before, and it will happen again!” she said out loud, accidentally.

Mark, who happened to be walking past her door, asked if everything was alright. Sandy nodded, but finished this thought as soon as he left her view.

“Unfortunately, the key that unlocks the occurrence of these secrets exists in a faraway place, under duress, and without rules. If the military is worried about the Deep Future, we should be analyzing the scenarios that enable these kinds of creative paradigms.”

After all, it’s all in a touch. 

If you enjoyed this post, please:

– Read the Mad Scientist Bio Convergence and Soldier 2050 Conference Final Report.

– Review the following blog posts:

Ethical Dilemmas of Future Warfare, and

Envisioning Future Operational Environment Possibilities through Story Telling.

– See our compendium of 23 submissions from the 2017 Mad Scientist Science Fiction Contest at Science Fiction: Visioning the Future of Warfare 2030-2050.

– Crank up I Am Robot by The Phenomenauts (who?!?)

Dr. Brian Holmes is the Dean of the Anthony G. Oettinger School of Science and Technology Intelligence at the National Intelligence University in Bethesda, MD.

Disclaimer: The views expressed in this article are Dr. Holmes’ alone and do not imply endorsement by the U.S. Army Training and Doctrine Command, the U.S. Army, the Defense Intelligence Agency, the Department of Defense, its component organizations, or the U.S. Government.  This piece is meant to be thought-provoking and does not reflect the current position of the U.S. Army.