188. “Tenth Man” — Challenging our Assumptions about the Future Force

[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. We offer it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). Today’s post examines a foundational assumption about the Future Force by challenging it, reviewing the associated implications, and identifying potential signals and/or indicators of change. Read on!]

Assumption: The United States will maintain sufficient Defense spending as a percentage of its GDP to modernize the Multi-Domain Operations (MDO) force. [Related MDO Baseline Assumption – “b. The Army will adjust to fiscal constraints and have resources sufficient to preserve the balance of readiness, force structure, and modernization necessary to meet the demands of the national defense strategy in the mid-to far-term (2020-2040),” TRADOC Pam 525-3-1, The U.S. Army in Multi-Domain Operations 2028, p. A-1.]

Over the past decades, the defense budget has varied but remained sufficient to accomplish the missions of the U.S. military. However, a graying population with fewer workers and longer life spans will place new demands on both the mandatory and discretionary portions of the federal budget. These stressors may indicate that the U.S. is following the same demographic path as Europe and Japan. By 2038, it is projected that 21% of Americans will be 65 years old or older.1 Budget demand tied to an aging population will threaten planned DoD funding levels.

In the near-term (2019-2023), total costs in 2019 dollars are projected to remain roughly flat. In recent years, however, the DoD has underestimated the costs of acquiring weapons systems and maintaining compensation levels; accounting for these factors, a 3% increase over the FY 2019 DoD budget would be needed in this timeframe. The Congressional Budget Office (CBO) estimates that costs will then climb steadily after 2023, with the base budget in 2033 projected at approximately $735 billion, an 11% increase over ten years (a back-of-the-envelope check of this arithmetic follows the list below). This growth is driven by rising compensation rates, growing costs of operations and maintenance, and the purchase of new weapons systems.2 These budgetary pressures are connected to several stated and hidden assumptions:

    • An all-volunteer force will remain viable [Related MDO Baseline Assumption – “a. The U.S. Army will remain a professional, all volunteer force, relying on all components of the Army to meet future commitments.”],
    • Materiel solutions’ associated technologies will have matured to the requisite Technology Readiness Levels (TRLs), and
    • The U.S. will have the industrial ability to reconstitute the MDO force following “America’s First Battle.”
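To make the budget arithmetic above concrete, here is a minimal back-of-the-envelope sketch in Python. The FY2023 baseline is not stated in the text; it is inferred from the quoted $735 billion and 11% figures, so every number below is an illustrative assumption rather than an official CBO data point:

```python
# Back-of-the-envelope check on the CBO projection quoted above.
# Assumed: the ~11% rise to $735B over ten years implies a FY2023 base
# of roughly $735B / 1.11; these are inferences from the text, not CBO data.

base_2033 = 735.0             # projected base budget, $B (2019 dollars)
growth_over_decade = 0.11     # quoted 11% increase, 2023-2033

base_2023 = base_2033 / (1 + growth_over_decade)

# Compound annual real growth rate implied by the ten-year figure
annual_rate = (base_2033 / base_2023) ** (1 / 10) - 1

print(f"Implied FY2023 base:   ${base_2023:.0f}B")    # ~$662B
print(f"Implied annual growth: {annual_rate:.2%}")    # ~1.05% per year, real
```

Even a modest ~1% real annual growth requirement collides directly with the demographic spending pressures described above, which is what makes this assumption worth challenging.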

Implications: If these assumptions prove false, the manned and equipped force of the future will look significantly different than the envisioned MDO force. A smaller DoD budget could mean a smaller fielded Army, with equipping decisions favoring less exquisite weapons systems. A smaller active force might also drive changes to Multi-Domain Operations and how the Army describes the way it will fight in the future.

Signpost / Indicators of Change:

    • 2008-type “Great Recession”
    • Return of budget control and sequestration
    • Increased domestic funding for:
      • Universal Healthcare
      • Universal College
      • Social Security Fix
    • Change in International Monetary Environment (higher interest rates for borrowing)

Disclaimer: The views expressed in this blog post do not reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

1 “The long-term impact of aging on the federal budget,” by Louise Sheiner, Brookings, 11 January 2018. https://www.brookings.edu/research/the-long-term-impact-of-aging-on-the-federal-budget/

2 “Long-Term Implications of the 2019 Future Years Defense Program,” Congressional Budget Office, 13 February 2019. https://www.cbo.gov/publication/54948

184. Blurring Lines Between Competition and Conflict

[Editor’s Note: The United States Army faces multiple, complex challenges in tomorrow’s Operational Environment (OE), confronting strategic competitors in an increasingly contested space across every domain (land, air, maritime, space, and cyberspace). The Mad Scientist Initiative, the U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures, and Army Futures Command (AFC) Future Operational Environment Cell have collaborated with representatives from industry, academia, and the Intelligence Community to explore the blurring lines between competition and conflict, and the character of great power warfare in the future. Today’s post captures our key findings regarding the OE and what will be required to successfully compete, fight, and win in it — Enjoy!]

Alternative Views of Warfare: The return to Large Scale Combat Operations (LSCO) and capital systems warfare envisioned by the U.S. Army might not be the future of warfare. Near-peer competitors will seek to achieve national objectives through competition short of conflict, while regional competitors and non-state actors will compete and fight effectively with smaller, cheaper systems fielded in greater numbers against our smaller inventory of exquisite systems. However, preparation for LSCO and great state warfare may actually contribute to its prevention.

Competition and Conflict are Blurring: The dichotomy of war and peace is no longer a useful construct for thinking about national security or the development of land force capabilities. There are no longer defined transitions from peace to war or from competition to conflict. This state of simultaneous competition and conflict is continuous and dynamic, but not necessarily cyclical. Potential adversaries will seek to achieve their national interests short of conflict, using a range of actions from cyber attacks to kinetic strikes against unmanned systems, walking right up to the threshold of armed conflict, whether short or protracted. Authoritarian regimes can ensure unity of effort and whole-of-government action more easily than Western democracies, and they work to exploit fractures and gaps in decision-making, governance, and policy.

Globalization – in communications, commerce, and belligerence short of war – together with the fragmentation of societies and the splintering of identities, has created new factions and “tribes,” and has widened access to offensive capabilities previously limited to state actors. Additionally, the concept of competition itself has broadened, as social media, digital finance, smart technology, and online essential services add to a growing target area.

Adversaries seek to shape public opinion and influence decisions through targeted information operations campaigns, often relying on weaponized social media. Competitors invest heavily in research and development in burgeoning technology fields – Artificial Intelligence (AI), quantum sciences, and biotech – and engage in technology theft to weaken U.S. technological superiority. Cyber attacks and probing are used to undermine confidence in financial institutions and critical government and public functions – Supervisory Control and Data Acquisition (SCADA), voting, banking, and governance. Competition and conflict are occurring across all instruments of power throughout the entirety of the Diplomatic, Information, Military, and Economic (DIME) model.

Cyber actions raise the question of what threshold constitutes an act of war. If an adversary launches a cyber attack against a critical financial institution and an economic crisis results, is it an act of war? There is a similar concern regarding unmanned assets. While the kinetic destruction of an unmanned system may cost millions, no lives are lost. How much damage without human loss of life is acceptable?

Nuclear Deterrence limits Great Power Warfare: Multi-Domain Operations (MDO) is predicated on a return to Great Power warfare. However, nuclear deterrence could make that eventuality less likely. The U.S. may find itself competing more often below the threshold of conventional war and the decisive battles of the 20th Century (e.g., Midway and Operation Overlord). The two most threatening adversaries – Russia and China – have substantial nuclear arsenals, as does the United States, which will continue to make Great Power conventional warfare a high risk / high cost endeavor. The availability of non-nuclear capabilities that can deliver regional and global effects is a new attribute of the OE, further complicating the deterrence value of militaries and the escalation theory behind flexible deterrent options. The real-world implications of cyber effects – especially in economies, government functions, and essential services – further exacerbate the blurring between competition and conflict.

Hemispheric Competition and Conflict: Over the last twenty years, Russia and China have been viewed as regional competitors in Eurasia and South-East Asia, respectively. These competitors will seek to undermine and fracture traditional Western institutions, democracies, and alliances. Both are transitioning into hemispheric threats, with a primary focus on challenging the U.S. Army all the way from its home station installations (i.e., the Strategic Support Area) to the Close Area fight. We can expect cyber attacks against critical infrastructure, the use of advanced information warfare such as deepfakes targeting units and families, and the possibility of small-scale kinetic attacks during what were once uncontested, administrative phases of deployment. There is no institutional memory of operating against such a threat, and simply adding deployment timelines and speed requirements to training is not enough to exercise MDO.

Disposable versus Exquisite: Current thinking espouses technologically advanced and expensive weapons platforms over disposable ones, which brings with it an aversion to employing these exquisite platforms in contested domains and an inability to rapidly reconstitute them once they are committed and subsequently attrited. In LSCO against a near-peer competitor, the ability to reconstitute will be imperative. The Army (and the larger DoD) may need to shift away from large and expensive systems toward cheap, scalable, and potentially even disposable unmanned systems (UxS). Additionally, increases in miniaturized computing power in cheaper systems, coupled with advances in machine learning, could enable massed precision rather than forcing a trade of precision for mass, or vice versa.

This challenge is exacerbated by the ability of this new form of mass to quickly aggregate/disaggregate, adapt, self-organize, self-heal, and reconstitute, making it largely unpredictable and dynamic. Adopting these capabilities could provide the U.S. Army and allied forces with an opportunity to use massed precision to disrupt enemy Observe, Orient, Decide, and Act (OODA) loops, confuse kill chains/webs, overwhelm limited adversary formations, and exploit vulnerabilities in extended logistics tails and advanced but immature communication networks.
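The mass-versus-exquisite argument above is, at bottom, a cost-exchange calculation. The toy sketch below, with entirely illustrative prices and kill probabilities (none of these figures appear in the post), shows why a swarm bought for the price of one advanced platform can saturate a shot-limited defense:

```python
# Toy cost-exchange model: one "exquisite" platform's budget spent instead
# on cheap, disposable unmanned systems (UxS). All numbers are illustrative
# assumptions, not programmatic or intelligence data.

exquisite_cost = 100e6      # one advanced platform, $ (assumed)
cheap_unit_cost = 50e3      # one disposable UxS, $ (assumed)
defender_shots = 200        # interceptors available to the defense (assumed)
p_kill = 0.7                # defender single-shot kill probability (assumed)

swarm_size = int(exquisite_cost / cheap_unit_cost)          # 2,000 UxS
expected_kills = min(swarm_size, defender_shots * p_kill)   # simple expected-value model
leakers = swarm_size - expected_kills

print(f"Swarm fielded for one platform's cost: {swarm_size}")
print(f"Expected UxS surviving the defense:    {leakers:.0f}")   # ~1,860 leakers
```

Under these assumptions the defender runs out of shots long before the attacker runs out of airframes, which is the quantitative intuition behind massed precision.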

Human-Starts-the-Loop: There has been much discussion and debate over whether armed forces will continue to have a “man-in-the-loop” regarding Lethal Autonomous Weapons Systems (LAWS). Lethal autonomy in future warfare may instead be “human-starts-the-loop,” meaning that humans will be involved in the development of weapons/targeting systems – establishing rules and scripts – and will initiate the process, but will then allow the system to operate autonomously. It has been argued that it would be ethically disingenuous to remain constrained by “human-on-the-loop” or “human-in-the-loop” constructs when our adversaries are unlikely to similarly restrict their own autonomous warfighting capabilities. Further, the employment of this approach could impact the Army’s MDO strategy. The effects of “human-starts-the-loop” on the kill chain – shortening, flattening, or otherwise dispersing it – would necessitate changes in force structure that could maximize resource allocation in personnel, platforms, and materiel. This scenario presents the Army with an opportunity to execute MDO successfully with increased cost savings, by: 1) Conducting independent maneuver – more agile and streamlined units moving rapidly; 2) Employing cross-domain fires – efficiency and speed in targeting and execution; 3) Maximizing human potential – putting capable Warfighters in optimal positions; and 4) Fielding in echelons above brigade – flattening command structures and increasing efficiency.
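As a concrete illustration of the “human-starts-the-loop” construct, consider the minimal sketch below: the human’s role ends once the rules are scripted and the mission is initiated, after which target selection proceeds autonomously within those pre-set constraints. Every class, field, and value here is a hypothetical simplification, not a description of any fielded or planned system:

```python
# Minimal sketch of "human-starts-the-loop": a human writes the rules and
# initiates the mission; the system then operates autonomously within them.
# All names, fields, and values are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class EngagementRules:
    allowed_types: set       # target categories the human pre-authorized
    geofence: tuple          # (lat_min, lat_max, lon_min, lon_max)
    max_engagements: int     # hard cap set before initiation

def run_mission(rules, detections):
    """Human involvement ends at initiation; the loop below is autonomous."""
    engaged = 0
    for tgt in detections:
        if engaged >= rules.max_engagements:
            break            # pre-set limit reached: stop without asking a human
        lat_ok = rules.geofence[0] <= tgt["lat"] <= rules.geofence[1]
        lon_ok = rules.geofence[2] <= tgt["lon"] <= rules.geofence[3]
        if tgt["type"] in rules.allowed_types and lat_ok and lon_ok:
            engaged += 1     # engage only inside the scripted constraints
    return engaged

rules = EngagementRules({"radar", "launcher"}, (34.0, 35.0, 44.0, 45.0), 3)
detections = [{"type": "radar", "lat": 34.5, "lon": 44.2},
              {"type": "truck", "lat": 34.6, "lon": 44.3}]
print(run_mission(rules, detections))   # 1: only the pre-authorized type, inside the box
```

The design point is that all human judgment is front-loaded into the rules object; the ethical debate above is precisely about whether that front-loading is sufficient.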

Emulation and the Accumulation of Advantages: China and Russia are emulating many U.S. Department of Defense modernization and training initiatives. China now has Combat Training Centers. Russia has programs that mirror the Army’s Cross Functional Team initiatives and the Artificial Intelligence (AI) Task Force. China and Russia are undergoing their own versions of force modernization to better professionalize their ranks and improve operational reach. Within these different technical spaces, both China and Russia are accumulating advantages that they envision will blunt traditional U.S. combat advantages and the tenets described in MDO. However, both nations remain dependent on U.S. innovations in microelectronics, and both face the challenge of incorporating these technologies into their own doctrine, training, and cultures.

If you enjoyed this post, please also see:

Jomini’s Revenge: Mass Strikes Back! by Zachery Tyson Brown.

Our “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare posts, where Part 1 discusses whether the future fight will necessarily even involve LSCO and Part 2 addresses the implications of a changed or changing nature of war.

The Death of Authenticity: New Era Information Warfare.

183. Ethics, Morals, and Legal Implications

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report and addresses how the speed of technological innovation and convergence continues to outpace human governance. The U.S. Army must not only consider how best to employ these advances in modernizing the force, but also the concomitant ethical, moral, and legal implications their use may present in the Operational Environment (see links to the newly published TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, and the complete Mad Scientist Disruption and the Operational Environment Conference Final Report at the bottom of this post).]

Technological advancement and subsequent employment often outpace moral, ethical, and legal standards, leaving governmental and regulatory bodies caught between technological progress and the evolution of social thinking. The Disruption and the Operational Environment Conference uncovered and explored several tension points that may challenge the Army in the future.

Space

Cubesats in LEO / Source: NASA

Space is one of the least explored domains in which the Army will operate, and as such, we may encounter a host of associated ethical and legal dilemmas. If, in the course of warfare, the Army or an adversary intentionally or inadvertently destroys commercial communications infrastructure – e.g., GPS satellites – the ramifications to the economy, transportation, and emergency services would be dire and deadly. The Army will be challenged to consider how and where National Defense measures in space affect non-combatants and American civilians on the ground.

Per proclaimed Mad Scientists Dr. Moriba Jah and Dr. Diane Howard, there are ~500,000 objects orbiting the Earth posing potential hazards to our space-based services. We can currently track less than one percent of them: those the size of a smartphone / softball or larger. / Source: NASA Orbital Debris Office

International governing bodies may have to consider what responsibility space-faring entities – countries, universities, private companies – will bear for mitigating orbital congestion caused by excessive launching and the aggressive exploitation of space. If the Army is judicious with its own footprint in space, it could reduce the risk of accidental collisions and unnecessary clutter and congestion. Cleaning up space debris is extremely expensive, and deconflicting active operations is essential. With each entity acting in its own self-interest, with limited binding law or governance and no enforcement, overuse of space could lead to a “tragedy of the commons” effect.1 The Army has the opportunity to align itself more closely with international partners to develop guidelines and protocols for space operations, both to avoid potential conflicts and to influence and shape future policy. Without this early intervention, the Army may face ethical and moral challenges regarding its addition of orbital objects to an already dangerously cluttered Low Earth Orbit. What will the Army be responsible for in democratized space? Will there be a moral or ethical limit on space launches?

Autonomy in Robotics

AFC’s Future Force Modernization Enterprise of Cross-Functional Teams, Acquisition Programs of Record, and Research and Development centers executed a radio rodeo with Industry throughout June 2019 to inform the Army of the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments. / Source: Army.mil

Robotic systems have become pervasive and normalized in military operations in the post-9/11 Operational Environment. However, the burgeoning field of autonomy in robotics, with the potential to supplant humans in time-critical decision-making, will bring about significant ethical, moral, and legal challenges that the Army, and the larger DoD, are already confronting. This issue will be exacerbated in the Operational Environment by increased utilization of, and reliance on, autonomy.

The increasing prevalence of autonomy will raise a number of important questions. At what point is it more ethical to allow a machine to make a decision that may save lives of either combatants or civilians? Where does fault, responsibility, or attribution lie when an autonomous system takes lives? Will defensive autonomous operations – air defense systems, active protection systems – be more ethically acceptable than offensive – airstrikes, fire missions – autonomy? Can Artificial Intelligence/Machine Learning (AI/ML) make decisions in line with Army core values?

Deepfakes and AI-Generated Identities, Personas, and Content

A new era of Information Operations (IO) is emerging due to disruptive technologies such as deepfakes – videos that are constructed to make a person appear to say or do something that they never said or did – and AI Generative Adversarial Networks (GANs) that produce fully original faces, bodies, personas, and robust identities.2  Deepfakes and GANs are alarming to national security experts as they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. This is amplified by content such as news, sports, and creative writing similarly being generated by AI/ML applications.
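To ground the GAN concept, the toy sketch below (assuming PyTorch is available) shows the adversarial training pattern that underlies face and persona generators: a generator learns to produce samples a discriminator can no longer distinguish from real data. This version learns a one-dimensional Gaussian rather than imagery, and the architecture and hyperparameters are arbitrary illustrations:

```python
# Toy GAN: generator G learns to mimic "real" data drawn from N(3, 0.5),
# while discriminator D learns to tell real from generated. The same
# adversarial loop, scaled up, is what produces synthetic faces and personas.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # "real" samples
    fake = G(torch.randn(64, 8))                 # generated samples

    # Train D: label real as 1, generated as 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train G: push D toward labeling generated samples as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# The mean of generated samples should approach 3.0 as G converges
print(G(torch.randn(1000, 8)).mean().item())
```

The security concern follows directly from the mechanics: nothing in the loop requires the “real” training data to be benign, and the same optimization pressure that closes the gap on a toy Gaussian closes it on faces, voices, and prose.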

This new era of IO has many ethical and moral implications for the Army. In the past, the Army has utilized industrial and early information age IO tools such as leaflets, open-air messaging, and cyber influence mechanisms to shape perceptions around the world. Today and moving forward in the Operational Environment, advances in technology create ethical questions such as: is it ethical or legal to use cyber or digital manipulations against populations of both U.S. allies and strategic competitors? Under what title or authority does the use of deepfakes and AI-generated images fall? How will the Army need to supplement existing policy to include technologies that didn’t exist when it was written?

AI in Formations

With the introduction of decision-making AI, the Army will be faced with questions about trust, man-machine relationships, and transparency. Does AI in cyber require the same moral benchmark as lethal decision-making? Does transparency equal ethical AI? What allowance for error in AI is acceptable compared to humans? Where does the Army allow AI to make decisions – only in non-combat or non-lethal situations?

Commanders, stakeholders, and decision-makers will need to gain a level of comfort and trust with AI entities exemplifying a true man-machine relationship. The full integration of AI into training and combat exercises provides an opportunity to build trust early in the process before decision-making becomes critical and life-threatening. AI often includes unintentional or implicit bias in its programming. Is bias-free AI possible? How can bias be checked within the programming? How can bias be managed once it is discovered and how much will be allowed? Finally, does the bias-checking software contain bias? Bias can also be used in a positive way. Through ML – using data from previous exercises, missions, doctrine, and the law of war – the Army could inculcate core values, ethos, and historically successful decision-making into AI.
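One concrete answer to “how can bias be checked within the programming?” is an outcome audit: compare a system’s favorable-decision rates across groups and flag large disparities for human review. The sketch below applies the commonly used “80% rule” threshold to toy decision data; both the threshold and the data are illustrative assumptions, not Army policy:

```python
# Minimal bias audit: compare a model's favorable-outcome rates across
# groups and flag disparate impact. The 0.8 threshold and the toy data
# are illustrative assumptions.

def disparate_impact(decisions):
    """decisions: iterable of (group, favorable) pairs, favorable in {0, 1}."""
    counts = {}
    for group, favorable in decisions:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + favorable)
    rates = {g: k / n for g, (n, k) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

audit_log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
ratio, rates = disparate_impact(audit_log)
print(rates)                                          # per-group favorable rates
print("FLAG for review" if ratio < 0.8 else "ok")     # 0.50 < 0.80 here -> FLAG
```

An audit like this is itself code, which is exactly the recursive worry raised above: the bias-checking software must also be checked.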

If existential threats to the United States increase, so does pressure to use artificial and autonomous systems to gain or maintain overmatch and domain superiority. As the Army explores shifting additional authority to AI and autonomous systems, how will it address the second and third order ethical and legal ramifications? How does the Army rectify its traditional values and ethical norms with disruptive technology that rapidly evolves?

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.
    • Ethics and the Future of War panel, facilitated by LTG Dubik (USA-Ret.) at the Mad Scientist Visualizing Multi Domain Battle 2030-2050 Conference, held at Georgetown University on 25-26 July 2017.

Just Published! TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, 7 October 2019, describes the conditions Army forces will face and establishes two distinct timeframes characterizing near-term advantages adversaries may have, as well as breakthroughs in technology and convergences in capabilities in the far term that will change the character of warfare. This pamphlet describes both timeframes in detail, accounting for all aspects across the Diplomatic, Information, Military, and Economic (DIME) spheres to allow Army forces to train to an accurate and realistic Operational Environment.


1 Munoz-Patchen, Chelsea, “Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty,” Chicago Journal of International Law, Vol. 19, No. 1, Art. 7, 1 Aug. 2018. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1741&context=cjil

2 Robitzski, Dan, “Amazing AI Generates Entire Bodies of People Who Don’t Exist,” Futurism.com, 30 Apr. 2019. https://futurism.com/ai-generates-entire-bodies-people-dont-exist

182. “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare (Part 2)

[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). We continue our series of “Tenth Man” posts examining the foundational assumptions of The Operational Environment and the Changing Character of Future Warfare, challenging them, reviewing the associated implications, and identifying potential signals and/or indicators of change. Enjoy!]

Assumption:  The character of warfare will change but the nature of war will remain human-centric.

The character of warfare will change in the future OE as it inexorably has since the advent of flint hand axes; iron blades; stirrups; longbows; gunpowder; breech-loading, rifled, and automatic guns; mechanized armor; precision-guided munitions; and the Internet of Things. Speed, automation, extended ranges, broad and narrow weapons effects, and increasingly integrated multi-domain conduct, in addition to the complexity of the terrain and social structures in which it occurs, will make mid-Twenty-first Century warfare both familiar and utterly alien.

The nature of warfare, however, is assumed to remain human-centric in the future. While humans will increasingly be removed from processes, cycles, and perhaps even decision-making, nearly all content regarding the future OE assumes that humans will remain central to the rationale for war and its most essential elements of execution. The nature of war has remained relatively constant from Thucydides through Clausewitz, and forward to the present. War is still waged because of fear, honor, and interest, and remains an expression of politics by other means. While machines are becoming ever more prevalent across the battlefield – C5ISR, maneuver, and logistics – we cling to the belief that parties will still go to war over human interests; that war will be decided, executed, and controlled by humans.

Implications:  If these assumptions prove false, then the Army’s fundamental understanding of war in the future may be inherently flawed, calling into question established strategies, force structuring, and decision-making models. A changed or changing nature of war brings about a number of implications:

– Humans may not be aware of the outset of war. As algorithmic warfare evolves, might wars be fought unintentionally, with humans not recognizing what has occurred until effects are felt?

– Wars may be fought due to AI-calculated opportunities or threats – economic, political, or even ideological – that are largely imperceptible to human judgement. Imagine that a machine recognizes a strategic opportunity or impetus to engage a nation-state actor that is conventionally (read: humanly) viewed as weak or in a presumed disadvantaged state. The machine launches offensive operations to achieve a favorable outcome or objective that it deemed too advantageous to pass up.

– Infliction of human loss, suffering, and disruption to induce coercion and influence may not be conducive to victory. Victory may simply be a calculated or algorithmic outcome that causes an adversary’s machine to decide that its own victory is unattainable.

– The actor (nation-state or otherwise) with the most robust kairosthenic power and/or most talented humans may not achieve victory. Even powers enjoying the greatest materiel advantages could see this once reliable measure of dominion mitigated. Winning may be achieved by the actor with the best algorithms or machines.

These implications in turn raise several questions for the Army:

– How much human talent should the Army recruit, and how should it cultivate that talent, if war is no longer human-centric?

– How should forces be structured – what is the “right” mix of humans to machines if war is no longer human-centric?

– Will current ethical considerations in kinetic operations be weighed more or less heavily if humans are further removed from the equation? And what even constitutes kinetic operations in such a future?

– Should the U.S. military divest from platforms and materiel solutions (hardware) and re-focus on becoming algorithmically and digitally-centric (software)?

– What is the role for the armed forces in such a world? Will competition and armed conflict increasingly fall within the sphere of cyber forces in the Departments of the Treasury, State, and other non-DoD organizations?

– Will warfare become the default condition if fewer humans get hurt?

– Could an adversary (human or machine) trick us (or our machines) to miscalculate our response?

Signposts / Indicators of Change:

– Proliferation of AI use in the OE, with increasingly less human involvement in autonomous or semi-autonomous systems’ critical functions and decision-making; the development of human-out-of-the-loop systems

– Technology advances to the point of near or actual machine sentience, with commensurate machine speed accelerating the potential for escalated competition and armed conflict beyond transparency and human comprehension.

– Nation-state governments approve the use of lethal autonomy, and this capability is democratized to non-state actors.

– Cyber operations have the same political and economic effects as traditional kinetic warfare, reducing or eliminating the need for physical combat.

– Smaller, less-capable states or actors begin achieving surprising or unexpected victories in warfare.

– Kinetic war becomes less lethal as robots replace human tasks.

– Other departments or agencies stand up quasi-military capabilities, have more active military-liaison organizations, or begin actively engaging in competition and conflict.

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

178. Space: Challenges and Opportunities

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report (see link at the end of this post), addressing how the Space Domain is becoming increasingly crowded, given that the community of spacefaring entities now comprises more than 90 nations, as well as companies such as Amazon, Google, and Alibaba. This is particularly significant to the Army as it increasingly relies on space-based assets to support long-range precision fires and mission command. Read on to learn how this space boom will create operational challenges for the Army, while simultaneously yielding advances in autonomy that will ultimately benefit military applications in the other operational domains. (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]

Everybody wants to launch satellites

Space has the potential to become the most strategically important domain in the Operational Environment. Today’s maneuver Brigade Combat Team (BCT) has over 2,500 pieces of equipment dependent on space-based assets for Positioning, Navigation, and Timing (PNT).1 This number is only going to increase as emerging technology on Earth demands increased bandwidth, new orbital infrastructure, niche satellite capabilities, and advanced robotics.

Image made from models used to track debris in Low Earth Orbit / Source: NASA Earth Observatory; Wikimedia Commons

Low Earth Orbit is cluttered with hundreds of thousands of objects – satellites, debris, and other refuse – that can pose a hazard to space operations, and only one percent of these objects are tracked.2 This complexity is further exacerbated by the fact that there are no universally recognized “space traffic rules” and no standard operating procedures. Additionally, there is a space “gold rush,” with companies and countries racing to launch assets into orbit at a blistering pace. The FCC has granted over 7,500 satellite licenses to SpaceX alone for the next five years, and the U.S. has the potential to double the number of tracked space objects in that same timeframe.3 This has the potential to cause episodes of Kessler syndrome – where cascading damage produced by collisions increases debris by orders of magnitude.4 This excess debris could also be used as cover by an adversary for a hostile act, thereby making attribution difficult.
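The Kessler-syndrome dynamic can be illustrated with a toy model: if the collision rate scales with the square of the object population and each collision spawns many fragments, debris growth compounds on itself. Every coefficient below is an assumption chosen only to show the shape of the cascade, not a validated orbital-debris parameter:

```python
# Toy Kessler cascade: collisions scale with the square of the object count,
# and each collision spawns many new fragments. All coefficients are
# illustrative assumptions, not validated debris-environment parameters.

objects = 500_000         # rough object count cited earlier in this compilation
launch_rate = 10_000      # net new objects per year (assumed)
collision_coeff = 4e-12   # expected collisions per object-pair-year (assumed)
fragments = 1_000         # new objects produced per collision (assumed)

for year in range(2020, 2041):
    collisions = collision_coeff * objects ** 2
    objects += launch_rate + collisions * fragments
    if year % 5 == 0:
        print(year, f"{objects:,.0f} objects, {collisions:.1f} collisions/yr")
```

Because the collision term grows with the square of the population, every marginal launch raises the collision rate for every other operator, which is why the “gold rush” described above compounds the risk.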

There are efforts, such as the University of Texas at Austin’s ASTRIAGraph tool, to mitigate this problem by crowdsourcing the location of orbital objects. A key benefit of these tools is their ability to analyze all sources of information simultaneously so as to extract the maximum mutual information on desired space domain awareness criteria, enabling users to go from data to discovery.5 An added benefit is that the system layers the analyses of other organizations and governments to reveal gaps, inconsistencies, and data overlaps. This information is of vital importance to avoid collisions, to determine what is debris and what is active, and to properly plan flight paths. For the military, a collision with a mission-critical asset could disable warfighter capabilities, cause unintentional escalation, or result in loss of life.

As astronauts return to Earth via the Orion spacecraft, autonomous caretaking systems will maintain Gateway. / Source: NASA

Autonomy will be critical for future space activities because physical human presence in space will be limited. Autonomous robots with human-like mechanical skills performing maintenance and hardware survivability tasks will be vital. For example, NASA’s Gateway program relies upon fully autonomous systems to function, as the outpost will be devoid of humans for 11 months out of the year.

An autonomous caretaking capability will facilitate spacecraft maintenance when Gateway is unmanned / Source: NASA; Dr. Julia Badger

Fixing mechanical and hardware problems on the space station requires a dexterous robot on board that takes direction from a self-diagnosing program, thus creating a self-healing system of systems.6 The military can leverage this technology already developed for austere environments to perform tasks requiring fine motor skills in environments that are inhospitable or too dangerous for human life. Similar dual-use autonomous capabilities employed by our near-peer competitors could also serve as a threat capability against U.S. space assets.  As the military continues to expand its mission sets in space, and its assets become more complex systems of systems, it will increasingly rely on autonomous or semi-autonomous robots for maintenance, debris collection, and defense.

The Space Domain is vital to Land Domain operations.  Our adversaries are well aware of this dependence and intend to disrupt and degrade these capabilities.  NASA is at the forefront of long range operations with robotic systems responsible for self-healing, collection of information, and communications.  What lessons are being learned and applied by the Army from NASA’s experience with autonomous operations in Space?

If you enjoyed this post, please also see:

The entire Mad Scientist Disruption and the Operational Environment Conference Final Report, dated 25 July 2019.

– Dr. Moriba K. Jah and Dr. Diane Howard’s presentation from the aforementioned conference on Space Traffic Management and Situational Awareness

– Dr. Julia Badger’s presentation from the same conference on Robotics in Space.

– Dr. Jah’s Modern War Institute podcast on What Does the Future Hold for the US Military in Space? hosted by our colleagues at Modern War Institute.


1 Houck, Caroline, “The Army’s Space Force Has Doubled in Six Years, and Demand Is Still Going Up,” DefenseOne, 23 Aug. 2017. https://www.defenseone.com/technology/2017/08/armys-space-force-has-doubled-six-years-and-demand-still-going/140467/

2 Jah, Moriba, Mad Scientist Conference: Disruption and the Future Operational Environment, University of Texas at Austin, 25 April 2019.

3 Seemangal, Robin, “Watch SpaceX Launch the First of its Global Internet Satellites,” Wired, 18 Feb. 2018. https://www.wired.com/story/watch-spacex-launch-the-first-of-its-global-internet-satellites/

4 “Micrometeoriods and Orbital Debris (MMOD),” NASA, 14 June 2016. https://www.nasa.gov/centers/wstf/site_tour/remote_hypervelocity_test_laboratory/micrometeoroid_and_orbital_debris.html

5 https://sites.utexas.edu/moriba/astriagraph/

6 Badger, Julia, Mad Scientist Conference: Disruption and the Future Operational Environment, University of Texas at Austin, 25 April 2019.

128. Disruption and the Future Operational Environment

Mad Scientist Laboratory is pleased to announce that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Mad Scientist Disruption and the Future Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas.

Plan on joining us virtually as we explore the individual and convergent impacts of technological innovations on Multi-Domain Operations and the Future Operational Environment, from today through 2050.

Disruptors addressed include robotics, artificial intelligence and autonomy, the future of space, planetary habitability, and the legal and ethical dilemmas surrounding how they will impact the future of warfare, specifically in the land and space domains.

Acknowledged global experts presenting include renowned futurist Dr. James Canton, author and CEO and Chairman of the Institute for Global Futures; former Deputy Secretary of Defense Robert Work, Senior Counselor for Defense and Distinguished Senior Fellow for Defense and National Security, Center for a New American Security (CNAS); Robonaut Julia Badger, Project Manager for NASA’s Autonomous Spacecraft Management Projects; and former NASA spacecraft navigator Dr. Moriba K. Jah, Associate Professor of Aerospace Engineering and Engineering Mechanics at UT Austin; as well as speakers from DARPA, Sandia National Labs, and Army senior leaders.

Get ready…

– Review the conference agenda’s list of presentations here.

– Read our following blog posts:  Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis, An Appropriate Level of Trust…, War Laid Bare, and Star Wars 2050.

– Subscribe to the Mad Scientist Laboratory to stay abreast of this conference and all things Mad Scientist — go to the subscribe function found on the right hand side of this screen.

We look forward to your participation on-line in six weeks!

117. Old Human vs. New Human

[Editor’s Note: On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC. Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces. One finding from this conference is that tomorrow’s Soldiers will learn differently from earlier generations, given the technological innovations that will have surrounded them from birth through their high school graduation.  To effectively engage these “New Humans” and prepare them for combat on future battlefields, the Army must discard old paradigms of learning that no longer resonate (e.g., those desiccated lectures delivered via interminable PowerPoint presentations) and embrace more effective means of instruction.]

The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before them. Marc Prensky, the educational writer and speaker who coined the term digital native, asserts this “New Human” will stand in stark contrast to the “Old Human” in the ways they assimilate information and approach learning.1 Where humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, each of these elements is generally external to the human. By 2032, these technologies likely will have converged and been embedded or integrated into the individual, with connectivity literally at their fingertips. The challenge for the Army will be to recognize the implications of this momentous shift and alter its learning methodologies, approach to training, and educational paradigm to account for these digital natives.

These New Humans will be accustomed to the use of artificial intelligence (AI) to augment and supplement decision-making in their everyday lives. AI will be responsible for keeping them on schedule, suggesting options for what and when to eat, delivering relevant news and information, and serving as an on-demand embedded expert. The Old Human learned to use these technologies and adapted their learning style to accommodate them, while the New Human will be born into them and their learning style will be a result of them. In 2018, 94% of Americans aged 18-29 owned some kind of smartphone.2 Compare that to 73% ownership for ages 50-64 and 46% for ages 65 and above, and it becomes clear that there is a strong disconnect between the age groups in terms of employing technology. Both of the leading smartphone software developers include a built-in, artificially intelligent digital assistant, and at the end of 2017, nearly half of all U.S. adults used a digital voice assistant in some way.3 Based on these trends, the technological wedge between New Humans and Old Humans will likely grow even wider.

New Humans will be information assimilators, where Old Humans were information gatherers. The techniques to acquire and gather information have evolved swiftly since the advent of the printing press, from user-intensive methods such as manual research to the reduced user involvement of Internet search engines. Now, narrow AI using natural language processing is transitioning to AI-enabled predictive learning. Through these AI-enabled virtual entities, New Humans will carry targeted, predictive, and continuous learning assistants with them. These assistants will observe, listen, and process everything of relevance to the learner, and then deliver information as necessary.

There is an abundance of research on the stark contrast between the three generations currently in the workforce: Baby Boomers, Generation X, and Millennials.4, 5 There will be similar fundamental differences between Old Humans and New Humans and their learning styles. The New Human likely will value experiential learning over traditional classroom learning.6 The convergence of mixed reality and advanced, high fidelity modeling and simulation will provide New Humans with immersive, experiential learning. For example, Soldiers learning military history and battlefield tactics will be able to experience it ubiquitously, observing how each facet of the battlefield affects the whole in real-time as opposed to reading about it sequentially. Soldiers in training could stand next to an avatar of General Patton and experience him explaining his command decisions firsthand.

There is an opportunity for the Army to adapt its education and training to these growing differences. The Army could, and eventually will need to, recruit, train, and develop New Humans by altering its current structure and recruitment programs. It will become imperative to conduct training with new tools, materials, and technologies that will allow Soldiers to become information assimilators. Additionally, the incorporation of experiential learning techniques will enhance Soldiers’ learning. There is an opportunity for the Army to pave the way and train its Soldiers with cutting-edge technology rather than trying to belatedly catch up to what is publicly available.

Evolution in Learning Technologies

If you enjoyed this post, please also watch Elliott Masie’s video presentation on Dynamic Readiness and Marc Prensky’s presentation on The Future of Learning from the Mad Scientist Learning in 2050 Conference …

… and read The Mad Scientist Learning in 2050 Final Report.


1 Prensky, Marc, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018

2 http://www.pewinternet.org/fact-sheet/mobile/

3 http://www.pewresearch.org/fact-tank/2017/12/12/nearly-half-of-americans-use-digital-voice-assistants-mostly-on-their-smartphones/

4 https://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Generational-issues-in-the-workplace.aspx

5 https://blogs.uco.edu/customizededucation/2018/01/16/generational-differences-in-the-workplace/

6 https://www.apa.org/monitor/2010/03/undergraduates.aspx

113. Connected Warfare

[Editor’s Note: As stated previously here in the Mad Scientist Laboratory, the nature of war remains inherently humanistic in the Future Operational Environment.  Today’s post by guest blogger COL James K. Greer (USA-Ret.) calls on us to stop envisioning Artificial Intelligence (AI) as a separate and distinct end state (oftentimes in competition with humanity) and to instead focus on preparing for future connected competitions and wars.]

The possibilities and challenges for future security, military operations, and warfare associated with advancements in AI are proposed and discussed with ever-increasing frequency, both within formal defense establishments and informally among national security professionals and stakeholders. One is confronted with a myriad of alternative futures, including everything from a humanity-killing variation of Terminator’s SkyNet, to uncontrolled warfare à la WarGames, to Deep Learning used to enhance existing military processes and operations. And of course, legal and ethical issues surrounding the military use of AI abound.

Yet in most discussions of the military applications of AI and its use in warfare, we have a blind spot in our thinking about technological progress: we think about AI largely as disconnected from humans and the human brain. Rather than thinking about AI-enabled systems as connected to humans, we think about them as parallel processes. We talk about human-in-the-loop or human-on-the-loop largely in terms of control over autonomous systems, rather than comprehensive connection to and interaction with those systems.

But even while significant progress is being made in the development of AI, almost no attention is paid to the military implications of advances in human connectivity. Experiments have already been conducted connecting the human brain directly to the internet, which of course connects the human mind not only to the Internet of Things (IoT), but potentially to every computer and AI device in the world. Such connections will be enabled by a chip in the brain that provides connectivity while enabling humans to perform all normal functions, including all those associated with warfare (as envisioned by John Scalzi’s BrainPal in “Old Man’s War”).

Moreover, experiments in connecting human brains to each other are ongoing. Brain-to-brain connectivity has occurred in a controlled setting enabled by an internet connection. And, in experiments conducted to date, the brain of one human can be used to direct the weapons firing of another human, demonstrating applicability to future warfare. While experimentation in brain-to-internet and brain-to-brain connectivity is not as advanced as the development of AI, it is easy to see that the potential benefits, desirability, and frankly, market forces are likely to accelerate the human side of connectivity development past the AI side.

So, when contemplating the future of human activity, of which warfare is unfortunately a central component, we cannot and must not think of AI development and human development as separate, but rather as interconnected. Future warfare will be connected warfare, with implications we must now begin to consider. How would such connected warfare be conducted? How would mission command be exercised between man and machine? What are the leadership implications of the human leader’s brain being connected to those of their subordinates? How will humans manage information for decision-making without being completely overloaded and paralyzed by overwhelming amounts of data? What are the moral, ethical, and legal implications of connected humans in combat, as well as responsibility for the actions of machines to which they are connected? These and thousands of other questions and implications related to policy and operation must be considered.

The power of AI resides not just in that of the individual computer, but in the connection of each computer to literally millions, if not billions, of sensors, servers, computers, and smart devices employing thousands, if not millions, of software programs and apps. The consensus is that at some point the computing and analytic power of AI will surpass that of the individual. And therein lies a major flaw in our thinking about the future. The power of AI may surpass that of a human being, but it won’t surpass the learning, thinking, and decision-making power of connected human beings. When a future human is connected to the internet, that human will have access to the computing power of all AI. But when that same human is connected to several (in a platoon), or hundreds (on a ship), or thousands (in multiple headquarters) of other humans, then the power of AI will be exceeded by multiple orders of magnitude. The challenge, of course, is being able to think effectively under those circumstances, with your brain connected to all those sensors, computers, and other humans. This is what Ray Kurzweil terms “hybrid thinking.” Imagine how that is going to change every facet of human life, to include every aspect of warfare, and how everyone in our future defense establishment, uniformed or not, will have to be capable of hybrid thinking.

So, what will the military human bring to warfare that the AI-empowered computer won’t? Certainly, one of the major challenges with AI thus far has been its inability to demonstrate human intuition. AI can replicate some derivative tasks of intuition using what is now called “Artificial Intuition.” These tasks are primarily the intuitive decisions that result from experience: AI generates this experience through some large number of iterations, which is how Google’s AlphaGo was able to beat the human world Go champion. Still, this is only a small part of the capacity of humans in terms not only of intuition, but of “insight,” what we call the “light bulb moment.” Humans will also bring emotional intelligence to connected warfare. Emotional intelligence, including aspects such as empathy, loyalty, and courage, is critical in the crucible of war; these are not capabilities that machines can provide the Force, not today and perhaps not ever.

Warfare in the future is not going to be conducted by machines, no matter how far AI advances. Warfare will instead be connected human to human, human to internet, and internet to machine in complex, global networks. We cannot know today how such warfare will be conducted or what characteristics and capabilities of future forces will be necessary for victory. What we can do is cease developing AI as if it were something separate and distinct from, and often envisioned in competition with, humanity and instead focus our endeavors and investments in preparing for future connected competitions and wars.

If you enjoyed this post, please also watch Dr. Alexander Kott’s presentation The Network is the Robot, presented at the Mad Scientist Robotics, Artificial Intelligence, & Autonomy: Visioning Multi Domain Battle in 2030-2050 Conference, at the Georgia Tech Research Institute, 8-9 March 2017, in Atlanta, Georgia.

COL James K. Greer (USA-Ret.) is the Defense Threat Reduction Agency (DTRA) and Joint Improvised Threat Defeat Organization (JIDO) Integrator at the Combined Arms Command. A former cavalry officer, he served thirty years in the US Army, commanding at all levels from platoon through brigade. Jim served in operational units in CONUS, Germany, the Balkans, and the Middle East. He served in US Army Training and Doctrine Command (TRADOC), primarily focused on leader, capabilities, and doctrine development. He has significant concept development experience, co-writing concepts for Force XXI, Army After Next, and Army Transformation. Jim was the Army representative to the OSD Net Assessment 20XX Wargame Series, developing concepts for OSD and the Joint Staff. He is a former Director of the Army School of Advanced Military Studies (SAMS) and instructor in tactics at West Point. Jim is a veteran of six combat tours in Iraq, Afghanistan, and the Balkans, including serving as Chief of Staff of the Multi-National Security Transition Command – Iraq (MNSTC-I). Since leaving active duty, Jim has led the conduct of research for the Army Research Institute (ARI) and designed, developed, and delivered instruction in leadership, strategic foresight, design, and strategic and operational planning. Dr. Greer holds a Doctorate in Education, with his dissertation subject as US Army leader self-development. A graduate of the United States Military Academy, he has a Master’s Degree in Education, with a concentration in Psychological Counseling, as well as Master’s Degrees in National Security from the National War College and Operational Planning from the School of Advanced Military Studies.

111. AI Enhancing EI in War

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s) decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]

Humans are susceptible to cognitive biases, and these biases sometimes result in catastrophic outcomes, particularly in the high-stress environment of wartime decision-making. Artificial Intelligence (AI) offers the possibility of mitigating the risk of negative outcomes in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and, as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like one is communicating with a staff officer, and that has human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.

Mission Command in the Army is the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and mass the effects of combat power.ii

The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically, at much quicker speeds in order to seize the initiative in war.iii The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.

AI Staff Officer

To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem solving domains than in those that require broad understanding.iv This means that, for now, humans continue to retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi AI’s near-term niche will continue to advance rapidly in narrow domains and become a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.

The battalion is a good example of an organization with which to visualize this framework. A machine-learning software system could be connected to different staff systems to analyze the data produced by each section as it executes its warfighting functions. This machine-learning software system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander’s assessments. Over time, this EI-based machine-learning software system could rank the quality of the staff officers’ judgements. The commander could then weigh the staff officers’ assessments against those officers’ track records of reliability and the raw data provided by the staff sections’ systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm to assess the “believability” of its employees’ judgements before making high-stakes, and sometimes time-critical, international financial decisions.vii In such a multi-layered machine-learning system applied to the battalion, there would also be an assessment of the commander’s own reliability, to maximize objectivity.
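A minimal sketch of the “believability”-weighting idea described above, assuming numeric staff estimates (e.g., probabilities) and a per-officer reliability score nudged up or down as outcomes become known. The officer names, scores, and update rule are illustrative assumptions, not Bridgewater’s actual algorithm:

```python
# Believability-weighted aggregation: each staff estimate is weighted by the
# officer's track record, and track records update as outcomes become known.
# Names, values, and the update rule are illustrative assumptions.

def weighted_assessment(estimates, reliability):
    """Aggregate numeric estimates (e.g., probabilities) by believability."""
    total = sum(reliability[o] for o in estimates)
    return sum(estimates[o] * reliability[o] for o in estimates) / total

def update_reliability(reliability, officer, error, lr=0.1):
    """Move an officer's score toward (1 - error) once the outcome is known."""
    reliability[officer] += lr * ((1 - error) - reliability[officer])

reliability = {"S2": 0.8, "S3": 0.6, "S4": 0.4}   # prior track records (assumed)
estimates = {"S2": 0.7, "S3": 0.9, "S4": 0.2}     # e.g., P(enemy attacks north)

print(f"Weighted estimate: {weighted_assessment(estimates, reliability):.2f}")
update_reliability(reliability, "S4", error=0.8)   # S4 proved far off this time
print(f"S4 believability:  {reliability['S4']:.2f}")
```

Note that the same update rule can be applied to the commander’s own calls, which is the assessment of the commander’s own reliability mentioned above.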

Observation by the AI of multiple iterations of human behavioral patterns during simulations and real-world operations would improve its accuracy and enhance the trust between this type of AI system and its users. Commanders’ EI skills would be put front and center for scrutiny and could improve drastically, because commanders would consciously know, with quantifiable evidence, the cognitive-bias shortcomings of their staff at any given time. This assisted decision-making AI framework would also reinforce the commander’s intuition and decisions, as it elevates the level of objectivity in decision-making.

Human-Compatibility

The capacity to understand information broadly and conduct unsupervised learning remains the virtue of humans for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander since it supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer implies also providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.

Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and prevent conclusions that result in an AI turning on humanity. These three concepts are: 1) a principle of altruism towards the human race (and not itself); 2) maximized uncertainty, achieved by making the AI follow only human objectives without explaining what those are; and 3) learning by exposure to everything and all types of humans.ix

Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.

Making AI Work

The potential opportunities and pitfalls of employing AI in decision-making are abundant. Apart from the obvious danger of this type of system being hacked, the possibility of the AI’s machine-learning algorithms harboring biased coding inconsistent with the values of the unit employing it is real.

The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust AI assessments, integrate them into their natural decision-making process, and make them part of their intuitive calculus. In this way, commanders will have ready access to objective analyses of their units’ potential biases, enhancing their own EI, and will be able to overcome those biases to accomplish their mission.

If you enjoyed this post, please also read:

An Appropriate Level of Trust…

Takeaways Learned about the Future of the AI Battlefield

Bias and Machine Learning

Man-Machine Rules

MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.


i United States Army, “ADRP 5-0: The Operations Process,” Headquarters, Dept. of the Army, 2012, pp. 1-1.

ii Ibid. pp. 1-1 – 1-3.

iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries | English, Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.

iv Trent, Stoney, and Scott Lathrop. “A Primer on Artificial Intelligence for Military Leaders.” Small Wars Journal, 2018, smallwarsjournal.com/index.php/jrnl/art/primer-artificial-intelligence-military-leaders.

v Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2019.

vi Evans, Hayley. “Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings.” Lawfare, 2018, https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings.

vii Dalio, Ray. Principles. Simon and Schuster, 2017.

viii Trent and Lathrop.

ix Russell, Stuart, director. Three Principles for Creating Safer AI. TED: Ideas Worth Spreading, 2017, www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai.