182. “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare (Part 2)

[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). We continue our series of “Tenth Man” posts examining the foundational assumptions of The Operational Environment and the Changing Character of Future Warfare, challenging them, reviewing the associated implications, and identifying potential signals and/or indicators of change. Enjoy!]

Assumption:  The character of warfare will change but the nature of war will remain human-centric.

The character of warfare will change in the future OE as it inexorably has since the advent of flint hand axes; iron blades; stirrups; longbows; gunpowder; breech-loading, rifled, and automatic guns; mechanized armor; precision-guided munitions; and the Internet of Things. Speed, automation, extended ranges, broad and narrow weapons effects, and increasingly integrated multi-domain conduct, along with the complexity of the terrain and social structures in which warfare occurs, will make mid-Twenty-first Century warfare both familiar and utterly alien.

The nature of warfare, however, is assumed to remain human-centric in the future. While humans will increasingly be removed from processes, cycles, and perhaps even decision-making, nearly all content regarding the future OE assumes that humans will remain central to the rationale for war and its most essential elements of execution. The nature of war has remained relatively constant from Thucydides through Clausewitz, and forward to the present. War is still waged because of fear, honor, and interest, and remains an expression of politics by other means. While machines are becoming ever more prevalent across the battlefield – C5ISR, maneuver, and logistics – we cling to the belief that parties will still go to war over human interests; that war will be decided, executed, and controlled by humans.

Implications:  If these assumptions prove false, then the Army’s fundamental understanding of war in the future may be inherently flawed, calling into question established strategies, force structuring, and decision-making models. A changed or changing nature of war brings about a number of implications:

– Humans may not be aware of the outset of war. As algorithmic warfare evolves, might wars be fought unintentionally, with humans not recognizing what has occurred until effects are felt?

– Wars may be fought due to AI-calculated opportunities or threats – economic, political, or even ideological – that are largely imperceptible to human judgement. Imagine that a machine recognizes a strategic opportunity or impetus to engage a nation-state actor that is conventionally (read: humanly) viewed as weak or in a presumed disadvantaged state. The machine launches offensive operations to achieve a favorable outcome or objective that it deemed too advantageous to pass up.

– Inflicting human loss, suffering, and disruption to coerce and influence an adversary may no longer be conducive to victory. Victory may simply be a calculated or algorithmic outcome that causes an adversary’s machine to decide its own victory is unattainable.

– The actor (nation-state or otherwise) with the most robust kairosthenic power and/or most talented humans may not achieve victory. Even powers enjoying the greatest materiel advantages could see this once-reliable measure of dominion mitigated. Winning may be achieved by the actor with the best algorithms or machines.

These implications in turn raise several questions for the Army:

– How much human talent should the Army recruit and cultivate, and how, if war is no longer human-centric?

– How should forces be structured – what is the “right” mix of humans to machines if war is no longer human-centric?

– Will current ethical considerations in kinetic operations be weighed more or less heavily if humans are further removed from the equation? And what even constitutes kinetic operations in such a future?

– Should the U.S. military divest from platforms and materiel solutions (hardware) and re-focus on becoming algorithmically and digitally-centric (software)?

– What is the role for the armed forces in such a world? Will competition and armed conflict increasingly fall within the sphere of cyber forces in the Departments of the Treasury, State, and other non-DoD organizations?

– Will warfare become the default condition if fewer humans get hurt?

– Could an adversary (human or machine) trick us (or our machines) into miscalculating our response?

Signposts / Indicators of Change:

– Proliferation of AI use in the OE, with increasingly less human involvement in autonomous or semi-autonomous systems’ critical functions and decision-making; the development of human-out-of-the-loop systems.

– Technology advances to the point of near or actual machine sentience, with commensurate machine speed accelerating the potential for escalated competition and armed conflict beyond transparency and human comprehension.

– Nation-state governments approve the use of lethal autonomy, and this capability is democratized to non-state actors.

– Cyber operations have the same political and economic effects as traditional kinetic warfare, reducing or eliminating the need for physical combat.

– Smaller, less-capable states or actors begin achieving surprising or unexpected victories in warfare.

– Kinetic war becomes less lethal as robots replace human tasks.

– Other departments or agencies stand up quasi-military capabilities, have more active military-liaison organizations, or begin actively engaging in competition and conflict.

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.

… as well as our previous “Tenth Man” blog posts.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

111. AI Enhancing EI in War

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s) decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]

Humans are susceptible to cognitive biases, and these biases sometimes result in catastrophic outcomes, particularly in the high-stress environment of war-time decision-making. Artificial Intelligence (AI) offers the possibility of mitigating this susceptibility in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and, as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like a staff officer to communicate with, and that is built on human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.

Mission Command in the Army is the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and mass the effects of combat power.ii

The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically, applied here at much quicker speeds in order to seize the initiative in war.iii The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.

AI Staff Officer

To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem-solving domains than in those that require broad understanding.iv This means that, for now, humans continue to retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi AI’s near-term niche will continue to advance rapidly in narrow domains, becoming a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.

The battalion is a good organization with which to visualize this framework. A machine-learning software system could be connected to the different staff systems to analyze the data produced by each section as it executes its warfighting functions. This system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander’s assessments. Over time, this EI-based machine-learning system could rank the quality of the staff officers’ judgements. The commander could then weigh the staff officers’ assessments against those officers’ track records of reliability and the raw data provided by the staff sections’ systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm to gauge the “believability” of its employees’ judgements before making high-stakes, and sometimes time-critical, international financial decisions.vii Such a multi-layered machine-learning system applied to the battalion would also assess the commander’s own reliability, to maximize objectivity.
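To make the “believability” idea concrete, here is a minimal Python sketch of track-record-weighted assessment. It is an illustrative assumption of how such a system might score judgements, not Bridgewater’s actual algorithm or any fielded Army system; the staff positions, histories, and estimates are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StaffOfficer:
    name: str
    correct: int = 0   # past judgements that matched observed outcomes
    total: int = 0     # past judgements scored

    @property
    def believability(self) -> float:
        # Track-record weight; defaults to 0.5 when no history exists.
        return self.correct / self.total if self.total else 0.5

def weighted_assessment(assessments: dict) -> float:
    """Combine staff estimates (e.g., the probability a course of action
    succeeds), weighting each officer's estimate by believability."""
    total_weight = sum(officer.believability for officer in assessments)
    return sum(officer.believability * estimate
               for officer, estimate in assessments.items()) / total_weight

# Hypothetical example: three staff officers estimate a COA's success.
s2 = StaffOfficer("S2", correct=8, total=10)  # strong track record
s3 = StaffOfficer("S3", correct=5, total=10)  # average track record
s4 = StaffOfficer("S4", correct=2, total=10)  # weak track record

print(f"{weighted_assessment({s2: 0.9, s3: 0.6, s4: 0.2}):.2f}")  # ~0.71
```

The design choice worth noting is that the weights are earned from scored history rather than assigned by rank, which is also what would let such a system rate the commander’s own reliability on the same terms.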

Observations by the AI of multiple iterations of human behavioral patterns during simulations and real-world operations would improve its accuracy and enhance the trust between this type of AI system and its users. Commanders’ EI skills would be put front and center for scrutiny, and could improve drastically: commanders would consciously bear the responsibility of knowing, with quantifiable evidence, the cognitive-bias shortcomings of their staff at any given time. This assisted decision-making AI framework would also reinforce the commander’s intuition and decisions as it elevates the level of objectivity in decision-making.

Human-Compatibility

The capacity to understand information broadly and conduct unsupervised learning remains the virtue of humans for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander since it supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer implies also providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.

Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and prevent conclusions that result in an AI turning on humanity. These three concepts are: 1) altruism towards the human race (and not itself), so the AI’s only objective is to realize human values; 2) humility, so the AI pursues only human objectives while remaining uncertain about what those objectives are; and 3) learning, by exposing the AI to everything and all types of humans so it can infer those objectives from observed behavior.ix

Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.
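As a toy sketch of the second and third principles (an illustrative assumption, not Russell’s actual formalism; the candidate objectives and likelihoods below are invented), an agent built this way never fixes a goal. It keeps a probability distribution over candidate human objectives and updates that belief from observed human choices:

```python
# Toy illustration: the agent stays uncertain about the human's true
# objective and refines its belief by observing the human's choices.

# Candidate objectives the commander might hold, with a uniform prior.
beliefs = {"minimize_casualties": 1/3,
           "preserve_infrastructure": 1/3,
           "speed_of_advance": 1/3}

# Assumed likelihood of each observed choice under each objective.
likelihoods = {
    "chose_slow_safe_route":     {"minimize_casualties": 0.7,
                                  "preserve_infrastructure": 0.5,
                                  "speed_of_advance": 0.1},
    "declined_artillery_strike": {"minimize_casualties": 0.6,
                                  "preserve_infrastructure": 0.8,
                                  "speed_of_advance": 0.2},
}

def update(beliefs: dict, observation: str) -> dict:
    """Bayes' rule: P(objective | choice) is proportional to
    P(choice | objective) * P(objective)."""
    posterior = {obj: likelihoods[observation][obj] * p
                 for obj, p in beliefs.items()}
    norm = sum(posterior.values())
    return {obj: p / norm for obj, p in posterior.items()}

for choice in ("chose_slow_safe_route", "declined_artillery_strike"):
    beliefs = update(beliefs, choice)

# Belief shifts toward minimizing casualties / preserving infrastructure.
print(beliefs)
```

An agent that acts cautiously wherever this distribution remains spread out is, in miniature, what makes the construct protective: it defers to humans precisely when it is least sure of their objectives.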

Making AI Work

The potential opportunities and pitfalls are abundant for the employment of AI in decision-making. Apart from the obvious danger of this type of system being hacked, the possibility of the AI machine-learning algorithms harboring biased coding inconsistent with the values of the unit employing it is real.

The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust and integrate AI assessments into their natural decision-making process and make them part of their intuitive calculus. In this way, they will have ready access to objective analyses of their units’ potential biases, enhancing their own EI and enabling them to overcome those biases to accomplish their mission.

If you enjoyed this post, please also read:

An Appropriate Level of Trust…

Takeaways Learned about the Future of the AI Battlefield

Bias and Machine Learning

Man-Machine Rules

MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.


i United States Army. ADRP 5-0: The Operations Process. Headquarters, Department of the Army, 2012, p. 1-1.

ii Ibid., pp. 1-1 – 1-3.

iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.

iv Trent, Stoney, and Scott Lathrop. “A Primer on Artificial Intelligence for Military Leaders.” Small Wars Journal, 2018, smallwarsjournal.com/index.php/jrnl/art/primer-artificial-intelligence-military-leaders.

v Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2019.

vi Evans, Hayley. “Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings.” Lawfare, 2018, https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings.

vii Dalio, Ray. Principles. Simon and Schuster, 2017.

viii Trent and Lathrop.

ix Russell, Stuart. “3 Principles for Creating Safer AI.” TED: Ideas Worth Spreading, 2017, www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai.

52. Potential Game Changers

The Mad Scientist Initiative brings together cutting-edge leaders and thinkers from the technology industry, research laboratories, academia, and across the military and Government to explore the impact of potentially disruptive technologies. Much like Johannes Gutenberg’s moveable type, these transformational game changers have the potential to impact how we live, create, think, and prosper. Understanding their individual and convergent impacts is essential to continued battlefield dominance in the Future Operational Environment. In accordance with The Operational Environment and the Changing Character of Future Warfare, we have divided this continuum into two distinct timeframes:

The Era of Accelerated Human Progress (Now through 2035):
The period in which our adversaries can take advantage of new technologies, new doctrine, and revised strategic concepts to effectively challenge U.S. military forces across multiple domains. Game changers during this era include:

• Robotics: Forty-plus countries are developing military robots with some level of autonomy. Impact on society and employment.
Vulnerable: To Cyber/Electromagnetic (EM) disruption; battery life; ethics without a man in the loop.
Formats: Unmanned/autonomous ground/air vehicles and subsurface/sea systems. Nano-weapons.
Examples: (Air) Hunter/killer Unmanned Aerial Vehicle (UAV) swarms; (Ground) Russian Uran: recon, ATGMs, SAMs.

• Artificial Intelligence: Human-Agent Teaming, where humans and intelligent systems work together to achieve either a physical or mental task. The human and the intelligent system will trade off cognitive and physical loads in a collaborative fashion.

• Swarms/Semi-Autonomous: Massed, coordinated, fast, collaborative, small, stand-off. Overwhelm target systems. Mass or disaggregate.

• Internet of Things (IoT): Trillions of internet linked items create opportunities and vulnerabilities. Explosive growth in low Size Weight and Power (SWaP) connected devices (Internet of Battlefield Things), especially for sensor applications (situational awareness). Greater than 100 devices per human. Significant end device processing (sensor analytics, sensor to shooter, supply chain management).
Vulnerable: To Cyber/EM/Power disruption. Privacy concerns regarding location and tracking.
Sensor to shooter: Accelerate kill chain, data processing, and decision-making.

• Space: Over 50 nations operate in space; increasingly congested and difficult to monitor; endangers Positioning, Navigation, and Timing (PNT).

GPS Jamming/Spoofing: Increasingly sophisticated; used successfully in Ukraine.
Anti-Satellite: China has tested two direct-ascent anti-satellite missiles.

The Era of Contested Equality (2035 through 2050):
The period marked by significant breakthroughs in technology and convergences in terms of capabilities, which lead to significant changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes that, by the end of this timeframe, may even challenge the very nature of warfare itself. Game changers during this era include:

• Hyper Velocity Weapons:
Rail Guns (Electrodynamic Kinetic Energy Weapons): Electromagnetic projectile launchers. High velocity/energy and speed (Mach 5 or higher). Not powered by explosives.
No Propellant: Easier to store and handle.
Lower Cost Projectiles: Potentially; extreme G-forces require sturdy payloads.
Limiting Factors: Power; significant IR signature; materials science.
Hyper Glide Vehicles: Less susceptible to anti-ballistic missile countermeasures.

• Directed Energy Weapons: Signature not visible without technology; must dwell on target. Power requirements currently problematic.
Potential: Tunable, lethal, and non-lethal.
Laser: Directed energy damages the intended target. Targets: counter-aircraft, UAS, missiles, projectiles, sensors, swarms.
Radio Frequency (RF): Attacks targets across the frequency spectrum. Targets: not just RF; microwave weapons “cook” targets, both people and electronics.

• Synthetic Biology: Engineering/modification of biological entities.
Increased Crop Yield: Potential to reduce food scarcity.
Weaponization: Potential for micro-targeting; seek-and-destroy microbes that can target specific DNA. Potentially accessible to super-empowered individuals.
Medical Advances: Enhance soldier survivability.
Genetic Modification: Disease resistance; potentially designer babies and super athletes/soldiers. Synthetic DNA stores digital data, which can be used for micro-targeting.
CRISPR: Genome editing.

• Information Environment: Use IoT and sensors to harness the flow of information for situational understanding and decision-making advantage.

In envisioning Future Operational Environment possibilities, the Mad Scientist Initiative employs a number of techniques. We have found Crowdsourcing (i.e., the gathering of ideas, thoughts, and concepts from a wide variety of interested individuals) to be a particularly effective technique, as it assists us in diversifying thought and challenging conventional assumptions. To that end, we have published our latest, 2-page compendium of Potential Game Changers here — we would like to hear your feedback regarding them. Please let us know your thoughts / observations by posting them in this blog post’s Comment box (found below, in the Leave a Reply section). Alternatively, you can submit them to us via email at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil. Thank you in advance for your contributions!

47. Quanta of Competition

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by repeat guest blogger Mr. Victor R. Morris. Strap in and prepare yourselves for a mind-expanding discussion applying quantum field theory to political warfare and the extended battlefield via the competition field!
Mr. Morris’ previous post, addressing the cross-domain effects of human-machine networks, may be read here.]

The competition field is a field of fields. It is the unification of the physical, information, electromagnetic and cyber, political warfare, and extended military battle fields, manifested through cross-field synergy and an information feedback loop.

The competition field interacts with the physical, information, and cyber and electromagnetic fields. Political warfare and extended military battle are field quanta and reach excitable states due to cross-field synergy and information exchange. These excitable states are unpredictable, yet measurable via probability in the competition continuum. The measurements correlate to the information feedback loop of relative and finite information. The feedback loop results from system interactions, decision-making, effects, and learning. Learning drives interactions, ensuring information exchange in the competition continuum.

The competition field concept was developed from quantum mechanics, multi-domain battle operational frameworks, and geostrategic competition fundamentals to address grand strategy design, long-term, strategic inter-state competition, and non-state actor considerations in macro scale and spacetime.

The concept applies quantum field theory to political warfare and the “extended battlefield,” where Joint and multinational systems are the quanta of these fields, prone to excitable states like field quanta. In quantum mechanics, “quanta” refers to the minimum amount of physical entity involved in an interaction, like a photon or bit. The concept also unites the “Gray Zone” with the political warfare field interacting with the extended military battlefield.

Multi-domain battle and gray zone phenomena result from interactions in the extended military battle and political warfare fields. In quantum field theory, “interactions” refer to particles and corresponding underlying quantum fields. The competition field is the fundamental starting point for strategy design and system of systems thinking.

War/conflict, “Gray Zone,” and peace manifest based on uncertain, yet probability-determined interactions that drive decision-making, effects, and learning, continuing the feedback loop of finite information. In the competition field, competition is relative, or relational, to information. Information does not measure what is known, but the probabilities of what may occur. The competition field correlates the scientific and granular notions of information with the Operational Environment’s fields (also called domains) and physical systems during interactions. Systems are quantized like subatomic particles, in the form of Centers of Gravity (COG), subsystems, critical factors, flows, nodes, and entities.
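To ground that scientific notion of information, a minimal sketch of the standard Shannon measure follows (the states and probabilities are illustrative assumptions, not part of the competition field concept itself):

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy: the average information, in bits, gained when
    one of several mutually exclusive alternatives is observed."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Illustrative distribution over competition states for one interaction.
states = {"war": 0.1, "gray_zone": 0.3, "peace": 0.6}
print(f"{entropy_bits(states.values()):.2f} bits")  # ~1.30 bits of uncertainty
```

In this sense, each interaction in the competition continuum resolves some of that uncertainty, which is what the feedback loop of finite information exchanges.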

System and particle interactions are uncertain; they are not the deterministic predictions found in Newtonian physics or in exporting security as a preventive war strategy. Measures short of war, and war itself (i.e., violent or armed competition), are interactions in the competition field based on convergence, acceleration, force, distance, time, and other variables. Systems or things do not enter into relations; rather, relations ground the notion of the system.

The information environment is also a field of fields. It exists with the physical, electromagnetic, cyberspace, and space-time fields in the competition field. In Joint doctrine, this is the holistic operational environment. Quantum mechanics’ granularity, relationality, and uncertainty are described in this field’s cognitive, informational, and physical dimensions.

These dimensions or fields include the quanta of human beings, Internet of Things (IoT), data, and individual or group decision-making. The cognitive dimension encompasses the minds of those who transmit, receive, and respond to or act on information.

The cognitive dimension is the most important component of the information environment and influences decision-making in the competition field. The scientific notion of information and the measurement of probabilities of occurrence are the largest contributors to understanding both quantum physics and the concept of competition.

Colonel John Boyd, a military strategist, was a student of Sun Tzu and Clausewitz and studied military history to see where concepts overlapped and diverged. He knowingly or unknowingly described quantum mechanics’ postulates when he critiqued Clausewitz’s center of gravity concept. He suggested finding the thing that allows the organic whole to stay connected and breaking down those connections.

In theories of quantum gravity, that “thing” is the quantum of gravity, hypothetically called the graviton. In this assessment, it is the quantum of competition. The quanta of competition are not in competition; they are themselves competition and are described by links and the relations they express. The quanta of competition are also suited for quantum biology, since they involve both biological and environmental objects and problem sets.

Additionally, what Clausewitz described as polarity, intelligence, and friction are information at the quantum state. Position, momentum, spin, and the polarization of entangled particles are measured and correlated. The constant exchange of relevant and irrelevant information occurs as competition field quanta interact in the competition continuum.

In this vision, Joint and multinational systems are their own fields, oscillating in the political and extended military battle fields. Interactions manifest forces to exploit windows of superiority, seize the initiative, and attain positions of relative advantage in the competition continuum. Interagency and intergovernmental systems are also manifested in granular and relational manners to enable these objectives. This is only possible through combination, cooperation, and information.

The competition field attempts to explain the relationship between the holistic operational environment and physical systems, bridging quantum mechanics and geostrategic competition constructs.

Clausewitz said, “War is merely a continuation of policy by other means.” Policy is a continuation of processes and events between interactions. Lethal or non-lethal effects are based on the measurement of possible alternatives enumerated by reciprocal information and the ability to make decisions in the competition field.

Victor R. Morris is a civilian irregular warfare and threat mitigation instructor at the Joint Multinational Readiness Center (JMRC) in Germany.