200. Broadening our Aperture on the Operational Environment

[Editor’s Note: Like many of our readers, Mad Scientist Laboratory is starting off the new year with a bit of introspection…. As we continue to focus on the Operational Environment (OE) and the changing character of warfare, we find ourselves wondering if we aren’t getting a little too comfortable and complacent with what we think we know and understand. Are we falling victim to our own confirmation biases, risking total surprise by something lurking just over the horizon, beyond our line of sight? To mitigate this, Mad Scientist has resolved to broaden our aperture on the OE this year. Today’s post describes several near-term initiatives that will help expand our understanding of the full extent of OE possibilities to preclude our being sucker-punched. Help Mad Scientist by participating — share your ideas, pass on these opportunities to your colleagues, and actively engage in these events and activities! Happy 2020!]

Global Perspectives in the Operational Environment
The U.S. Army’s Mad Scientist Initiative will co-host our first conference this year with the Army Futures Command (AFC) and the U.S. Army Training and Doctrine Command (TRADOC) International Army Programs Directorate (IAPD) on 29 January 2020. Leveraging TRADOC’s Foreign Liaison Officer community to coordinate presentations by subject matter experts from their respective nations, this virtual, on-line conference will provide international perspectives on a diverse array of topics affecting the OE. Mark your calendar now to livestream this conference at www.tradoc.army.mil/watch, starting at 0830 EST (note that this link is not live until the conference).

Global Perspectives Conference Survey
In conjunction with the aforementioned conference, Mad Scientist is conducting an on-line survey querying your thoughts about the OE. We want your input, so take ~5 minutes to complete our short survey here. We will brief back our interim findings during the conference, then publish a blog post documenting the comprehensive survey results in February.  Stay tuned to the Mad Scientist Laboratory to learn what insights we will have gleaned from the international community regarding potential OE trends, challenges, technologies, and disruptors.

Project on International Peace and Security (PIPS)
Seeking insights into a younger demographic’s perspectives on the OE, Mad Scientist will livestream presentations by fellows from The College of William and Mary in Virginia’s PIPS Program on 21 February 2020. This program is designed to bridge the gap between the academic and foreign policy communities in the area of undergraduate education. PIPS research fellows identify emerging international security issues and develop original policy recommendations to address those challenges. Undergraduate fellows have the chance to work with practitioners in the military and intelligence communities, and present their work to policy officials and scholars at a year-end symposium in Washington, DC. Topic areas presented at the Mad Scientist livestream event will include weaponized information, artificial intelligence, and bio convergence — representing a year’s worth of research by each of the fellows.

The Operational Environment in 2035 Mad Scientist Writing Contest
Crowdsourcing is an effective tool for harvesting ideas, thoughts, and concepts from a wide variety of interested individuals, helping to diversify thought and challenge conventional assumptions. Mad Scientist’s latest writing contest seeks to harness diverse intellects to mine new knowledge and imagine the possibilities of the OE in 2035. This contest is open to everyone around the globe. We are seeking submissions of no more than 2,000 words in length — nonfiction only, please! Topics of interest include:

    • What new skills and talent management techniques will be required by the Army in 2035?
    • What does the information landscape look like in 2035? Infrastructure? Computing? Communication? Media?
    • What can we anticipate in the Competition phase (below armed Conflict) and how do we prepare future Soldiers and Leaders for these challenges?
    • What does strategic, operational, and tactical (relative) surprise look like in 2035?
    • What does Multi-Domain Command and Control look like on the battlefield in 2035?
    • How do we prepare for the second move in a future conflict?
    • Which past battle or conflict best represents the challenges we face in the future and why?
    • What technology or convergence of technologies could provide a U.S. advantage by 2050?

For additional information on this writing contest, click here. Deadline for submission is 1 March 2020, so start outlining your entry today!

By participating in each of these events, you will enhance the Mad Scientist Initiative’s understanding of the OE and help the U.S. Army prepare for an extended array of future possibilities.

 

199. “Intelligentization” and a Chinese Vision of Future War

[Editor’s Note: While Monday’s post explored a U.S. perspective on Artificial Intelligence (AI) integration into military operations, today’s article, excerpted from this month’s OE Watch, addresses China’s embrace of “Intelligentization.” Intelligentization is the uniquely Chinese concept of applying AI’s machine speed and processing power to military planning, operational command, and decision support. In her testimony before the U.S.-China Economic and Security Review Commission Hearing on Trade, Technology, and Military-Civil Fusion earlier this year, proclaimed Mad Scientist Elsa Kania stated that President Xi Jinping, in his report to the 19th Party Congress in October 2017, “urged the PLA to ‘Accelerate the development of military intelligentization’ (军事智能化)…. This authoritative exhortation has elevated the concept of ‘intelligentization’ as a guiding principle for the future of Chinese military modernization.” What is unique about the PLA’s approach to implementing AI in force modernization is that they do not seek to merely integrate AI into existing warfighting functions; rather, they are using it to shape a new, cognitive domain and thus revolutionize their entire approach to warfighting — Read on!]

In today’s world of rapidly developing concepts and technologies, many theories are emerging about what warfare will resemble in the future. Nowhere does this seem truer than in China, where scholars, researchers, and scientists are putting their thoughts to paper, such as the accompanying article, which looks at how “intelligentization” will change the structure and outcome of warfare.

The thought-provoking article (below) was republished in various journals, such as Jiefangjun Bao, the official newspaper of the People’s Republic of China’s Central Military Commission, and Qiushi Journal, which falls under the Central Party School and the Central Committee of the Communist Party of China. It looks at how intelligentized warfare, a term commonly used by Chinese scholars, is expected to redraw the boundaries of warfare, restructure combat forces, and reshape the rules of engagement. Some of the more salient points worth pondering are highlighted in the accompanying excerpted passages.

The article claims that the way combat power manifests itself will inevitably change because artificial intelligence is rapidly infiltrating military operations. Traditional battlefields and battlefronts will “be hard to reproduce.” The current battle domains of warfare (the physical dimensions of land, sea, air, and space and the informational dimensions of electromagnetic and cyber) will be joined by a new battle domain, the cognitive domain, which falls under a third, cognitive dimension.

Intelligentized warfare will see the integration of military and non-military domains, and the boundary between peacetime and wartime will get increasingly blurred. The outcome of a war will not be determined by who destroys whom in a kinetic sense, but rather by who gains maximum political benefit. Intelligentized warfare will see the integration of human and machine intelligence. It will reshape warfighting in every dimension and within every realm. Human fighters will eventually cease to be the first line of fighting, and intelligent systems will prevail. “Human-on-human” warfare will be replaced by “machine-on-human” or “machine-on-machine” warfare.

Combining humans and machines through brain-machine interfaces, external skeletal systems, wearable devices, and gadgets implanted into human bodies will “comprehensively enhance the inherent cognitive and physiological capacity of human fighters and will forge out superman combatants.” Intelligentized warfare will upend traditional rules of military engagement. Cross-domain unconventional and asymmetrical fighting in military engagements will become the new normal. Unmanned operations will rewrite the rules of engagement and reshape the support process. Intelligent control will become the center of gravity.

Based on the article, one might surmise that the military tactics of yesterday and today are not the area in which the People’s Liberation Army is likely to place much effort, if any at all. With artificial intelligence and other technologies rapidly gaining ground, China seems keener on leading the curve in the long term than honing tactics in the immediate future. End OE Watch Commentary (Hurst)

“The cognitive domain will become another battle domain next to the land, sea, air, space, electromagnetic, and cyber domains of warfare.”

Yang Wenzhe, “在变与不变中探寻智能化战争制胜之道 (Seeking the Way to Win Intelligentized Warfare by Analyzing What Is Changed and What Is Unchanged),” Jiefangjun Bao, 22 October 2019.

Seeking the Way to Win Intelligentized Warfare by Analyzing What Is Changed and What Is Unchanged

…With AI technology rapidly infiltrating into the military domain, it will inevitably lead to a thorough change in the way combat power manifests itself. … The cognitive domain will become another battle domain next to the land, sea, air, space, electromagnetic, and cyber domains of warfare. …the three major warfighting dimensions, that is, the physical dimension, the informational dimension, and the cognitive dimension. The boundaries of war will extend into the deep land, deep sea, deep air, deep cyber, and deep brain domains… Intelligentized warfare will be generalized to all military conflicts and rivalries, giving rise to a more striking feature of integration between military and non-military domains. The scope of warfighting will expand to the extremes. The boundary between peacetime and wartime will get increasingly blurred.

Gaining political benefits is an invariable standard for measuring winning in war.… Military victories must guarantee political predominance.

Human fighters will fade away from the first line of fighting. Intelligent equipment will be brought onto the battlefield in large quantities and as whole units. “Human-on-human” warfare in the traditional sense will be superseded by “machine-on-human” or “machine-on-machine” warfare.

Such means of human-machine combination as brain-machine interfaces, external skeletal systems, wearable devices, gadgets implanted into human bodies will comprehensively enhance the inherent cognitive and physiological capacity of human fighters, and will forge out “superman combatants”…

…Cross-domain unconventional and asymmetrical fighting will become a new normal in military engagements… Unmanned operations, as a prominent hallmark of the new warfighting pattern, will rewrite the rules of engagement and reshape the support processes. Intelligence control will replace space control as the center of gravity in war.

The race is on between the U.S. and its near-peer competitors, China and Russia, to develop and incorporate AI into their respective defense modernization efforts. As Russian President Vladimir Putin stated in 2017, “whoever becomes the leader in this sphere will become the ruler of the world.” China understands this, has embraced it at the national level, and is forging ahead with the intent to dominate the cognitive domain through intelligentization. Per Ms. Kania, the resultant “system of systems consisting of people, weapons equipment, and ways of combat… involve[s] not only intelligent weaponry but also concepts of human-machine integration (人机一体) and intelligence leading (智能主导). In practice, the PLA’s agenda for intelligentization may prove quite expansive, extending across all concepts in which AI might have military relevance in enabling and enhancing war-fighting capabilities, from logistics to early warning and intelligence, military wargaming, and command decision-making.”

If you enjoyed this post, please also see:

The AI Titan’s Security Dilemmas, by Ms. Elsa Kania.

China’s Drive for Innovation Dominance, derived from Ms. Kania’s People’s Liberation Army (PLA) Human-Machine Integration briefing, presented at the Mad Scientist Bio Convergence and Soldier 2050 Conference on 9 March 2018 at SRI International’s Silicon Valley campus in Menlo Park, California.

A Closer Look at China’s Strategies for Innovation: Questioning True Intent, by Ms. Cindy Hurst.

Integrating Artificial Intelligence into Military Operations, by Dr. James Mancillas, exploring AI implementation through an OODA lens.

The OE Watch, December issue, by the TRADOC G-2’s Foreign Military Studies Office (FMSO), featuring this piece and other articles of interest.

198. Integrating Artificial Intelligence into Military Operations

[Editor’s Note: Mad Scientist Laboratory is pleased to excerpt today’s post from Dr. James Mancillas’ paper entitled Integrating Artificial Intelligence into Military Operations: A Boyd Cycle Framework (a link to this complete paper may be found at the bottom of this post). As Dr. Mancillas observes, “The conceptual employment of Artificial Intelligence (AI) in military affairs is rich, broad, and complex. Yet, while not fully understood, it is well accepted that AI will disrupt our current military decision cycles. Using the Boyd cycle (OODA loop) as an example, “AI” is examined as a system-of-systems; with each subsystem requiring man-in-the-loop/man-on-the-loop considerations. How these challenges are addressed will shape the future of AI enabled military operations.” Enjoy!]

Success in the battlespace is about collecting information, evaluating that information, then making quick, decisive decisions. Network Centric Warfare (NCW) demonstrated this concept during the emerging phases of information age warfare. As the information age has matured, adversaries have adopted its core tenet — winning in the decision space is winning in the battle space.1 The competitive advantage that may have once existed has eroded. Additionally, the principal feature of information age warfare — the ability to gather, store, and communicate data — has begun to exceed human processing capabilities.2 Maintaining a competitive advantage in the information age will require a new way of integrating an ever-increasing volume of data into a decision cycle.

Future AI systems offer the potential to continue maximizing the advantages of information superiority, while overcoming limits in human cognitive abilities. AI systems, with their near endless and faultless memory, lack of emotional investment, and potentially unbiased analyses, may continue to complement future military leaders with competitive cognitive advantages. These advantages may only emerge if AI is understood, properly utilized, and integrated into a seamless decision process.

The OODA (Observe, Orient, Decide, and Act) Loop provides a methodical approach to explore: (1) how future autonomous AI systems may participate in the various elements of decision cycles; (2) what aspects of military operations may need to change to accommodate future AI systems; and (3) how implementation of AI and its varying degrees of autonomy may create a competitive decision space.3
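
As an illustration of this framework, the sketch below (not drawn from Dr. Mancillas’ paper) models the OODA loop in Python as a cycle of four pluggable stages, so that any single slot could be filled by a human operator or by an AI subsystem at some degree of autonomy. All class and field names are invented for the example.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class OODAAgent:
    """Minimal OODA (Observe, Orient, Decide, Act) cycle driver.

    Each stage is a pluggable callable, so a human operator or an AI
    subsystem (at varying degrees of autonomy) can fill any slot.
    """
    observe: Callable[[], Any]    # gather raw environmental data
    orient: Callable[[Any], Any]  # place the data in context
    decide: Callable[[Any], Any]  # select a course of action (COA)
    act: Callable[[Any], None]    # execute the selected COA

    def run_cycle(self) -> None:
        observation = self.observe()
        situation = self.orient(observation)
        course_of_action = self.decide(situation)
        self.act(course_of_action)

# Example: a trivially instrumented loop with stand-in stages.
agent = OODAAgent(
    observe=lambda: {"contact": "unknown vehicle", "range_km": 4.2},
    orient=lambda obs: {**obs, "assessment": "possible recon element"},
    decide=lambda sit: "report and continue observation",
    act=lambda coa: print(f"Executing: {coa}"),
)
agent.run_cycle()
```

The value of the decomposition is that autonomy can then be granted, or withheld, stage by stage rather than for the system as a whole.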

Observe
The observe function can be automated using AI systems, either as a singular activity or as part of a broader integrated analysis. Observation requires sophisticated AI analyses and systems, and within these systems various degrees of autonomy can be applied. Because observe is a combination of different activities, the degree of autonomy for scanning the environment may differ from the degree of autonomy for recognizing potentially significant events. Varying degrees of autonomy may be applied to very specific tasks integral to scanning and recognizing.

High autonomous AI systems may be allowed to select or alter scan patterns, times and frequencies, boundary conditions, and other parameters; potentially including the selection of the scanning platforms and their sensor packages. High autonomous AI systems, integrated into feedback systems, could also alter and potentially optimize the scanning process, allowing AI systems to independently assess the effectiveness of previous scans and explore alternative scanning processes.

Low autonomous AI systems might be precluded from altering pattern recognition parameters or thresholds for flagging an event as potentially significant. In this domain, AI systems could perform potentially complex analyses, but with limited ability to explore alternative approaches to examine additional environmental data.
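
One way to picture this task-by-task granting of autonomy is as a configuration gate: the degree of autonomy determines which scanning parameters the system may alter on its own. The toy Python sketch below assumes a hypothetical 0-2 autonomy scale with invented parameter names and thresholds; it is illustrative, not a fielded design.

```python
from dataclasses import dataclass, field

@dataclass
class ScanConfig:
    pattern: str = "raster"
    revisit_minutes: int = 15
    detection_threshold: float = 0.8  # confidence required to flag an event

@dataclass
class ObserverAI:
    autonomy_level: int  # hypothetical scale: 0 = fixed, 1 = retune timing, 2 = full
    config: ScanConfig = field(default_factory=ScanConfig)

    def adapt(self, missed_detection_rate: float) -> None:
        # Level 1+: may retune scan timing within bounds after poor performance.
        if self.autonomy_level >= 1 and missed_detection_rate > 0.10:
            self.config.revisit_minutes = max(5, self.config.revisit_minutes // 2)
        # Level 2: may also explore an alternative scanning process and
        # lower the threshold for flagging potentially significant events.
        if self.autonomy_level >= 2 and missed_detection_rate > 0.25:
            self.config.pattern = "adaptive_spiral"
            self.config.detection_threshold = 0.7

low = ObserverAI(autonomy_level=0)   # human sets every parameter
high = ObserverAI(autonomy_level=2)  # may alter patterns and thresholds
for ai in (low, high):
    ai.adapt(missed_detection_rate=0.3)
    print(ai.autonomy_level, ai.config)
```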

When AI systems operate as autonomous observation systems, they could easily be integrated into existing doctrine, organizations, and training. Differences between AI systems and human observers must be taken into account, especially when we consider manned and unmanned mixed teams. For example: AI systems could operate with human security forces, each with potentially different endurance limitations. Sentry outpost locations and configurations described by existing Field Manuals may need to be revised to address differing considerations for AI systems, i.e., safety, degrees of autonomy, communication, physical capabilities, dimensions, and integration issues with human forces.

The potential for ubiquitous and ever-present autonomous AI observation platforms presents a new dimension to informational security. The possibility of persistent, covert, and mobile autonomous observation systems offers security challenges that we have only just begun to understand. Information security within the cyber domain is just one example of the emerging challenges that AI systems can create as they continue to influence the physical domain.

Orient
Orient comprises the processes and analyses that establish the relative significance and context of the signal or data observed. An observation in its original raw form is unprocessed data of potential interest. The orientation and prioritization of that observation begin when the observation is placed within the context of (among other things) previous experiences, organizational / cultural / historic frameworks, or other observations.

One of the principal challenges of today’s military leader is managing the ever-increasing flow of information available to them. The ease and low cost of collecting, storing, and communicating data have resulted in a supply of data that exceeds the cognitive capacity of most humans.4 As a result, numerous approaches are being considered to maximize the capability of commanders to prioritize data and develop data-rich common operating pictures.5 These approaches include improved graphics displays as well as virtual reality immersion systems. Each is designed to give a commander access to ever larger volumes of data. When commanders are saturated with information, however, further optimizing the presentation of too much data may not significantly improve battlespace performance.

The emergence of AI systems capable of contextualizing data has already begun. The International Business Machines (IBM) Corporation has already fielded advanced cognitive systems capable of performing complex analyses at a near-human level.6 It is expected that this trend will continue and that AI systems will continue to displace humans performing many staff officer “white collar” activities.7 Much of the analysis performed by existing systems, i.e., identifying market trends or evaluating insurance payouts, has been in environments with reasonably understood rules and boundaries.

Autonomy issues associated with AI systems orienting data and developing situational awareness pictures are complex. AI systems operating with high autonomy can independently prioritize data; add or remove data from an operational picture; possibly de-conflict contradictory data streams; change informational lenses; and set priorities and hierarchies. High autonomous AI systems could continuously ensure the operational picture is the best reflection of the current information. The tradeoff to this “most accurate” operational picture might be a rapidly evolving operational picture with little continuity that could confound and confuse system users. This type of AI might require blind-faith acceptance of the picture presented.

At the other end of the spectrum, low autonomous AI systems might not explore alternative interpretations of data. These systems may use only prescribed informational lenses, and data priorities established by system users or developers. The tradeoff for a stable and consistent operational picture might be one that is biased by the applied informational lenses and data applications. This type of AI may just show us what we want to see.
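
This tradeoff between a stable, prescribed picture and an adaptive, possibly disorienting one can be sketched in a few lines of Python. In this hypothetical example, a low-autonomy system ranks reports strictly by user-set lens weights, while a high-autonomy system may quietly reweight a source it judges contradictory; the source names, weights, and de-confliction rule are all invented for illustration.

```python
from typing import Dict, List, Tuple

def orient(reports: List[Tuple[str, str]],
           lens_weights: Dict[str, float],
           autonomous: bool = False) -> List[Tuple[str, str]]:
    # Rank raw (source, report) pairs into an operational picture using an
    # "informational lens" of per-source priority weights.
    weights = dict(lens_weights)
    if autonomous:
        # Hypothetical high-autonomy step: when two sources contradict,
        # de-weight one instead of presenting both equally. The rule and
        # source names are invented for this sketch.
        sources = {src for src, _ in reports}
        if {"SIGINT", "HUMINT"} <= sources:
            weights["HUMINT"] *= 0.5
    return sorted(reports, key=lambda r: weights.get(r[0], 0.0), reverse=True)

reports = [("HUMINT", "bridge intact"), ("SIGINT", "bridge destroyed")]
prescribed = {"SIGINT": 1.0, "HUMINT": 1.0}
print(orient(reports, prescribed))                   # stable, prescribed picture
print(orient(reports, prescribed, autonomous=True))  # adaptive, may surprise users
```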

Additional considerations arise concerning future human-AI collaborations. Generic AI systems that prioritize information based on a set of standard rules may not provide the optimal human-AI pairing. Instead, AI systems that are adapted to complement a specific leader’s attributes may enhance that leader’s decision-making. These man-machine interfaces could be developed over an entire career. As such, there may be a need to ensure flexibility and portability in autonomous systems, to allow leaders to transition from job to job and retain access to AI systems that are “optimized” for their specific needs.

The use of AI systems for the consolidation, prioritization, and framing of data may require a review of how military doctrine and policy guide the use of information. As with the development of rules of engagement, doctrine and policy will face challenges in developing rules of information framing — potentially prescribing or restricting the use of informational lenses. Under a paradigm where AI systems could implement doctrine and policy without question or moderation, a policy change might create a host of unanticipated consequences.

Additionally, AI systems capable of consolidating, prioritizing, and evaluating large streams of data may invariably displace the staff that currently performs those activities.8 This restructuring could preserve high level decision making positions, while vastly reducing personnel performing data compiling, logistics, accounting, and other decision support activities. The effect of this restructuring might be the loss of many positions that develop the judgment of future leaders. As a result, increased automation of data analytics, and subsequent decreases in the staff supporting those activities, may create a shortage of leaders with experience in these analytical skills and tested judgment.

Decide
Decide is the process used to develop and then select a course of action to achieve a desired end state. Prior to selecting a course of action, the military decision-making process requires development of multiple courses of action (COAs) and consideration of their likely outcomes, followed by the selection of the COA with the preferred outcome.

The basis for developing COAs and choosing among them can be categorized as rules-based or values-based decisions. If an AI system is using a rules-based decision process, there is inherently a human-in-the-loop, regardless of the level of AI autonomy. This is because human value judgments are inherently contained within the rule development process. Values-based decisions explore ends, ways, and means through the lenses of feasibility and suitability, while also potentially addressing issues of acceptability and/or risk. Values-based decisions are generally associated with subjective value assessments and greater dimensionality, and generally contain some legal, moral, or ethical qualities. The generation of COAs and their selection may involve substantially more nuanced judgments.
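
The distinction can be made concrete with a toy course-of-action selector, sketched below under invented COAs, attributes, and weights. In the rules-based path the human value judgments live in the hand-written filters (the casualty ceiling, the rules-of-engagement check); in the values-based path they live in the subjective weights that trade speed against cost against lives, which is precisely where granting autonomy becomes difficult.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class COA:
    name: str
    speed_hours: float    # time to execute
    materiel_cost: float  # relative cost
    est_casualties: int   # expected friendly losses
    within_roe: bool      # complies with rules of engagement

def rules_based_select(coas: List[COA]) -> Optional[COA]:
    # Rules-based: human value judgments were baked in when the rules
    # (ROE check, casualty ceiling) were written, so a human is
    # implicitly in the loop regardless of AI autonomy.
    legal = [c for c in coas if c.within_roe and c.est_casualties <= 10]
    return min(legal, key=lambda c: c.speed_hours) if legal else None

def values_based_select(coas: List[COA], w: dict) -> Optional[COA]:
    # Values-based: disparate, subjective value propositions are weighed
    # against each other; the weights themselves are the contested part.
    def score(c: COA) -> float:
        return (w["speed"] * -c.speed_hours
                + w["cost"] * -c.materiel_cost
                + w["lives"] * -c.est_casualties)
    return max((c for c in coas if c.within_roe), key=score, default=None)

coas = [COA("Deliberate attack", 48, 9.0, 8, True),
        COA("Hasty attack", 12, 4.0, 15, True),
        COA("Infiltration", 36, 2.0, 3, True)]
print(rules_based_select(coas).name)
print(values_based_select(coas, {"speed": 1.0, "cost": 0.5, "lives": 5.0}).name)
```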

Differentiation of COAs may require evaluations of disparate value propositions. Values such as speed of an action, materiel costs, loss of life, liberty, suffering, morale, risk, and numerous other values often need to be weighed when selecting a COA for a complex issue. These subjective values, not easily quantified or universally weighted, can present significant challenges in assessing the level of autonomy to grant to AI decision activities. As automation continues to encroach into the decision space, these subjective areas may offer the best opportunities for humans to continue to contribute.

The employment of values-based or rules-based decisions tends to vary according to the operational environment and the level of operation. Tactical applications often tend towards rules-based decisions, while operational and strategic applications tend towards values-based decisions. Clarifying doctrine, training, and policies on rules-based and values-based decisions could be an essential element of ensuring that autonomous decision making AI systems are effectively understood, trusted, and utilized.

Act
The last element of the OODA Loop is Act, the ability to manipulate the environment. For AI systems, this ability may take several forms. The first form may be indirect, where an AI system concludes its manipulation step by notifying an operator of its recommendations. The second form may be through direct manipulation, both in the cyber and the physical or “real world” domains.

Manipulation in the cyber domain may include the retrieval or dissemination of information, the performance of analysis, the execution of cyber warfare activities, or any number of other cyber activities. In the physical realm, AI systems can manipulate the environment through mechanized systems tied into an electronic system. These mechanized systems may be a direct extension of the AI system or may be separate systems operated remotely.

Within the OODA framework, once the decision has been made, the act is reflexive. For advanced AI systems, there is the potential for feedback to be provided and integrated as an action is taken. If the systems supporting the decision operate as expected, and events unfold as predicted, the importance of the degree of autonomy for the AI system (to act) may be trivial. However, if events unfold unexpectedly, the autonomy of an AI system to respond could be of great significance.

Consider a scenario where an observation point (OP) is being established. The decision to set up the OP was supported by many details, among them the path taken to set up the OP, the optimal location of the OP, the expected weather conditions, and the exact time the OP would be operational. Under a strict out-of-scope interpretation, if any of the real world details differed from those supporting the original decision, they would all be viewed as adjustments to the decision, and the decision would be voided. Under a less restrictive in-scope interpretation, if the details closely matched the expected conditions, they would be viewed as adjustments to the approved decision, and the decision would still be valid.

High autonomous AI systems could be allowed to make in-scope adjustments to the “act”. Allowing adjustments to the “act” would preclude a complete OODA cycle review. By avoiding this requirement — a new OODA cycle — an AI system might outperform low autonomous AI elements (and human oversight) and provide an advantage to the high autonomous system. Low autonomous AI systems following the out-of-scope perspective would be required to re-initiate a new decision cycle every time the real world did not exactly match expected conditions. While the extreme case may cause a perpetual initiation of OODA cycles, some adjustments could be made to the AI system to mitigate these concerns. The question remains: what level of change is significant enough to restart the OODA loop? Ultimately, designers of the system would need to consider how to resolve this issue.
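
Stated as code, the in-scope/out-of-scope question reduces to a threshold test: how far may reality deviate from the planning assumptions before the decision is void and a new OODA cycle must begin? A minimal sketch follows, using invented parameter names and tolerance values for the OP scenario above; under the strict out-of-scope interpretation every tolerance is effectively zero, which recreates the perpetual-initiation problem.

```python
def within_scope(planned: dict, actual: dict, tolerances: dict) -> bool:
    """True if deviations from planning assumptions stay inside the
    tolerances delegated to the AI (an 'in-scope' adjustment); False
    forces a new OODA cycle ('out-of-scope'). All values illustrative."""
    return all(abs(actual[k] - planned[k]) <= tolerances[k] for k in tolerances)

# Planning assumptions for establishing the observation post (OP):
planned = {"setup_time_hrs": 2.0, "position_offset_m": 0.0, "visibility_km": 5.0}
tolerances = {"setup_time_hrs": 1.0, "position_offset_m": 200.0, "visibility_km": 2.0}

# What the real world actually delivered:
actual = {"setup_time_hrs": 2.5, "position_offset_m": 150.0, "visibility_km": 4.0}

if within_scope(planned, actual, tolerances):
    print("In scope: adjust and continue the act")     # high-autonomy path
else:
    print("Out of scope: re-initiate the OODA cycle")  # strict interpretation
```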

This is not a comprehensive examination of autonomous AI systems performing the act step of the OODA loop, yet in the areas of doctrine, training, and leadership, one issue merits quick discussion. Humans often employ assumptions when assigning/performing an action. There is a natural assumption that real world conditions will differ from those used in the planning and authorization process. When those differences appear large, a decision is re-evaluated. When the differences appear small, a new decision is not sought, and some risk is accepted. The amount of risk is often intuitively assessed, and depending on personal preferences, the action continues or is stopped. Due to the more literal nature of computational systems, autonomous systems may not have the ability to assess and accept “personal” risks. Military doctrine addressing command and leadership philosophies, i.e., Mission Command and decentralized operations, should be reviewed and updated, as necessary, to determine their applicability to operations in the information age.9

The integration of future AI systems has the potential to permeate the entirety of military operations, from acquisition philosophies to human-AI team collaborations. This will require the development of clear categories of AI systems and applications, aligned along axes of trust, with rules-based and values-based decision processes clearly demarcated. Because machines abide by literal interpretations of policy, rules, and guidance, a review of their development should be performed to minimize unforeseen consequences.

If you enjoyed this post, please review Dr. Mancillas’ complete report here;

… see the following MadSci blog posts:

… read the Crowdsourcing the Future of the AI Battlefield information paper;

… and peruse the Final Report from the Mad Scientist Robotics, Artificial Intelligence & Autonomy Conference, facilitated at Georgia Tech Research Institute (GTRI), 7-8 March 2017.

Dr. Mancillas received a PhD in Quantum Physics from the University of Tennessee. He has extensive experience performing numerical modeling of complex engineered systems. Prior to working for the U.S. Army, Dr. Mancillas worked for the Center for Nuclear Waste Regulatory Analyses, an FFRDC established by the Nuclear Regulatory Commission to examine the deep future of nuclear materials and their storage.

Disclaimer:  The views expressed herein are those of the author(s) and do not necessarily reflect the official policy or position of the U.S. Army Training and Doctrine Command (TRADOC), Army Futures Command (AFC), Department of the Army, Department of Defense, or the U.S. Government.


1 Roger N. McDermott, Russian Perspectives on Network-Centric Warfare: The Key Aim of Serdyukov’s Reform (Fort Leavenworth, KS: Foreign Military Studies Office (Army), 2011).

2 Ang Yang, Hussein Abbass, and Ruhul Sarker, “Evolving Agents for Network Centric Warfare,” Proceedings of the 7th Annual Workshop on Genetic and Evolutionary Computation, 2005, 193-195.

3 Decision space is the range of options that military leaders explore in response to adversarial activities. Competitiveness in the decision space is based on abilities to develop more options, more effective options and to develop and execute them more quickly. Numerous approaches to managing decision space exist. NCW is an approach that emphasizes information rich communications and a high degree of decentralized decisions to generate options and “self synchronized” activities.

4 Yang, Abbass, and Sarker, “Evolving Agents for Network Centric Warfare,” 193-195.

5 Alessandro Zocco, and Lucio Tommaso De Paolis. “Augmented Command and Control Table to Support Network-centric Operations,” Defence Science Journal 65, no. 1 (2015): 39-45.

6 The IBM Corporation, specifically IBM Watson Analytics, has been employing “cognitive analytics” and natural language dialogue to perform “big data” analyses. IBM Watson Analytics has been employed in the medical, financial and insurance fields to perform human level analytics. These activities include reading medical journals to develop medical diagnosis and treatment plans; performing actuary reviews for insurance claims; and recommending financial customer engagement and personalized investment strategies.

7 Smith and Anderson. “AI, Robotics, and the Future of Jobs”; Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF, (accessed online 5 March 2017).

8 Smith and Anderson. “AI, Robotics, and the Future of Jobs”; Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artificial-Intelligence-Automation-Economy.PDF, (accessed online 5 March 2017).

9 Jim Storr, “A Command Philosophy for the Information Age: The Continuing Relevance of Mission Command,” Defence Studies 3, no. 3 (2003): 119-129.

192. New Skills Required to Compete & Win in the Future Operational Environment

[Editor’s Note: The U.S. Army Training and Doctrine Command (TRADOC) recruits, trains, educates, develops, and builds the Army, driving constant improvement and change to ensure that the Army can successfully compete and deter, fight, and decisively win on any battlefield. The pace of change, however, is accelerating with the convergence of new and emergent technologies that are driving the changing character of warfare in the future Operational Environment (OE).  Preparing to compete and win in this future OE is one of the toughest challenges facing the Army. TRADOC must identify the requisite new Knowledge, Skills, and Behaviors (KSBs) that our Soldiers and leaders will need to compete and win, and then program and implement the associated policy changes, improvements to training facilities, development of leader programs, and the integration of required equipment into the Multi-Domain force.]

The future OE will compel a change in the character of warfare driven by the diffusion of power, economic disparity, and the democratization and convergence of technology. There are no longer defined transitions from peace to war, or from competition to conflict. “Steady State” now consists of continuous, dynamic, and simultaneous competition and conflict that is not necessarily cyclical. Russia and China, our near-peer competitors, confront us globally, converging capabilities with hybrid strategies to expand the battlefield across all domains and create hemispheric threats challenging us from home stations to the Close Area. They seek to achieve national objectives through competition short of conflict and synthesize emerging technologies with military doctrine and operations to deploy capabilities that create multiple layers of multi-domain stand-off. Additionally, regional competitors and non-state actors such as Iran, North Korea, and regional and transnational terrorist organizations will effectively compete and fight in similar ways shaped to their strategic situations, but with lesser scope and scale in terms of capabilities.

The convergence and availability of cutting-edge technologies will act as enablers and force multipliers for our adversaries. Artificial intelligence (AI), quantum information sciences, and the Internet of Things will flatten decision making structures and increase speed on the battlefield, while weaponized information will empower potential foes, enabling them to achieve effects at a fraction of the cost of conventional weapons, without risking armed conflict. Space will become a contested domain, as our adversaries will enhance their ability to operate in that domain while working to deny us what was once a key area of advantage.

Preparing for this new era is one of the toughest challenges the Army will face in the next 25 years. A key component of this preparation is identifying the skills and attributes required for the Soldiers and Leaders operating in our multi-domain formations.

The U.S. Army currently has more than 150 Military Occupational Specialties (MOSs), each requiring a Soldier to learn unique tasks, skills, and knowledge. The emergence of a number of new technologies – drones, AI autonomy, immersive mixed reality, big data storage and analytics, etc. – coupled with the changing character of warfare means that many of these MOSs will need to change, while new ones will need to be created. This already has been seen in the wider U.S. and global economy, where the growth of internet services, smartphones, social media, and cloud technology over the last ten years has introduced a host of new occupations that previously did not exist.

Acquiring and developing the talent pool and skills for a new MOS requires policy changes, improvements to training facilities, development of leader programs, and the integration of required equipment into current and planned formations. The Army’s recent experience building a cyber MOS offers many lessons learned. The Army changed policies to allow direct entry into the force, developed cyber training infrastructure at Fort Gordon, incorporated cyber operations into live training exercises at home station and the Combat Training Centers, built the Army Cyber Institute at West Point, and developed concepts and equipment baselines for cyber protection teams. This effort required action from the Department of the Army and each of the subordinate Army commands. Identifying, programming, and implementing new knowledge, skills, and attributes is a multi-year effort that requires synchronizing the delivery of Soldiers possessing the requisite skills with the fielding of a Multi-Domain Operations (MDO)-capable force in 2028 and the MDO-ready force in 2035.

The Army’s MDO concept offers a clear glimpse of the types of new skills that will be required to win on the future battlefield. A force with all warfighting functions enabled by big data and AI will require Soldiers with data science expertise and some basic coding experience to improve AI integration, maintain proper transparency, and manage bias in support of leader decision-making. The Internet of Battlefield Things connecting Soldiers and systems will require Soldiers with technical integration skills and cyber security experience. The increased numbers of air and land robots and associated additive manufacturing systems to support production and maintenance mean a new series of maintenance skills now found only in manufacturing centers, Amazon warehouses, and universities. There are many more emerging skill requirements. Not all of these will require a new MOS, but in some cases, the introduction of new skill identifiers and functional areas may be required.

Some of the needed skills may be inherent within the next generation(s) of recruits. Many of the games, drones, and other everyday technologies that already are, or soon will be, very common – narrow AI, app development and general programming, and smart devices – will yield a variety of intrinsic skills that recruits will have prior to entering the Army. Just as we no longer train Soldiers on how to use a computer, games like Fortnite©, with no formal relationship to the military, will provide players with militarily-useful skills such as communications, problem solving, and creative thinking, all while attempting to survive against persistent attack. Due to these trends, recruits may come into the Army with fundamental technical skills and baseline military thinking attributes that flatten the learning curve for Initial Entry Training (IET).

While these new recruits may have a set of some required skills, there will still be a premium placed on premier skillsets in fields such as AI and machine learning, robotics, big data management, and quantum information sciences. Due to the high demand for these skillsets, the Army will have to compete for talent with private industry, battling them on compensation, benefits, perks, and a less restrictive work environment. In light of this, the Army may have to consider adjusting or relaxing its current recruitment processes, business practices, and force structuring to ensure it is able to attract and retain expertise. It also may have to reconsider how it adapts and utilizes its civilian workforce to undertake these types of tasks in new and creative ways.

If you enjoyed reading this, please see the following MadSci blog posts:

… and the Mad Scientist Learning in 2050 Conference Final Report.

190. Weaponized Information: One Possible Vignette

[Editor’s Note:  The Information Environment (IE) is the point of departure for all events across the Multi-Domain Operations (MDO) spectrum. It’s a unique space that demands our understanding, as the Internet of Things (IoT) and hyper-connectivity have democratized accessibility, extended global reach, and amplified the effects of weaponized information. Our strategic competitors and adversaries have been quick to grasp and employ it to challenge our traditional advantages and exploit our weaknesses.

    • Our near-peers confront us globally, converging IE capabilities with hybrid strategies to expand the battlefield across all domains and create hemispheric threats challenging us from home station installations (i.e., the Strategic Support Area) to the Close Area fight.
    • Democratization of weaponized information empowers regional hegemons and non-state actors, enabling them to target the U.S. and our allies and achieve effects at a fraction of the cost of conventional weapons, without risking armed conflict.
    • The IE enables our adversaries to frame the conditions of future competition and/or escalation to armed conflict on their own terms.

Today’s post imagines one such vignette, with Russia exploiting the IE to successfully out-compete us and accomplish their political objectives, without expending a single bullet!]

Ethnic Russian minorities’ agitation against their respective governments in Estonia, Lithuania, and Latvia spikes. Simultaneously, the Russian Government ratchets up tensions, with inflammatory statements of support for these ethnic Russian minorities in the Baltic States; coordinated movements and exercises by Russian ground, naval, and air forces adjacent to the region; and clandestine support to ethnic Russians in these States. The Russian Government starts a covert campaign to shape people’s views about the threats against the Russian diaspora. More than 200,000 Twitter accounts send 3.6 million tweets trending #protectRussianseverywhere. This sprawling Russian disinformation campaign is focused on building internal support for the Russian President and a possible military action. The U.S. and NATO respond…

The 2nd Cav Regt is placed on alert; as it prepares to roll out of garrison for Poland, several videos surface across social media, purportedly showing the sexual assault of several underage German nationals by U.S. personnel. These disturbingly graphic deepfakes appear to implicate key Leaders within the Regiment. German political and legal authorities call for an investigation and host nation protests erupt outside the gates of Rose Barracks, Vilseck, disrupting the unit’s deployment.

Simultaneously, in units comprising the initial Force Package earmarked to deploy to Europe, key personnel (and their dependents) are targeted, distracting troops from their deployment preparations and disrupting unit cohesion:

    • Social media accounts are hacked/hijacked, with false threats by dependents to execute mass/school shootings, accusations of sexual abuse, hate speech posts by Leaders about their minority troops, and revelations of adulterous affairs between unit spouses.
    • Bank accounts are hacked: some are credited with excessive amounts of cash followed by faux “See Something, Say Something” hotline accusations being made about criminal and espionage activities; while others are zeroed out, disrupting families’ abilities to pay bills.

Russia’s GRU (Military Intelligence) employs AI Generative Adversarial Networks (GANs) to create fake persona injects that mimic select U.S. Active Army, ARNG, and USAR commanders making disparaging statements about their confidence in our allies’ forces, the legitimacy of the mission, and their faith in our political leadership. Sowing these injects across unit social media accounts, Russian Information Warfare specialists seed doubt and erode trust in the chain of command amongst a percentage of susceptible Soldiers, creating further friction in deployment preparations.

As these units load at railheads or begin their road march towards their respective ports of embarkation, Supervisory Control and Data Acquisition (SCADA) attacks are launched on critical rail, road, port, and airfield infrastructures, snarling rail lines, switching yards, and crossings; creating bottlenecks at key traffic intersections; and spoofing navigation systems to cause sealift asset collisions and groundings at key maritime chokepoints. The fly-by-wire avionics are hacked on a departing C-17, causing a crash with the loss of all 134 Soldiers onboard. All C-17s are grounded, pending an investigation.

Salvos of personalized, “direct inject” psychological warfare attacks are launched against Soldiers via immersive media (Augmented, Virtual, and Mixed Reality; 360° Video/Gaming), targeting them while they await deployment and are in-transit to Theater. Similarly, attacks are vectored at spouses, parents, and dependents, with horrifying imagery of their loved ones’ torn and maimed bodies on Artificial Intelligence-generated battlefields (based on scraped facial imagery from social media accounts).

Multi-Domain Operations has improved Jointness but exacerbated problems with “the communications requirements that constitute the nation’s warfighting Achilles heel.” As units arrive in Theater, adversaries exploit the seams within and between the inter-connected and federated U.S. and NATO tactical networks (Intelligence, Surveillance, and Reconnaissance; Fires; Sustainment; and Command and Control) that facilitate partner-to-partner data exchanges, inserting specifically targeted false injects and sowing doubt and distrust across the alliance in the Multi-Domain Common Operating Picture. Spoofing of these systems leads to accidental air defense engagements, resulting in Blue-on-Blue fratricide or the downing of a commercial airliner, with additional civilian deaths on the ground from spent ordnance, providing more opportunities for Russian Information Operations to spread acrimony within the alliance and create dissent in public opinion back home.

With the flow of U.S. forces into the Baltic Nations, real instances of ethnic Russians’ livelihoods being disrupted (e.g., accidental destruction of livestock and crops, the choking off of main routes to market, and damage to essential services [water, electricity, sewerage]) by maneuver units on exercise are captured on video and enhanced digitally to exacerbate their cumulative effects. Proliferated across the net via bots, these instances further stoke anti-Baltic / anti-U.S. opinion amongst Russian-sympathetic and non-aligned populations alike.

Following years of scraping global social media accounts and building profiles across the full political spectrum, artificial influencers are unleashed on-line that effectively target each of these profiles within the U.S. and allied civilian populations. Ostensibly engaging populations via key “knee-jerk” on-line affinities (e.g., pro-gun, pro-choice, etc.), these artificial influencers, ever so subtly, begin to shift public opinion to embrace a sympathetic position on the rights of the Russian diaspora to greater autonomy in the Baltic States.

The release of deepfake videos showing Baltic security forces massacring ethnic Russians creates further division and causes some NATO partners to hesitate, question, and withhold their support, as required under Article 5. The alliance is rent asunder — Checkmate!

Many of the capabilities described in this vignette are available now. Threats in the IE space will only increase in verisimilitude with augmented reality and multisensory content interaction. Envisioning what this Bot 2.0 Competition will look like is essential to building whole-of-government countermeasures and instilling resiliency in our population and military formations.

The Mad Scientist Initiative will continue to explore the significance of the IE to Competition and Conflict and information weaponization throughout our FY20 events — stay tuned to the MadSci Laboratory for more information. In anticipation of this, we have published The Information Environment:  Competition and Conflict anthology, a collection of previously published blog posts that serves as a primer on this topic and examines the convergence of technologies that facilitates information weaponization — Enjoy!

183. Ethics, Morals, and Legal Implications

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report and addresses how the speed of technological innovation and convergence continues to outpace human governance. The U.S. Army must not only consider how best to employ these advances in modernizing the force, but also the concomitant ethical, moral, and legal implications their use may present in the Operational Environment (see links to the newly published TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, and the complete Mad Scientist Disruption and the Operational Environment Conference Final Report at the bottom of this post).]

Technological advancement and subsequent employment often outpace moral, ethical, and legal standards. Governmental and regulatory bodies are then caught between technological progress and the evolution of social thinking. The Disruption and the Operational Environment Conference uncovered and explored several tension points that may challenge the Army in the future.

Space

Cubesats in LEO / Source: NASA

Space is one of the least explored domains in which the Army will operate; as such, we may encounter a host of associated ethical and legal dilemmas. In the course of warfare, if the Army or an adversary intentionally or inadvertently destroys commercial space infrastructure – communications or GPS satellites – the ramifications to the economy, transportation, and emergency services would be dire and deadly. The Army will be challenged to consider how and where National Defense measures in space affect non-combatants and American civilians on the ground.

Per proclaimed Mad Scientists Dr. Moriba Jah and Dr. Diane Howard, there are ~500,000 objects orbiting the Earth posing potential hazards to our space-based services. We are currently able to track less than one percent of them — only those the size of a smartphone / softball or larger. / Source: NASA Orbital Debris Office

International governing bodies may have to consider what responsibility space-faring entities – countries, universities, private companies – will have for mitigating orbital congestion caused by excessive launching and the aggressive exploitation of space. If the Army is judicious with its own footprint in space, it could reduce the risk of accidental collisions and unnecessary clutter and congestion. It is extremely expensive to clean up space debris, and deconflicting active operations is essential. With each entity acting in its own self-interest, with limited binding law or governance and no enforcement, overuse of space could lead to a “tragedy of the commons” effect.1 The Army has the opportunity to more closely align itself with international partners to develop guidelines and protocols for space operations to avoid potential conflicts and to influence and shape future policy. Without this early intervention, the Army may face ethical and moral challenges in the future regarding its addition of orbital objects to an already dangerously cluttered Low Earth Orbit. What will the Army be responsible for in democratized space? Will there be a moral or ethical limit on space launches?

Autonomy in Robotics

AFC’s Future Force Modernization Enterprise of Cross-Functional Teams, Acquisition Programs of Record, and Research and Development centers executed a radio rodeo with Industry throughout June 2019 to inform the Army of the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments. / Source: Army.mil

Robotics has been pervasive and normalized in military operations in the post-9/11 Operational Environment. However, the burgeoning field of autonomy in robotics, with the potential to supplant humans in time-critical decision-making, will bring about significant ethical, moral, and legal challenges that the Army and larger DoD are currently facing. This issue will be exacerbated in the Operational Environment by an increased utilization of, and reliance on, autonomy.

The increasing prevalence of autonomy will raise a number of important questions. At what point is it more ethical to allow a machine to make a decision that may save lives of either combatants or civilians? Where does fault, responsibility, or attribution lie when an autonomous system takes lives? Will defensive autonomous operations – air defense systems, active protection systems – be more ethically acceptable than offensive – airstrikes, fire missions – autonomy? Can Artificial Intelligence/Machine Learning (AI/ML) make decisions in line with Army core values?

Deepfakes and AI-Generated Identities, Personas, and Content

Source: U.S. Air Force

A new era of Information Operations (IO) is emerging due to disruptive technologies such as deepfakes – videos that are constructed to make a person appear to say or do something that they never said or did – and AI Generative Adversarial Networks (GANs) that produce fully original faces, bodies, personas, and robust identities.2  Deepfakes and GANs are alarming to national security experts as they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. This is amplified by content such as news, sports, and creative writing similarly being generated by AI/ML applications.

This new era of IO has many ethical and moral implications for the Army. In the past, the Army has utilized industrial and early information age IO tools such as leaflets, open-air messaging, and cyber influence mechanisms to shape perceptions around the world. Today and moving forward in the Operational Environment, advances in technology create ethical questions such as: is it ethical or legal to use cyber or digital manipulations against populations of both U.S. allies and strategic competitors? Under what title or authority does the use of deepfakes and AI-generated images fall? How will the Army need to supplement existing policy to include technologies that didn’t exist when it was written?

AI in Formations

With the introduction of decision-making AI, the Army will be faced with questions about trust, man-machine relationships, and transparency. Does AI in cyber require the same moral benchmark as lethal decision-making? Does transparency equal ethical AI? What allowance for error in AI is acceptable compared to humans? Where does the Army allow AI to make decisions – only in non-combat or non-lethal situations?

Commanders, stakeholders, and decision-makers will need to gain a level of comfort and trust with AI entities exemplifying a true man-machine relationship. The full integration of AI into training and combat exercises provides an opportunity to build trust early in the process before decision-making becomes critical and life-threatening. AI often includes unintentional or implicit bias in its programming. Is bias-free AI possible? How can bias be checked within the programming? How can bias be managed once it is discovered and how much will be allowed? Finally, does the bias-checking software contain bias? Bias can also be used in a positive way. Through ML – using data from previous exercises, missions, doctrine, and the law of war – the Army could inculcate core values, ethos, and historically successful decision-making into AI.

If existential threats to the United States increase, so will pressure to use artificial and autonomous systems to gain or maintain overmatch and domain superiority. As the Army explores shifting additional authority to AI and autonomous systems, how will it address the second and third order ethical and legal ramifications? How does the Army reconcile its traditional values and ethical norms with disruptive technology that rapidly evolves?

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.
    • Ethics and the Future of War panel, facilitated by LTG Dubik (USA-Ret.) at the Mad Scientist Visualizing Multi Domain Battle 2030-2050 Conference, hosted at Georgetown University on 25-26 July 2017.

Just Published! TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, 7 October 2019, describes the conditions Army forces will face and establishes two distinct timeframes: a near term in which adversaries may hold certain advantages, and a far term in which breakthroughs in technology and convergences in capabilities will change the character of warfare. The pamphlet describes both timeframes in detail, accounting for all aspects across the Diplomatic, Information, Military, and Economic (DIME) spheres, so that Army forces can train against an accurate and realistic Operational Environment.


1 Munoz-Patchen, Chelsea, “Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty,” Chicago Journal of International Law, Vol. 19, No. 1, Art. 7, 1 Aug. 2018. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1741&context=cjil

2 Robitzski, Dan, “Amazing AI Generates Entire Bodies of People Who Don’t Exist,” Futurism.com, 30 Apr. 2019. https://futurism.com/ai-generates-entire-bodies-people-dont-exist

182. “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare (Part 2)

[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). We continue our series of “Tenth Man” posts examining the foundational assumptions of The Operational Environment and the Changing Character of Future Warfare, challenging them, reviewing the associated implications, and identifying potential signals and/or indicators of change. Enjoy!]

Assumption:  The character of warfare will change but the nature of war will remain human-centric.

The character of warfare will change in the future OE as it inexorably has since the advent of flint hand axes; iron blades; stirrups; longbows; gunpowder; breech loading, rifled, and automatic guns; mechanized armor; precision-guided munitions; and the Internet of Things. Speed, automation, extended ranges, broad and narrow weapons effects, and increasingly integrated multi-domain conduct, in addition to the complexity of the terrain and social structures in which warfare occurs, will make mid-Twenty-first Century warfare both familiar and utterly alien.

The nature of warfare, however, is assumed to remain human-centric in the future. While humans will increasingly be removed from processes, cycles, and perhaps even decision-making, nearly all content regarding the future OE assumes that humans will remain central to the rationale for war and its most essential elements of execution. The nature of war has remained relatively constant from Thucydides through Clausewitz, and forward to the present. War is still waged because of fear, honor, and interest, and remains an expression of politics by other means. While machines are becoming ever more prevalent across the battlefield – C5ISR, maneuver, and logistics – we cling to the belief that parties will still go to war over human interests; that war will be decided, executed, and controlled by humans.

Implications:  If these assumptions prove false, then the Army’s fundamental understanding of war in the future may be inherently flawed, calling into question established strategies, force structuring, and decision-making models. A changed or changing nature of war brings about a number of implications:

– Humans may not be aware of the outset of war. As algorithmic warfare evolves, might wars be fought unintentionally, with humans not recognizing what has occurred until effects are felt?

– Wars may be fought due to AI-calculated opportunities or threats – economic, political, or even ideological – that are largely imperceptible to human judgment. Imagine that a machine recognizes a strategic opportunity or impetus to engage a nation-state actor that is conventionally (read: humanly) viewed as weak or presumed to be at a disadvantage. The machine launches offensive operations to achieve a favorable outcome or objective that it deemed too advantageous to pass up.

– Infliction of human loss, suffering, and disruption to induce coercion and influence may not be conducive to victory. Victory may simply be a calculated or algorithmic outcome that convinces an adversary’s machine that its own victory is unattainable.

– The actor (nation-state or otherwise) with the most robust kairosthenic power and/or most talented humans may not achieve victory. Even powers enjoying the greatest materiel advantages could see this once-reliable measure of dominion mitigated. Winning may be achieved by the actor with the best algorithms or machines.

These implications in turn raise several questions for the Army:

– How much human talent should the Army recruit, and how should it be cultivated, if war is no longer human-centric?

– How should forces be structured – what is the “right” mix of humans to machines if war is no longer human-centric?

– Will current ethical considerations in kinetic operations be weighed more or less heavily if humans are further removed from the equation? And what even constitutes kinetic operations in such a future?

– Should the U.S. military divest from platforms and materiel solutions (hardware) and re-focus on becoming algorithmically and digitally-centric (software)?

– What is the role for the armed forces in such a world? Will competition and armed conflict increasingly fall within the sphere of cyber forces in the Departments of the Treasury, State, and other non-DoD organizations?

– Will warfare become the default condition if fewer humans get hurt?

– Could an adversary (human or machine) trick us (or our machines) to miscalculate our response?

Signposts / Indicators of Change:

– Proliferation of AI use in the OE, with less and less human involvement in autonomous or semi-autonomous systems’ critical functions and decision-making, and the development of human-out-of-the-loop systems.

– Technology advances to the point of near or actual machine sentience, with commensurate machine speed accelerating the potential for escalated competition and armed conflict beyond transparency and human comprehension.

– Nation-state governments approve the use of lethal autonomy, and this capability is democratized to non-state actors.

– Cyber operations have the same political and economic effects as traditional kinetic warfare, reducing or eliminating the need for physical combat.

– Smaller, less-capable states or actors begin achieving surprising or unexpected victories in warfare.

– Kinetic war becomes less lethal as robots replace human tasks.

– Other departments or agencies stand up quasi-military capabilities, have more active military-liaison organizations, or begin actively engaging in competition and conflict.

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.

… as well as our previous “Tenth Man” blog posts:

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

175. “I Know the Sound it Makes When It Lies”: AI-Powered Tech to Improve Engagement in the Human Domain

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s post by guest bloggers LTC Arnel P. David, LTC (Ret) Patrick James Christian, PhD, and Dr. Aleksandra Nesic, who use storytelling to illustrate how the convergence of Artificial Intelligence (AI), cloud computing, big data, augmented and enhanced reality, and deception detection algorithms could complement decision-making in future specialized engagements.  Enjoy this first in a series of three posts exploring how game changing tech will enhance operations in the Human Domain!]

RAF A400 Atlas / Source:  Flickr, UK MoD, by Andrew Linnett

It is 2028. Lt Col Archie Burton steps off the British A400M Atlas plane onto the hardpan desert runway of Banku Airfield, Nigeria. This is his third visit to Nigeria, but this time he is the commander of the Engagement Operations Group – Bravo (EOG-B). This group of bespoke, specialized capabilities is the British Army’s agile and highly-trained force for specialized engagement. It operates amongst the people and builds indigenous mass with host nation security forces. Members of this outfit operate in civilian clothes and speak multiple languages, with academic degrees ranging from anthropology to computational science.

Source:  Flickr, Com Salud

Archie dons his Viz glasses on the drive to a meeting with local leadership of the town of Banku. Speaking to his AI assistant, “Jarvis,” Archie cycles through past engagement data to prep for the meeting and learn the latest about the local town and its leaders. Jarvis is connected to a cloud-computing environment, referred to as “HDM” for “Human Domain Matrix,” where scientifically collected and curated population data is stored, maintained, and integrated with a host of applications to support operations in the human domain in both training and deployed settings.

Several private organizations that utilize integrated interdisciplinary social science have helped NATO, the U.K. MoD, and the U.S. DoD develop CGI-enabled virtual reality experiences to accelerate learning for operators who work in challenging conflict settings laden with complex psycho-social and emotional dynamics that drive the behaviour and interactions of the populations on the ground. Together with NGOs and civil society groups, they collected ethnographic data and combined it with phenomenological qualitative inquiry using psychology and sociology to curate anthropological stories that reflect specific cultural audiences.

EOG-Bravo’s mission letter from Field Army Headquarters states that they must leverage the extensive and complex human network dynamic to aid in the recovery of 11 females kidnapped by the Islamic Revolutionary Brotherhood (IRB) terrorist group. Two of the females are British citizens who were supporting a humanitarian mission with the ‘Save the Kids’ NGO prior to being abducted.

At the meeting in Banku, the mayor, police chief, and a representative from Save the Kids are present. Archie is welcomed with handshakes and hugs by the police chief, a former student at Sandhurst who knows Archie from past deployments. The discussion leaps immediately into the kidnapping situation.

“The girls were last seen transiting a jungle area north of Oyero. Our organization is in contact by email with one of the IRB facilitators. He is asking for £2 million, and we are ready to make that payment,” says Simon Moore of Save the Kids.

Archie’s Viz glasses scan the facial expressions of those present, and Jarvis cautions him regarding the behaviour of the police chief, whose micro facial expressions and eyes reveal a biological response of excitement at the mention of the £2M.

Archie asks, “Chief Adesola, what do you think? Should we facilitate payment?”

“Hmmm, I’m not sure. We don’t know what the IRB will do. We should definitely consider it, though,” says Police Chief Adesola.

The Viz glasses continue to feed the facial expressions into HDM, where the recurrent AI neural network recognition algorithm, HOMINID-AI, detects a lie. The AI system and human analysts at the Land Information Manoeuvre Centre (LIMOC) back in the U.K. estimate with a high level of confidence that Chief Adesola is lying.

At the LIMOC, a 24-hour operation under 77th Brigade, Sgt Richards determines that the police chief warrants surveillance by EOG-Alpha, Archie’s sister battlegroup. EOG-Alpha informs local teams in Lagos to deploy unmanned ground sensors and collection assets to monitor the police chief.

Small teams of 3-4 soldiers depart from Lagos in the middle of the night to link up with host nation counterparts. Together, the team of operators and Nigerian national-level security forces deploy sensors to monitor the police chief’s movements and conversations around his office and home.

The next morning, Chief Adesola is picked up by a sensor meeting with an unknown associate. The sensor scans this associate, and the LIMOC processes an immediate hit: he is a leader of the IRB, number three in their chain of command. EOG-A’s operational element is alerted and ordered to work with local security forces to detain this terrorist leader. Intelligence collected from him and the Chief will hopefully lead them to the missing females…
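
HOMINID-AI and the LIMOC pipeline are fiction, but the architecture the story imagines (per-frame facial features streamed into a recurrent network that emits a deception score) can be sketched in a few lines. Everything below, from the feature count to the network size, is an invented illustration, and automated deception detection from micro-expressions remains scientifically contested.

```python
# A hedged sketch of the story's imagined lie detector: a GRU reads a
# sequence of per-frame facial features and emits a deception probability.
# Feature count, architecture, and all names here are invented for
# illustration; "HOMINID-AI" is fictional.
import torch
import torch.nn as nn

N_FEATURES = 17  # e.g., one intensity value per tracked facial action unit

class DeceptionScorer(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, N_FEATURES) -- one feature vector per frame
        _, last_hidden = self.rnn(frames)
        return torch.sigmoid(self.head(last_hidden[-1]))  # P(deceptive)

model = DeceptionScorer()
clip = torch.randn(1, 120, N_FEATURES)  # ~4 seconds of video at 30 fps
print(f"deception score: {model(clip).item():.2f}")  # untrained: ~0.5, noise
```

Even a well-trained model of this kind outputs a probability, not ground truth; the story’s pairing of the algorithm with human analysts at the LIMOC reflects exactly that limitation.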

If you enjoyed this post, stay tuned for Part 2 on the Human Domain Matrix, Part 3 on Emotional Warfare in Yemen, and check out the following links to other works by today’s blog post authors:

Operationalizing the Science of the Human Domain by Aleks Nesic and Arnel P. David

A Psycho-Emotional Human Security Analytical Framework by Patrick J. Christian, Aleksandra Nesic, David Sniffen, Tasneem Aljehani, Khaled Al Sumairi, Narayan B. Khadka, Basimah Hallawy, and Binamin Konlan

Military Strategy in the 21st Century:  People, Connectivity, and Competition by Charles T. Cleveland, Benjamin Jensen, Susan Bryant, and Arnel P. David

… and see the following MadSci Lab blog posts on how AI can augment our Leaders’ decision-making on the battlefield:

Takeaways Learned about the Future of the AI Battlefield

The Guy Behind the Guy: AI as the Indispensable Marshal, by Mr. Brady Moore and Mr. Chris Sauceda

LTC Arnel P. David is an Army Strategist serving in the United Kingdom as the U.S. Special Assistant for the Chief of the General Staff. He recently completed an Artificial Intelligence Program from the Saïd Business School at the University of Oxford.

LTC (Ret) Patrick James Christian, PhD is co-founder of Valka-Mir and a Psychoanalytical Anthropologist focused on the psychopathology of violent ethnic and cultural conflict. He is a retired Special Forces officer serving as a social scientist for the Psychological Operations Task Forces in the Arabian Peninsula and Afghanistan, where he constructs psychological profiles of designated target audiences.

Aleksandra Nesic, PhD is co-founder and Senior Researcher of Complex Communal Conflicts at Valka-Mir Human Security, LLC, and Visiting Faculty for the Countering Violent Extremism and Countering Terrorism Fellowship Program at the Joint Special Operations University (JSOU), USSOCOM. She is also Visiting Faculty at the U.S. Army JFK Special Warfare Center and School.

Acknowledgements:  Special Thanks to the British Army Future Force Development Team for their help in creating the British characters depicted in this first story.

Disclaimer:  The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).


138. “The Monolith”

The Monolith set from the dawn of man sequence, 2001: A Space Odyssey, Metro-Goldwyn-Mayer (1968) / Source: Wikimedia Commons

[Editor’s Note: Mad Scientist Laboratory is pleased to introduce a new, quarterly feature, entitled “The Monolith.” Arthur C. Clarke and Stanley Kubrick fans alike will recognize and appreciate our allusion to the alien artifact responsible for “uplifting” mankind from primitive, defenseless hominids into tool-using killers — destined for the stars — from their respective short story, “The Sentinel,” and movie, “2001: A Space Odyssey.” We hope that you will similarly benefit from this post (although perhaps in not quite so evolutionary a manner!), reflecting the Mad Scientist Team’s collective book and movie recommendations — Enjoy!]

Originally published by PublicAffairs on 5 October 2017

The Future of War by Sir Lawrence Freedman. The evolution of warfare has taken some turns that were quite unexpected and were heavily influenced by disruptive technologies of the day. Sir Lawrence examines the changing character of warfare over the last several centuries, how it has been influenced by society and technology, the ways in which science fiction got it wrong and right, and how it might take shape in the future. This overarching look at warfare causes one to pause and consider whether we may be asking the right questions about future warfare.

Royal Scots Guardsmen engaging the enemy with a Lewis Machine Gun / Source:  Flickr

They Shall Not Grow Old directed by Sir Peter Jackson. This lauded 2018 documentary utilizes original film footage from World War I (much of it unseen for the past century) that has been digitized, colorized, upscaled, and overlaid with audio recordings from British servicemen who fought in the war. The divide between civilians untouched by the war and service members, the destructive impact of new disruptive technologies, and the change they wrought on the character of war resonate to this day and provide an excellent historical analogy from which to explore future warfare.

Gene Simmons plays a nefarious super empowered individual in Runaway

Runaway directed by Michael Crichton. This film, released in 1984, is set in the near future, where a police officer (Tom Selleck) and his partner (Cynthia Rhodes) specialize in neutralizing malfunctioning robots. A rogue killer robot – programmed to kill by the bad guy (Gene Simmons) – goes on a homicidal rampage. The savvy officers soon uncover a wider, nefarious plan to proliferate killer robots. This offbeat Sci-Fi thriller illustrates how dual-use technologies in the hands of super-empowered individuals could be employed innovatively in the Future Operational Environment. Personalized warfare is also featured, as a software developer’s family is targeted by the ‘bad guy,’ using a corrupted version of the very software he helped create. This movie illustrates the potential for everyday commercial products to be adapted maliciously by adversaries who, unconstrained ethically, can out-innovate us with convergent, game changing technologies (robotics, CRISPR, etc.).

Originally published by Macmillan on 1 May 2018

The Military Science of Star Wars by George Beahm. Storytelling is a powerful tool used to visualize the future, and Science Fiction often offers the best trove of ideas. Beahm dissects and analyzes the entirety of the Star Wars Universe to mine for information that reflects the real world and the future of armed conflict. He tackles the personnel, weapons, technology, tactics, strategy, resources, and lessons learned from key battles and authoritatively links them to past, current, and future Army challenges. Beahm proves that storytelling, and even fantasy (Star Wars is more a fantasy story than a Science Fiction story), can teach us about the real world and help evolve our thinking to confront problems in new and novel ways. He connects the story to the past, present, and future Army and asks important questions, like “What makes Han Solo a great military Leader?”, “How can a military use robots (Droids) effectively?”, and most importantly, “What, in the universe, qualified Jar Jar Binks to be promoted to Bombad General?”

Ex Machina, Universal Pictures (2014) / Source: Vimeo

Ex Machina directed by Alex Garland. This film, released in 2014, moves beyond the traditional questions surrounding the feasibility of Artificial Intelligence (AI) and the Turing test to explore the darker side of synthetic beings, assuming that such intelligence is achievable and that the test can be passed. The film is a cautionary tale of what might be possible at the extreme edge of AI computing and innovation, where control may be fleeting or even an illusion. The Army may never face the same consequences that the characters in the film face, but it can learn from their lessons. AI is a hotly debated topic, with some saying it will bring about the end of days and others saying generalized AI will never exist. With a future this muddy, one must be cautious when exploring new and undefined technology spaces that carry so much risk. As more robotic entities are operationalized, and AI further permeates the battlefield, future Soldiers and Leaders would do well to stay abreast of the potential for volatility in an already chaotic environment. If military AI progresses substantially, what will happen when we try to turn it off?

Astronaut and Lunar Module pilot Buzz Aldrin is pictured during the Apollo 11 extravehicular activity on the moon / Source: NASA

Apollo 11 directed by Todd Douglas Miller. As the United States prepares to celebrate the fiftieth anniversary of the first manned mission to the lunar surface later this summer, this inspiring documentary reminds audiences of just how audacious an achievement this was. Using restored archival audio recordings and video footage (complemented by simple line animations illustrating each of the spacecrafts’ maneuver sequences), Todd Miller skillfully re-captures the momentousness of this historic event, successfully weaving together a comprehensive point-of-view of the mission. Watching NASA and its legion of aerospace contractors realize the dream envisioned by President Kennedy eight years before serves to remind contemporary America that we once dared and dreamed big, and that we can do so again, harnessing the energy of insightful and focused leadership with the innovation of private enterprise. This uniquely American attribute may well tip the balance in our favor, given current competition and potential future conflicts with our near-peer adversaries in the Future Operational Environment.

Originally published by Penguin Random House on 3 July 2018

Artemis by Andy Weir. In his latest novel, following on the heels of his wildly successful The Martian, Andy Weir envisions an established lunar city in 2080 through the eyes of Jasmine “Jazz” Bashara, one of its citizen-hustlers, who becomes enmeshed in a conspiracy to control the tremendous wealth generated from the space and lunar mineral resources refined in the Moon’s low-G environment. His suspenseful plot, replete with descriptions of the science and technologies necessary to survive (and thrive!) in the hostile lunar environment, posits a late 21st century rush to exploit space commodities. The resultant economic boom has empowered non-state actors as new competitors on the global — er, extraterrestrial stage — from the Kenya Space Corporation (blessed by its equatorial location and reduced earth to orbit launch costs) to the Sanchez Aluminum mining and refining conglomerate, controlled by a Brazilian crime syndicate scheming to take control of the lunar city. Readers are reminded that the economic hegemony currently enjoyed by the U.S., China, and the E.U. may well be eclipsed by visionary non-state actors who dare and dream big enough to exploit the wealth that lies beyond the Earth’s gravity well.