7. Timeless Competitions

The nature of war remains inherently human and largely unchanging. That said, Mad Scientists must understand the changing character of warfare in the future Operational Environment, as discussed on pages 16-18 of The Operational Environment and the Changing Character of Future Warfare. Emerging technologies are so significant, extensive, and pervasive that warfare will be transformed – made faster, more destructive, and fought at longer ranges; targeting civilians and military personnel alike across the physical, cognitive, and moral dimensions; and (if waged effectively) securing its objectives before actual battle is joined. Yet even as the character of warfare changes dramatically, a number of timeless competitions will endure for the foreseeable future.

Finders vs Hiders. As in preceding decades, that which can be found, if unprotected, can still be hit. By 2050, it will prove increasingly difficult to stay hidden. Most competitors will have access to space-based surveillance, networked multi-static radars, a wide variety of drones and swarms of drones, and a vast array of passive and active sensors that are far cheaper to produce than the countermeasures required to defeat them. Quantum computing and advanced sensing will open new levels of situational awareness. Passive sensing, especially when combined with artificial intelligence and big-data techniques, may routinely outperform active sensors. Hiding will still be possible, but will require a dramatic reduction of thermal, electromagnetic, and optical signatures. More successful methods may involve “hiding” within an obscuring cloud of emitters and signals – presenting adversaries with a veritable needle in a stack of needles that look and emit alike.
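The economics of this decoy-based hiding can be framed with simple arithmetic. The back-of-envelope Python sketch below is illustrative only – one real emitter hidden among N identical decoys, facing a finder able to investigate a fixed number of candidates per surveillance cycle; all parameters are assumptions, not figures from the text.

```python
# Back-of-envelope model of the "needle in a stack of needles" idea: one real
# emitter hidden among n_decoys identical-looking decoys, facing a finder that
# can investigate k candidates per surveillance cycle. All numbers are
# illustrative assumptions, not figures from the source.

def expected_cycles_to_find(n_decoys: int, k_per_cycle: int,
                            finder_has_memory: bool = True) -> float:
    """Expected surveillance cycles before the real emitter is inspected."""
    n_candidates = n_decoys + 1  # decoys plus the one real emitter
    if finder_has_memory:
        # A finder that never revisits a candidate works through a random
        # permutation, finding the real emitter halfway through on average.
        return (n_candidates + 1) / (2 * k_per_cycle)
    # A memoryless finder makes each cycle an independent Bernoulli trial.
    p_hit = min(1.0, k_per_cycle / n_candidates)
    return 1.0 / p_hit

for n in (10, 100, 1000):
    with_mem = expected_cycles_to_find(n, k_per_cycle=5)
    without = expected_cycles_to_find(n, k_per_cycle=5, finder_has_memory=False)
    print(f"{n:>4} decoys: {with_mem:6.1f} cycles (methodical), {without:6.1f} (memoryless)")
```

The specific numbers matter less than the scaling: each additional convincing decoy buys the hider roughly another half-cycle of survival, while costing far less than the sensors hunting it.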

Strikers vs Shielders. Precision strike will improve exponentially through 2050, with the type of precision once reserved for high-end aerospace assets extended to all domains and every echelon of engagement. Combatants, both state and non-state, will have a host of advanced delivery options available to them, including advanced kinetic weapons, hypersonics, directed energy (including laser and microwave), and cyber. Space-based assets will become increasingly integrated into striker-shielder complexes, with sensors, anti-satellite weapons, and possibly space-to-earth strike platforms.
At the same time, and on the other end of the spectrum, it will be possible to deploy swarms of massed, low-cost, self-organizing unmanned systems (directed by bio-mimetic algorithms) to overwhelm opponents, offering an alternative to expensive, exquisite systems. With operational reach spanning from the strategic – including the homeland – to the tactical, the application of advanced fires from one domain to another will become routine. Strikers will deliver effects ranging from point precision to area suppression, using thermobarics, brilliant cluster munitions, and even a variety of nuclear, chemical, or biological systems. Shielders, for their part, will pursue an integrated approach to defense, targeting enemy finders, their linkages to strikers, or the strikers themselves.
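The “bio-mimetic algorithms” mentioned above are typified by boids-style flocking rules, in which each vehicle steers using only its neighbors’ positions and headings. The Python sketch below shows a single, minimal update step of such a rule; the parameters and neighborhood radius are invented for illustration and describe no fielded system.

```python
import numpy as np

# One update step of a boids-style flocking rule, the classic example of
# bio-mimetic self-organization. Every agent steers using only its neighbors'
# positions and velocities; all parameters here are invented for illustration.

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(30, 2))   # 30 agents on a 100x100 plane
vel = rng.normal(size=(30, 2))

def step(pos, vel, dt=1.0, cohesion=0.01, separation=0.10, align=0.05, radius=10.0):
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist < radius) & (dist > 0.0)
        if not nbrs.any():
            continue
        # Cohesion: steer toward the local center of mass.
        new_vel[i] += cohesion * (pos[nbrs].mean(axis=0) - pos[i])
        # Separation: steer away from the nearest neighbor to avoid crowding.
        nearest = pos[nbrs][dist[nbrs].argmin()]
        new_vel[i] += separation * (pos[i] - nearest)
        # Alignment: match the neighbors' average heading.
        new_vel[i] += align * (vel[nbrs].mean(axis=0) - vel[i])
    return pos + dt * new_vel, new_vel

pos, vel = step(pos, vel)   # repeated calls yield emergent, leaderless flocking
```

The design point is that no agent is in charge and no central controller exists, which is exactly what makes such swarms cheap to mass and hard to decapitate.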

Protection vs Access. While protection vs. access is generally thought of in physical terms, an increasingly prevalent competition is emerging over cyber protection and access to data. Data is increasingly important, as it underpins AI, machine learning, decision-making, and battlefield management. Because data is vital yet often sensitive, there is tension between each side’s need to access both friendly and adversary information and its need to protect that information.

Planning and Judgement vs Reaction and Autonomy. The mid-Century duel for the initiative has a unique character. New operational tools offer extraordinary speed and reach and often precipitate unintended consequences. Commanders will need to open multi-domain windows through which to deliver effects, balancing deliberate planning to set conditions with “l’audace” — the ability to rapidly exploit opportunities and strike at vulnerabilities as they appear — thereby achieving success against sophisticated defensive deployments and shielder complexes.

This will place an absolute premium on ISR, as well as on intelligence analysis augmented by AI, big data, and advanced analytic techniques, to determine the conditions on the battlefield – specifically when, and for how long, a window of opportunity is open. On the defensive, a commander will face increasingly short decision cycles, with automated and artificial intelligence-assisted decisions becoming the norm. Man-machine teaming will be essential to staff planning, with carefully trained, educated, and possibly cognitively performance-enhanced personnel working to create and exploit opportunities. This means that Armies will no longer merely adapt between wars, but will do so between and even during short engagements.

Escalation vs De-Escalation. The competition between the escalation and de-escalation of violence will be central to stability, deterrence, and strategic success. Violence is readily available on unprecedented scales to a wide range of actors. Conventional and cyber capabilities can be so potent as to generate effects on the scale of WMD. State and non-state actors alike will employ hybrid strategies and “Gray Zone” operations, demonstrating a willingness to escalate conflict to a level of violence that exceeds an adversary’s interest in intervening. Long-range striker and shielder complexes extending from the terrestrial domains into space – taken together with cyber technology and more ubiquitous finders – are significantly destabilizing and allow a combatant freedom of maneuver to achieve objectives short of open war. The ability to escalate and de-escalate effectively along a scalable series of options will be a prominent feature of force design, doctrine, and policy by mid-Century.

These timeless competitions prompt the following questions:

1) What R&D implications might each of these competitions have? Will R&D be increasingly ceded to the private sector as technological advances become increasingly agnostic to defensive and offensive applications?

2) In what ways do technological shifts in society impact these timeless competitions? (e.g., does the emergence of the Internet of Things – and eventually the Internet of Everything – re-characterize Finders vs. Hiders?)

3) Does the democratization of technology and information increase the role of the Army in land warfare, or does the pervasive nature of these technologies and of cyber force the Army to integrate itself more fully into a whole-of-government approach?

4) What changes will the evolution of these timeless competitions bring to Army force structure, organization, strategy, tactics, training, and recruiting?

For further discussions regarding these Timeless Competitions, please see pages 43-49 of the Robotics, Artificial Intelligence & Autonomy Conference Final Report, and An Advanced Engagement Battlespace: Tactical, Operational and Strategic Implications for the Future Operational Environment.

6. Trends in Autonomy

“Control leads to compliance; autonomy leads to engagement.” – Daniel H. Pink

During the Robotics, Artificial Intelligence & Autonomy Conference, Georgia Tech Research Institute (GTRI), 7-8 March 2017, Mad Scientists addressed how these three interdependent technologies will play key roles in future military operations, including land operations.

In order to better address Autonomy’s relevance to future military operations, the Mad Scientist community identified the following Autonomy Trends:

Autonomy Definition. The Joint Concept for Robotics and Autonomous Systems defines autonomy as follows:

“… the level of independence that humans grant a system to execute a given task. It is the condition or quality of being self-governing to achieve an assigned task based on the system’s own situational awareness (integrated sensing, perceiving, analyzing), planning and decision-making. Autonomy refers to a spectrum of automation in which independent decision-making can be tailored for a specific mission, level of risk, and degree of human-machine teaming.”

Degrees of Autonomy. The phrase “spectrum of automation” alludes to the different degrees of autonomy:

Fully Autonomous: “Human Out of the Loop”: humans have no ability to intervene in real time.

Supervised Autonomous: “Human on the Loop”: humans can intervene in real time.

Semi-Autonomous: “Human in the Loop”: machines wait for human input before taking action.

Non-Autonomous (Remote Control): machines are guided entirely via remote control; there is no autonomy in the system.
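To make the spectrum concrete, the following minimal Python sketch encodes the four degrees as a simple taxonomy with one illustrative engagement gate; every name and rule in it is invented for illustration and implies nothing about actual doctrine or any fielded system.

```python
from enum import Enum

class AutonomyDegree(Enum):
    """The 'spectrum of automation' above, encoded as a simple taxonomy."""
    NON_AUTONOMOUS = "remote control: human directly guides the machine"
    SEMI_AUTONOMOUS = "human in the loop: machine waits for human input"
    SUPERVISED_AUTONOMOUS = "human on the loop: machine acts, human may intervene"
    FULLY_AUTONOMOUS = "human out of the loop: no real-time intervention"

def requires_human_approval(degree: AutonomyDegree) -> bool:
    """Illustrative rule: only the lower half of the spectrum waits for a human."""
    return degree in (AutonomyDegree.NON_AUTONOMOUS, AutonomyDegree.SEMI_AUTONOMOUS)

for degree in AutonomyDegree:
    print(f"{degree.name:<22} approval required: {requires_human_approval(degree)}")
```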

Autonomy Baseline. Autonomy is already evident on the battlefield. At least 30 countries field defensive, human-supervised autonomous weapons such as the Aegis and Patriot. Some “fully autonomous” weapon systems are also emerging. The Israeli Harpy drone (an anti-radiation loitering munition) has been sold to India, Turkey, South Korea, and China, and China reportedly has reverse-engineered its own variant. The U.S. has also experimented with similar systems in the Tacit Rainbow and Low Cost Autonomous Attack System (LOCAAS) programs.

Autonomy Projections. Mad Scientists expect autonomy to evolve into solutions that are flexible, multi-modal, and goal-oriented, featuring trusted man-machine collaboration, distributed autonomy, and continuous learning.

Collaborative Autonomy will enable systems to learn and adapt to a new task based on mere demonstration of the task by end-users (i.e., Soldiers), who teach the robot what to do (a minimal sketch of this learning-from-demonstration idea follows these projections).

Distributed Autonomy will enable dynamic team formation from heterogeneous platforms, including coordination in settings with limited or impaired communication and the emergence of new tactics and strategies enabled by multi-agent capabilities.

Continuous Learning will provide a continuous, incremental evolution and expansion of capabilities, including the incorporation of high-level guidance (such as human instruction and changes in laws / ROEs / constraints) and “Transfer Learning.”
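As a concrete, if toy, illustration of the learning-from-demonstration idea behind Collaborative Autonomy, the Python sketch below fits a linear policy to a handful of demonstrated state-action pairs via least squares (behavior cloning); the task, dimensions, and noise level are all invented for the example.

```python
import numpy as np

# Toy behavior cloning: fit a linear policy (action = state @ W) from a handful
# of demonstrated state-action pairs, standing in for a Soldier showing a robot
# what to do. Task, dimensions, and noise level are invented for illustration.

rng = np.random.default_rng(0)
true_W = np.array([[1.0, 0.2],     # the "expert" behavior the demonstrations
                   [-0.5, 0.8]])   # implicitly encode (unknown to the learner)

demo_states = rng.normal(size=(20, 2))                        # what the expert saw
demo_actions = demo_states @ true_W + rng.normal(scale=0.01, size=(20, 2))

# Least-squares fit of the policy purely from the demonstrations.
W_hat, *_ = np.linalg.lstsq(demo_states, demo_actions, rcond=None)

new_state = np.array([0.5, -1.0])
print("learned action for an unseen state:", new_state @ W_hat)
print("expert action for comparison:      ", new_state @ true_W)
```

Real systems would of course use far richer policies and perception, but the workflow is the same: the end-user supplies examples, not code.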

Autonomy Challenges. Mad Scientists acknowledged that the aforementioned “autonomy projections” pose the following challenges:

• Goal-Oriented Autonomy: Decision-making and adaptation, to include the incorporation of ethics and morality into those decisions.

• Trusted Collaboration: The challenge of trust between man and machine continues to be a dominant theme. Machines must properly perceive human goals and preserve their own system integrity while achieving joint man-machine goals in a manner explainable to – and completely trusted by – the human component.

• Distributed Systems: Rethinking the execution of tasks using multiple, distributed agents while preserving command-level understanding and decision-making adds another layer of complexity to the already challenging task of designing and building autonomous systems.

• Transfer Learning: Learning by inference from similar tasks must address the challenges of seamless adaptation to changing contexts and environments, including the contextual inference of missing data and physical attributes.

• High Reliability Theory: “Normal Accident Theory” holds that accidents are inevitable in complex, tightly-coupled systems. “High Reliability Theory” asserts that organizations can contribute significantly to the prevention of such accidents. Given the significant complexity and tight coupling of future autonomous systems, applying high reliability theory to emerging technologies that are not yet well understood poses an obvious challenge.

Relevance of Autonomous Systems. Hollywood inevitably envisions autonomous systems as either predisposed to malevolence, destined to “go rogue” and turn on their creators at the earliest opportunity, or coolly logical, dispassionately taking actions with disastrously unintended consequences for humankind. For the foreseeable future, however, no autonomous system will have the breadth, robustness, and flexibility of human cognition. That said, autonomous systems offer the potential for speed, mass, and penetration in future lethal, high-threat environments – minimizing risks to our Soldiers.

For additional insights regarding Autonomy Trends, watch “Unmanned and Autonomous Systems,” presented by Mr. Paul Scharre, Senior Fellow / Director, Future of Warfare Initiative, Center for a New American Security, during the GTRI Conference last spring.

5. Personalized Warfare

The future of warfare, much like the future of commerce, will be personalized.

Emerging threat capabilities targeting the genome; manipulating individuals’ personal interests, lives, and familial ties; and subtle coercive / subversive avenues of attack against the human brain will transform war into something far more personalized, scalable, and potentially more attractive to nation-states, non-state actors, and super-empowered individuals.

A recent short dystopian film created by the Future of Life Institute, entitled Slaughterbots, highlights the dangers of lethal autonomy in the future but also frames what personalized warfare could look like. Individuals are targeted very precisely, based on their social media presence and activism against policies deemed important by some government, non-state actor, or even super-empowered individual. While it is not shown in the film, it is plausible that machine learning and artificial intelligence would assist in such targeting and lethal autonomous efforts. The ever more connected nature of personal lives (familial and social connections) and sensitive personal information – ethnicity, DNA, biometrics, detailed medical and psychological information – exposed through social media, commerce, work, and financial transactions makes these vulnerabilities even more prominent.

Additionally, due to advances in neuroscience – mapping, attacking, changing, and protecting the brain – individuals can be targeted even more specifically, and environments (populated by people) could truly be shaped in ways that were never possible before.

The focus of warfare may shift from being nation-state centered to something more personal that targets specific individuals, their families, ethnic, societal, or interest groups, or defined segments of populations. This raises a number of important questions regarding the future of ethics, rules of engagement, and the scope of warfare:

1) Given the potential for adversaries to target populations based on their genomes, how do civil societies deter, defend, and (as necessary) respond to such attacks?

2) What constitutes an act of war? What happens when gray zone and asymmetric attacks extend to the living room?

3) Does war become increasingly enticing as attacks and effects can be so personalized?

4) Is influencing and changing the brain (through physical methods: bugs and drugs) the same as attacking someone? Does coercion through these capabilities constitute an act of war?

For further learning on the future of neuroscience in warfare, check out “Neurotechnology in National Security and Defense,” a presentation by Dr. James Giordano, Chief of Georgetown University’s Neuroethics Studies Program, as well as a podcast featuring Dr. Giordano by our partners at the Modern War Institute.

4. Ethical Dilemmas of Future Warfare

At the Visualizing Multi Domain Battle 2030-2050 Conference, Georgetown University, 25-26 July 2017, Mad Scientists examined the requirement for United States policymakers and warfighters to address the ethical dilemmas arising from an ever-increasing convergence of Artificial Intelligence (AI) and smart technologies, both in battlefield systems and embedded within individual Soldiers. While these disruptive technologies have the potential to lessen the burden of many military tasks, they may come with associated ethical costs. The Army must be prepared to enter new ethical territory and make difficult decisions about the creation and employment of these combat multipliers.

Human Enhancement:

“Human enhancement will undoubtedly afford the Soldier a litany of increased capabilities on the battlefield. Augmenting a human with embedded communication technology, sensors, and muscular-skeletal support platforms will allow the Soldier to offload many physical, mundane, or repetitive tasks but will also continue to blur the line between human and machine. Some of the many ethical/legal questions this poses are: at what point does a Soldier become more machine than human, and how will that Soldier be treated and recognized by law? At what point does a person lose their legal personhood? If a person’s nervous system is intact, but other organs and systems are replaced by machines, is he/she still a person? These questions do not have concrete answers presently, but, more importantly, they do not have policy that even begins to address them. The Army must take these implications seriously and draft policy that addresses these issues now before these technologies become commonplace. Doing so will guide the development and employment of these technologies to ensure they are administered properly and protect Soldiers’ rights.”

Fully Autonomous Weapons:

“Fully autonomous weapons with no human in the loop will be employed on the battlefield in the near future. Their employment may not necessarily be by the United States, but they will be present on the battlefield by 2050. This presents two distinct dilemmas regarding this technology. The first dilemma is determining responsibility when an autonomous weapon does not act in a manner consistent with our expectations. For a traditional weapon, the decision to fire always comes back to a human counterpart. For an autonomous weapon, that may not be the case. Does that mean that the responsibility lies with the human who programmed the machine? Should we treat the programmer the same as we treat the human who physically pulled the trigger? Current U.S. policy doesn’t allow for a weapon to be fired without a human in the loop. As such, this alleviates the responsibility problem and places it on the human. However, is this the best use of automated systems and, more importantly, will our adversaries adhere to this same policy? It’s almost assured that the answer to both questions is no. There is little reason to believe that our adversaries will employ the same high level of ethics as the Army. This means Soldiers will likely encounter autonomous weapons that can target, slew, and fire on their own on the future battlefield. The human Soldier facing them will be slower, less accurate, and therefore less lethal. So the Army is at a crossroads where it must decide if employing automated weapons aligns with its ethical principles or if they will be compromised by doing so. It must also be prepared to deal with a future battlefield where it is at a distinct disadvantage as its adversaries can fire with speed and accuracy unmatched by humans. Policy must address these dilemmas and discussion must be framed in a battlefield where autonomous weapons operating at machine speed are the norm.”

Given the inexorable advances and implementation of the aforementioned technologies, how will U.S. policymakers and warfighters tackle the following concomitant ethical dilemmas:

• How do these technologies affect U.S. research and development, rules of engagement, and in general, the way we conduct war?

• Must the United States cede some of its moral obligations and ethical standards in order to gain/retain relative military advantage?

• At what point does the efficacy of AI-enabled targeting and decision-making render it unethical to maintain a human in the loop?

For additional insights regarding these dilemmas, watch the Ethics and the Future of War panel discussion, facilitated by LTG Dubik (USA-Ret.), from this Georgetown conference.

3. Redefining the Role of Soldiers on the Future Battlefield

Will future Soldiers be augmented, hyper-enhanced fighters; force managers of unmanned and autonomous/semi-autonomous systems; or an amalgamation of the two?

Conflict in the mid-21st Century will witness the proliferation of unmanned, robotic, semi-autonomous, and autonomous weapons, platforms, and combatants that will dramatically change the role of Soldiers on the battlefield. At the Visualizing Multi Domain Battle 2030-2050 Conference, Georgetown University, 25-26 July 2017, the Mad Scientist community discussed how these capabilities will supplement or supplant humans in both support and combat roles that are dull, dirty, and/or dangerous. Potential adversaries may also use them to undertake operations that are morally, legally, or ethically questionable. Artificial intelligence (AI) and autonomy will provide essential time-critical decision-making support to Leaders and Warfighters regarding force employment courses of action and the authorization/ordering of lethal force. While the nature of warfare will remain intrinsically human as long as its aim is the imposition of our will over that of an adversary, the character of warfare will change as the tools used to execute warfare become increasingly less human.

“As Artificial Intelligence matures and machines on the battlefield become more pervasive, the future U.S. Soldier will be equipped to offload an increasing number of responsibilities normally reserved for a human. This will range from the obvious mundane and repetitive tasks, to ones that require accuracy and speed that only a machine can deliver. This will also include tasks that are inherently dangerous or life threatening. As the intensity of conflict increases, machines will occupy a greater portion of the range of military operations and human occupation will diminish. This is not to say wars will no longer be fought by humans, rather, it will mean that the role of the human on the battlefield will need to be redefined. In the context of the range of military operations, in the 2030-2050 timeframe, human operations will be machine-assisted (i.e., fully integrated man-machine), then move on to machine operations that will be human-assisted. Certain operations, especially those on the low-intensity spectrum, will remain better served with machine-assisted humans; conversely, high-intensity conflict operations will be fought and occupied largely with robotic systems with the potential for human intervention in a best case scenario (man-on-the-loop).”

As future military operations range from Human Operations (Machine-assisted) to Human-Machine (Hybrid Operations) to Machine Operations (Human-directed) based on the level of conflict intensity:

• How do Warfighters ensure continued compliance with ethical standards, given ever shortening decision cycles?

The DoD Law of War Manual (specifically para. 6.5.9.4) and DoD Directive 3000.09, Autonomy in Weapon Systems, address U.S. policy regarding autonomy in weapon systems. The latter Directive, however, specifically “does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations.”

• Given the continuing evolution of the Internet of Things (IoT) into the Internet of Everything (IoE), will the potential for Cyber operations to result in lethal effects necessitate a revision of this Directive?

For more on the transformative impact of AI, robotics, and autonomy on our Soldiers in future conflicts, watch the Patrolling in the Infosphere presentation by Mr. Mathison Hall from the aforementioned Georgetown University Conference.