105. Emerging Technologies as Threats in Non-Kinetic Engagements

[Editor’s Note:  Mad Scientist Laboratory is pleased to present today’s post by returning guest blogger and proclaimed Mad Scientist Dr. James Giordano and CAPT (USN – Ret.) L. R. Bremseth, identifying the national security challenges presented by emerging technologies, specifically when employed by our strategic competitors and non-state actors alike in non-kinetic engagements.

Dr. Giordano’s and CAPT Bremseth’s post is especially relevant, given the publication earlier this month of TRADOC Pamphlet 525-3-1, U.S. Army in Multi-Domain Operations 2028, and its solution to the “problem of layered standoff,” namely “the rapid and continuous integration of all domains of warfare to deter and prevail as we compete short of armed conflict; penetrate and dis-integrate enemy anti-access and area denial systems; exploit the resulting freedom of maneuver to defeat enemy systems, formations and objectives and to achieve our own strategic objectives; and consolidate gains to force a return to competition on terms more favorable to the U.S., our allies and partners.”]

“Victorious warriors seek to win first then go to war, while defeated warriors go to war first then seek to win.” — Sun Tzu

Non-kinetic Engagements

Political and military actions directed at adversely impacting or defeating an opponent often entail clandestine operations, which can be articulated across a spectrum ranging from overt warfare to subtle “engagements.” Routinely, the United States, along with its allies (and adversaries), has employed clandestine tactics and operations across the kinetic and non-kinetic domains of warfare. Arguably, clandestine kinetic operations are employed more readily because these collective activities often occur after the initiation of conflict (i.e., “Right of Bang”), and their effects may be observed (to varying degrees) and/or measured. Because clandestine non-kinetic activities are less visible and more insidious, they may be particularly (or more) effective, precisely because they often go unrecognized and occur “Left of Bang.” Other nations, especially adversaries, understand the relative economy of force that non-kinetic engagements enable and are increasingly focused upon developing and articulating advanced methods for such operations.

Much has been written about the fog of war. Non-kinetic engagements can create unique uncertainties prior to and/or outside of traditional warfare, precisely because their qualitatively and quantitatively “fuzzy boundaries” distinguish them from blatant acts of war. The “intentionally induced ambiguity” of non-kinetic engagements can establish plus-sum advantages for the executor(s) and zero-sum dilemmas for the target(s). For example, a limited-scale non-kinetic action that exerts demonstrably significant effects but does not meet defined criteria for an act of war places the targeted recipient(s) at a disadvantage: first, the criteria for response (and proportionality) are vague, and therefore any response could be seen as questionable; and second, if the targeted recipient(s) respond with bellicose action(s), there is a considerable likelihood that they may be viewed as (or provoked to be) the aggressor(s), and therefore be susceptible to some form of retribution that may be regarded as sanctionable.

Nominally, non-kinetic engagements often utilize non-military means to expand the effect-space beyond the conventional battlefield. The Department of Defense and Joint Staff do not have a well agreed-upon lexicon to define and express the full spectrum of current and potential activities that constitute non-kinetic engagements. It is unfamiliar – and can be politically uncomfortable – to use non-military terms and means to describe non-kinetic engagements. As previously noted, it can be politically difficult – if not precarious – to militarily define and respond to non-kinetic activities.

Non-kinetic engagements are best employed to incur disruptive effects in and across various dimensions (e.g., biological, psychological, social) that can lead to intermediate- to long-term destructive manifestations (in a number of possible domains, ranging from the economic to the geo-political). These latent disruptive and destructive effects should be framed and regarded as “Grand Strategy” approaches that evoke outcomes in a “long engagement/long war” context, rather than merely in short-term tactical situations.1

Thus, non-kinetic operations must be seen and regarded as “tools of mass disruption,” incurring “rippling results” that can evoke both direct and indirect de-stabilizing effects. These effects can occur and spread: 1) from the cellular scale (e.g., affecting the physiological function of a targeted individual) to the socio-political scale (e.g., manifesting effects in response to threats, burdens, and harms incurred by individuals and/or groups); and 2) from the personal (e.g., affecting a specific individual or particular group of individuals) to the public dimensions in effect and outcome (e.g., by incurring broad-scale reactions and responses to key non-kinetic events).2

Given the increasing global stature, capabilities, and postures of Asian nations, it becomes increasingly important to pay attention to aspects of classical Eastern thought (e.g., Sun Tzu) relevant to bellicose engagement. Of equal importance is the recognition of various nations’ dedicated enterprises in developing methods of non-kinetic operations (e.g., China; Russia), and to understand that such endeavors may not comport with the ethical systems, principles, and restrictions adhered to by the United States and its allies.3, 4 These differing ethical standards and practices, if and when coupled to states’ highly centralized abilities to coordinate and to synchronize activity of the so-called “triple helix” of government, academia, and the commercial sector, can create synergistic force-multiplying effects to mobilize resources and services that can be non-kinetically engaged.5 Thus, these states can target and exploit the seams and vulnerabilities in other nations that do not have similarly aligned, multi-domain, coordinating capabilities.

Emerging Technologies – as Threats

Increasingly, emerging technologies are being leveraged as threats in such non-kinetic engagements. While the threats posed by radiological, nuclear, and (high-yield) explosive technologies have been, and to date remain, generally well surveilled and controlled, new and convergent innovations in the chemical and biological sciences, the cyber sciences, and engineering are yielding tools and methods that are not yet completely or effectively addressed. An overview of these emerging technologies is provided in Table 1 below.

Table 1

Of key interest are the present viability and current potential value of the brain sciences to be engaged in these ways.6, 7, 8 The brain sciences entail and yield new technologies that can be applied to affect chemical and biological systems in both kinetic ways (e.g., chemical and biological ‘warfare’ conducted in ways that may sidestep definition – and governance – by existing treaties and conventions, such as the Biological and Toxin Weapons Convention (BTWC) and the Chemical Weapons Convention (CWC)) and non-kinetic ways (which fall outside of, and therefore are not explicitly constrained by, the scope and auspices of the BTWC or CWC).9, 10

As recent incidents (e.g., “Havana Syndrome”; the use of novichok; the infiltration of foreign-produced synthetic opioids into US markets) have demonstrated, the brain sciences and technologies have utility to affect “minds and hearts” in (kinetic and non-kinetic) ways that elicit biological, psychological, socio-economic, and political effects, which can be clandestine, covert, or attributable, and which evoke multi-dimensional ripple effects in particular contexts (as previously discussed). Moreover, apropos of current events, the use of gene editing technologies and techniques to modify existing microorganisms,11 and/or to selectively alter human susceptibility to disease,12 reveals the ongoing, iterative multi-national interest in, and considered weaponizable use(s) of, emerging biotechnologies as instruments to incur “precision pathologies” and “immaculate destruction” of selected targets.

Toward Address, Mitigation, and Prevention

Without philosophical understanding of and technical insight into the ways that non-kinetic engagements entail and affect civilian, political, and military domains, the coordinated assessment and response to any such engagement(s) becomes procedurally complicated and politically difficult. Therefore, we advocate and propose increasingly dedicated efforts to enable sustained, successful surveillance, assessment, mitigation, and prevention of the development and use of Emerging Technologies as Threats (ETT) to national security. We posit that implementing these goals will require coordinated focal activities to:  1) increase awareness of emerging technologies that can be utilized as non-kinetic threats; 2) quantify the likelihood and extent of threat(s) posed; 3) counter identified threats; and 4) prevent or delay adversarial development of future threats.

Further, we opine that a coordinated enterprise of this magnitude will necessitate a Whole of Nations approach so as to mobilize the organizations, resources, and personnel required to meet other nations’ synergistic triple helix capabilities to develop and non-kinetically engage ETT.

Utilizing this approach will necessitate establishment of:

1. An office (or network of offices) to coordinate academic and governmental research centers to study and to evaluate current and near-future non-kinetic threats.

2. Methods to qualitatively and quantitatively identify threats and the potential timeline and extent of their development.

3. A variety of means for protecting the United States and allied interests from these emerging threats.

4. Computational approaches to create and to support analytic assessments of threats across a wide range of emerging technologies that are leverageable and afford purchase in non-kinetic engagements.

In light of other nations’ activities in this domain, we view the non-kinetic deployment of emerging technologies as a clear, present, and viable future threat. Therefore, as we have stated in the past,13, 14, 15 and unapologetically reiterate here, it is not a question of if such methods will be utilized, but rather of when, to what extent, and by which group(s) – and, most importantly, of whether the United States and its allies will be prepared for these threats when they are rendered.

If you enjoyed reading this post, please also see Dr. Giordano’s presentations addressing:

War and the Human Brain podcast, posted by our colleagues at Modern War Institute on 24 July 2018.

Neurotechnology in National Security and Defense from the Mad Scientist Visioning Multi-Domain Battle in 2030-2050 Conference, co-hosted by Georgetown University in Washington, D.C., on 25-26 July 2017.

Brain Science from Bench to Battlefield: The Realities – and Risks – of Neuroweapons from Lawrence Livermore National Laboratory’s Center for Global Security Research (CGSR), on 12 June 2017.

Mad Scientist James Giordano, PhD, is Professor of Neurology and Biochemistry, Chief of the Neuroethics Studies Program, and Co-Director of the O’Neill-Pellegrino Program in Brain Science and Global Law and Policy at Georgetown University Medical Center. He also currently serves as Senior Biosciences and Biotechnology Advisor for CSCI, Springfield, VA, and has served as Senior Science Advisory Fellow of the Strategic Multilayer Assessment Group of the Joint Staff of the Pentagon.

L. R. Bremseth, CAPT, USN SEAL (Ret.), is Senior Special Operations Forces Advisor for CSCI, Springfield, VA. A veteran of more than 29 years in the US Navy, he commanded SEAL Team EIGHT and Naval Special Warfare GROUP THREE, and completed numerous overseas assignments. He also served as Deputy Director, Operations Integration Group, for the Department of the Navy.

This blog is adapted with permission from a whitepaper by the authors submitted to the Strategic Multilayer Assessment Group/Joint Staff Pentagon, and from a manuscript currently in review at HDIAC Journal. The opinions expressed in this piece are those of the authors, and do not necessarily reflect those of the United States Department of Defense, and/or the organizations with which the authors are involved. 


1 Davis Z, Nacht M. (Eds.) Strategic Latency: Red, White and Blue: Managing the National and International Security Consequences of Disruptive Technologies. Livermore, CA: Lawrence Livermore Press, 2018.

2 Giordano J. Battlescape brain: Engaging neuroscience in defense operations. HDIAC Journal 3:4: 13-16 (2017).

3 Chen C, Andriola J, Giordano J. Biotechnology, commercial veiling, and implications for strategic latency: The exemplar of neuroscience and neurotechnology research and development in China. In: Davis Z, Nacht M. (Eds.) Strategic Latency: Red, White and Blue: Managing the National and International Security Consequences of Disruptive Technologies. Livermore, CA: Lawrence Livermore Press, 2018.

4 Palchik G, Chen C, Giordano J. Monkey business? Development, influence and ethics of potentially dual-use brain science on the world stage. Neuroethics, 10:1-4 (2017).

5 Etzkowitz H, Leydesdorff L. The dynamics of innovation: From national systems and “Mode 2” to a Triple Helix of university-industry-government relations. Research Policy, 29: 109-123 (2000).

6 Forsythe C, Giordano J. On the need for neurotechnology in the national intelligence and defense agenda: Scope and trajectory. Synesis: A Journal of Science, Technology, Ethics and Policy 2(1): T5-8 (2011).

7 Giordano J. (Ed.) Neurotechnology in National Security and Defense: Technical Considerations, Neuroethical Concerns. Boca Raton: CRC Press (2015).

8 Giordano J. Weaponizing the brain: Neuroscience advancements spark debate. National Defense, 6: 17-19 (2017).

9 DiEuliis D, Giordano J. Why gene editors like CRISPR/Cas may be a game-changer for neuroweapons. Health Security 15(3): 296-302 (2017).

10 Gerstein D, Giordano J. Re-thinking the Biological and Toxin Weapons Convention? Health Security 15(6): 1-4 (2017).

11 DiEuliis D, Giordano J. Gene editing using CRISPR/Cas9: implications for dual-use and biosecurity. Protein and Cell 15: 1-2 (2017).

12 See, for example: https://www.vox.com/science-and-health/2018/11/30/18119589/crispr-technology-he-jiankui (Accessed 2 December 2018).

13 Giordano J, Wurzman R. Neurotechnology as weapons in national intelligence and defense. Synesis: A Journal of Science, Technology, Ethics and Policy 2: 138-151 (2011).

14 Giordano J, Forsythe C, Olds J. Neuroscience, neurotechnology and national security: The need for preparedness and an ethics of responsible action. AJOB-Neuroscience 1(2): 1-3 (2010).

15 Giordano J. The neuroweapons threat. Bulletin of the Atomic Scientists 72(3): 1-4 (2016).

102. The Human Targeting Solution: An AI Story

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger CW3 Jesse R. Crifasi, envisioning a combat scenario in the not-too-distant future, teeing up the twin challenges facing the U.S. Army in incorporating Artificial Intelligence (AI) across the force — “human-in-the-loop” versus “human-out-of-the-loop,” and trust. In it, CW3 Crifasi describes the inherent tension between human critical thinking and the benefits of Augmented Intelligence facilitating warfare at machine speed. Enjoy!]

“CAITT, let’s re-run the targeting solution for tomorrow’s engagement… again,” asked Chief Warrant Officer Five Robert Menendez, in a not altogether annoyed tone of voice. Considering this was the fifth time he had asked, the control Bob was exercising was nothing short of heroic to those who knew him well. Fortunately, CAITT, short for Commander’s Artificially Intelligent Targeting Tool, did not seem to notice. Bob quietly thanked the nameless software engineer who had not programmed it to recognize the sarcasm and vitriol he felt when making the request.

“Chief, do you really think she is going to come up with anything different this time? You know that old saying about the definition of insanity, right?” asked DeMarcus Austin. Bob shot the 28-year-old Captain a glare, clearly indicating that he knew exactly what the young man was implying. It was 0400 hours, and the entire Brigade Combat Team (BCT) was preparing to defend along its forward boundary. This, coming after an exhausting three-day rapid deployment from their forward staging bases in Germany, had everyone on edge. In short, nothing had gone as expected or as planned for in the Operations Plan (OPLAN).

The 323rd Tank Division of the UBRA, short for Unified Belorussian Russian Alliance, was a mere 68 kilometers from the BCT’s Forward Line of Troops, or FLOT. They would be in the BCT’s primary engagement area in six hours. Between 1EU DIV’s and the EU Expeditionary Air Force’s efforts, nothing was slowing UBRA’s advance towards the critical seaport city of Gdansk, Poland.

All the assumptions about air supremacy and cyber domination went out the window after the first UBRA tactical Electromagnetic Pulse (EMP) weapon detonated over Vilnius, Lithuania, 48 hours prior. A brilliant strategic move, the EMP fried every unshielded networked computer system the Allied Forces possessed. The Coalition AI Partner Network, so heavily relied upon to execute the OPLAN, was inaccessible, as was every weapon system that linked to it. Right about now, Bob wished that CAITT had been one of those systems.

Luckily for him and his boss, Colonel Steph “Duke” Ducalis, CAITT was designed with an internal Faraday shield preventing it and most of the U.S. Army’s other AI systems from suffering the same catastrophic damage. Unfortunately, the EU Armed Forces did not heed the same warnings and indicators. They were essentially crippled as they fervently worked to repair the damage. With the majority of U.S. military might committed to the Pacific Theatre, Colonel Ducalis’ BCT, a holdover from the old NATO alliance, was the lone American combat unit forward deployed in Western Europe. Alone and unafraid, as they say.

“Sir…” said CAITT, snapping Bob out of his fatigue-induced musings, “all data still indicates that engaging the 323rd’s logistical assembly areas in Elblag with our M56 Long-Range High-Velocity Missiles will compel their defeat. I estimate their advance will cease approximately 18 hours after the direct fire battle commences. Given all of the variables, this is the optimal targeting solution.” Bob really hated how CAITT dispassionately stated her “optimal targeting solution” in that sultry female tone. Clearly, the same software engineer who had ensured CAITT was durable also had a soft spot for British accents.

“CAITT, that makes no sense!” Bob stated exasperatedly. “The 323rd has approximately 250 T-90 MBTs — even if they expend all their fuel and munitions in that 18 hours, they will still overrun our defensive positions in less than six. We only have a single armored battalion with 35 FMC LAV3s. Even if they meet 3-1 K-kill ratios, we will not be able to hold our position. If they dislodge the LAVs, the dismounted infantrymen won’t stand a chance. We need to target the C2 nodes of their lead tank regiment now with the M56s. If we can neutralize their centralized command and control and delay their rate of march, it may give the EUAF enough time to get us those CAS and AI sorties they promised,” replied Bob. “That’s the right play, space for time.”

“I am sorry, Mr. Menendez. I have no connection to the coalition network and cannot get a status update for the next Air Tasking Order. There is no confirmation that our Air Support Requests were received. I am issuing the target nominations to 2-142 HIMARS; they are moving towards their Position Areas for Artillery now, airspace coordination is proceeding, and Colonel Ducalis is receiving his Commander’s Intervention Brief (CIB) now. Pending his override, there is nothing you can do.” CAITT’s response almost sounded condescending to Bob; but then again, he remembered a time when human staff officers made recommendations to the boss, not smart-ass video game consoles.

“Chief, shouldn’t we just go with CAITT’s solution? I mean, she has all the raw data from the S2’s threat template and the weaponeering guidance that you built. CAITT is the joint program of record; we have to use it, don’t we?” asked Captain Austin. Bob did not blame the young man for saying that. After all, this is what the Army wanted: staff officers who were more technicians and data managers than tacticians. The young man was simply not trained to question the AI’s conclusions.

“No sir, we should not, and by the way, I really hate how you call it a she,” answered Bob as he pondered his dilemma. Dammit! I’m the freaking Targeting Officer; I own this process, not this stupid thing… he thought for about five seconds before his instincts reasserted control of his senses.

Quickly jumping out of his chair, Bob left Captain Austin to oversee the data refinement and went outside to seek out the Commander’s Joint Lightweight Tactical Vehicle (JLTV). It took him a moment to locate it under the winter camouflage shielding; Polish winters were just as brutal as advertised.

I must be getting old, Bob mused to himself, the cold air biting into his face. After twenty-five years of service, despite countless combat deployments in the Middle East, he was starting to get complacent. It was easy to think like young Captain Austin. He never should have trusted CAITT in the first place. It was so easy to let it make the decisions for you that many just stopped thinking altogether. The CIB would be Bob’s last chance to convince the boss that CAITT’s solution was wrong and he was right.

Bob entered the camo shield behind the JLTV, constructing his argument to the boss in his mind. Colonel Ducalis had no time to entertain lengthy debate, this Bob knew. The fight was moving just too fast. Information is the currency of decision-making, and he would get, at best, about twenty seconds to make his case before something else grabbed the boss’s attention. CAITT would already be running the targeting solution straight to the boss via his Commander’s Oculatory Device, jokingly called the “COD,” referencing the old bawdy medieval term. Colonel Ducalis, already wearing the COD when Bob came in, was oblivious to everything else around him. Designed to construct a virtual and interactive battlefield environment, the COD worked almost too well. Even as Bob came in, CAITT was constructing the virtual battlefield, displaying missile aimpoints, HIMARS firing positions, airspace coordination measures, and detailed damage predictions for the target areas.

Bob could not understand how one person could absorb all that visual information in one sitting, but Colonel Ducalis was an exceptional commander. Standing nearby was the boss’s ever-present guardian, Major Lawrence Atlee, the BCT XO, acting as always like a consigliere to his boss. His annoyance at Bob’s presence was evident from the scowl Bob received as he entered unannounced and, more egregiously, unrequested by the XO.

“Chief, what do you need?” asked Atlee, in his typically hurried tone, indicating that the boss should not be disturbed for all but the most serious reasons.

“Sir, it’s imperative I talk to the boss right now,” Bob demanded, somewhat out of breath — again, old age catching up. Without providing a reason to the XO, Bob moved directly to Colonel Ducalis and gently touched his arm. One did not shake a Brigade Commander, especially a former West Point Rugby player the size of Duke. The XO was not pleased.

“Bob, what’s up? I was just reviewing CAITT’s targeting solution,” said Duke as he lifted the COD off his face and saw his very distraught looking Targeting Officer. That’s hopeful, thought Bob, most Commanders would not even have bothered, simply letting the AI execute its solution.

Bob took a moment to compose himself and as he was about to pitch his case Atlee stepped in, “Sir, I’m very sorry. Chief here was just trying to let you know that he was ready to proceed.” Then turning to Bob he said in a manner that would not be confused as optional, “He was just leaving.”

Bob seized his chance as Duke looked right at him. They had served together for a long time. Bob remembered when Duke had asked him to come down from the 1EU Division Staff to fill his targeting officer billet. Undoubtedly, Duke trusted him and genuinely wanted to know what his concern was when he removed the COD in the first place. Bob owed it to him to give it to him straight.

“Sir, that is not correct,” Bob said, speaking hurriedly. “We have a serious problem. CAITT’s targeting solution is completely wrong. The variables and assumptions were all predicated on the EUAF having air and cyber superiority. Those plans went out the window the second that EMP detonated. With all those aircraft down for CPU hardware replacement and software re-installs, those data points are now irrelevant. CAITT doesn’t know how long that will take because it is delinked from the Coalition’s AI Partner Network. I managed to get a low-frequency transmission established with Colonel Collins in Warsaw, and he thinks they can get us some sorties in the next six hours. CAITT’s solution is ignoring the time-versus-space dynamic and going with a simple comparison-of-forces mathematical model. I’m betting it thinks that our casualties will be within acceptable limits after the 323rd expends all of its pre-staged consumable fuel and ammo. It thinks that we can hold our position if we cut off their re-supply. It may be right, but our losses will render us combat ineffective and unable to hold while 1EU DIV reconsolidates behind us.

“We need to implement this High Payoff Target List and Attack Guidance immediately, disrupting and attriting their lead maneuver formations. Sir, we need to play space for time,” Bob explained, hoping the sports analogy resonated, while simultaneously accessing his Fires Forearm Display, or FFaD, and transmitting the data to Duke’s COD with a wave of his hand.

“Sir, I am not sure we should be deviating from the AI solution,” Atlee started to interject. “To be candid, and no offense to Mr. Menendez, the Army is eliminating their billets anyway, since CAITT was fielded last year, same as it did for all the BCT S3s and FSOs. Their type of thinking is just not needed anymore, now that we have CAITT to do it for us.” Bob was amazed at how dispassionately Major Atlee stated this.

Bob, realizing where this was going, took a knee next to Duke. The Colonel was clearly as tired as everyone else. Bob leaned in to speak while Duke started to review the new battlespace geometries and combat projections in his COD. “Duke,” Bob said in a low tone of voice so Major Atlee could not easily overhear, “we’ve been friends a long time, and I’ve never given you a bad recommendation. Please, override CAITT. LTC Givens can reposition his HIMARS battalion, but he has to start doing it now. This is our only chance; once those missiles are gone, we won’t get them back.”

He then stood up and patiently waited. Bob understood that he had pushed things as far as he could. Duke was a good man, a fine commander, and would make the right decision, Bob was certain of it.

Taking off his COD and rubbing his eyes, Duke leaned back and sighed heavily, the weight of command taking its full effect.

“CAITT,” stated Colonel Ducalis. “I am initiating Falcon 06’s override prerogative. Issue Chief Menendez’s targeting solution to LTC Givens immediately. Larry, get a hold of 1EU DIV and tell them we can hold our positions for 24 hours. After that, we may have to withdraw, but we will live to fight another day. Right now, trading space for time may not be the optimal strategy, but it is the human one. Let’s go!”

If you enjoyed reading this post, please also see the following blog posts:

An Appropriate Level of Trust…

A Primer on Humanity: Iron Man versus Terminator

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

CW3 Jesse R. Crifasi is an active duty Field Artillery Warrant Officer. He has over 24 years in service and is currently serving as the Field Artillery Intelligence Officer (FAIO) for the 82nd Airborne Division.

The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of the Army, DoD, or the U.S. Government.

101. TRADOC 2028

[Editor’s Note:  The U.S. Army Training and Doctrine Command (TRADOC) mission is to recruit, train, and educate the Army, driving constant improvement and change to ensure the Total Army can deter, fight, and win on any battlefield now and into the future. Today’s post addresses how TRADOC will need to transform to ensure that it continues to accomplish this mission with the next generation of Soldiers.]

Per The Army Vision:

“The Army of 2028 will be ready to deploy, fight, and win decisively against any adversary, anytime and anywhere, in a joint, multi-domain, high-intensity conflict, while simultaneously deterring others and maintaining its ability to conduct irregular warfare. The Army will do this through the employment of modern manned and unmanned ground combat vehicles, aircraft, sustainment systems, and weapons, coupled with robust combined arms formations and tactics based on a modern warfighting doctrine and centered on exceptional Leaders and Soldiers of unmatched lethality.” – GEN Mark A. Milley, Chief of Staff of the Army, and Dr. Mark T. Esper, Secretary of the Army, June 7, 2018.

In order to achieve this vision, the Army of 2028 needs a TRADOC 2028 that will recruit, organize, and train future Soldiers and Leaders to deploy, fight, and win decisively on any future battlefield. This TRADOC 2028 must account for: 1) the generational differences in learning styles; 2) emerging learning support technologies; and 3) how the Army will need to train and learn to maintain cognitive overmatch on the future battlefield. The Future Operational Environment, characterized by the speeding up of warfare and learning, will challenge the artificial boundaries between institutional and organizational learning and training (e.g., Brigade mobile training teams [MTTs] as a Standard Operating Procedure [SOP]).

Soldiers will be “New Humans” – beyond digital natives, they will embrace embedded and integrated sensors, Artificial Intelligence (AI), mixed reality, and ubiquitous communications. “Old Humans” adapted their learning style to accommodate new technologies (e.g., Classroom XXI). New Humans’ learning style will be a result of these technologies, as they will have been born into a world where they code, hack, rely on intelligent tutors and expert avatars (think the nextgen of Alexa / Siri), and learn increasingly via immersive Augmented / Virtual Reality (AR/VR), gaming, simulations, and YouTube-like tutorials, rather than the desiccated lectures and interminable PowerPoint presentations of yore. TRADOC must ensure that our cadre of instructors know how to use (and more importantly, embrace and effectively incorporate) these new learning technologies into their programs of instruction, until their ranks are filled with “New Humans.”

Delivering training for new, as yet undefined MOSs and skillsets. The Army will have to compete with industry to recruit the requisite talent for Army 2028. These recruits may enter service with fundamental technical skills and knowledge (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer) that may flatten the initial learning curve and free up more time for training “Green” tradecraft. Cyber recruiting will remain critical, as TRADOC will face an increasingly difficult recruiting environment while the Army competes for new skillsets, from training deep learning tools to robotic repair. Initiatives to appeal to gamers (e.g., the Army’s eSports team) will have to be reflected in new approaches to all TRADOC Lines of Effort. AI may assist in identifying potential recruits with the requisite aptitudes.

“TRADOC in your ruck.” Personal AI assistants will bring Commanders and their staffs all of the collected expertise of today’s institutional force. Machine-speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. AI’s ability to quickly sift through and analyze the plethora of input received from across the battlefield, fused with lessons-learned data from thousands of previous engagements, will lessen a commander’s dependence on having had direct personal combat experience in conditions similar to the current fight when making command decisions.

Learning in the future will be personalized and individualized, with targeted learning delivered at the point of need. Training must be customizable and temporally optimized in a style that matches each individual learner, versus a one-size-fits-all approach. These learning environments will need to bring gaming and micro-simulations to individual learners for them to experiment. Similar tools could improve tactical war-gaming and support Commanders’ decision making. This will disrupt the traditional career maps that have defined success for the current generation of Army Leaders. In the future, courses will be much less defined by the rank/grade of the Soldiers attending them.

Geolocation of Training will lose importance. We must stop building and start connecting. Emerging technologies – many accounted for in the Synthetic Training Environment (STE) – will connect experts and Soldiers, creating a seamless training continuum from the training base to home station to the foxhole. Investment should focus on technologies connecting and delivering expertise to the Soldier rather than on brick-and-mortar infrastructure. This vision of TRADOC 2028 will require “Big Data” to effectively deliver personalized, immersive training to our Soldiers and Leaders at the point of need, and it comes with associated privacy issues that will have to be addressed.

In conclusion, TRADOC 2028 sets the conditions to win warfare at machine speed. This speeding up of warfare and learning will challenge the artificial boundaries between institutional and organizational learning and training.

If you enjoyed this post, please also see:

– Mr. Elliott Masie’s presentation on Dynamic Readiness from the Learning in 2050 Conference, co-hosted with Georgetown University’s Center for Security Studies in Washington, DC, on 8-9 August 2018.

– “Top Ten” Takeaways from the Learning in 2050 Conference.

92. Ground Warfare in 2050: How It Might Look

[Editor’s Note: Mad Scientist Laboratory is pleased to review proclaimed Mad Scientist Dr. Alexander Kott’s paper, Ground Warfare in 2050: How It Might Look, published by the US Army Research Laboratory in August 2018. This paper offers readers a technological forecast of autonomous intelligent agents and robots and their potential for employment on future battlefields in the year 2050. In this post, Mad Scientist reviews Dr. Kott’s conclusions and provides links to our previously published posts that support his findings.]

In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future. They are:

•  The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight will feature prominently on future battlefields between autonomous sensors and countermeasures – “a robot-on-robot affair.”

See our related discussions regarding Concealment in the Fundamental Questions Affecting Army Modernization post and Finders vs Hiders in our Timeless Competitions post.

•  The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.

See our descriptions of the future recon / strike complex in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post, and Robotics and Swarms / Semi Autonomous capabilities in our Potential Game Changers post.

These two trends will, in turn, drive the following forecasted developments:

•  Increasing reliance on unmanned systems, “with humans becoming a minority within the overall force, being further dispersed across the battlefield.”

See Mr. Jeff Becker’s post on The Multi-Domain “Dragoon” Squad: A Hyper-enabled Combat System, and Mr. Mike Matson’s Demons in the Tall Grass, both of which envision future tactical units employing greater numbers of autonomous combat systems; as well as Mr. Sam Bendett’s post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward, addressing the contemporary hurdles that one of our strategic competitors must address in operationalizing Unmanned Ground Vehicles.

•  Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”

See our discussion of what warfare at machine-speed looks like in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post.


•  Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.


See our Megacities: Future Challenges and Responses and Integrated Sensors: The Critical Element in Future Complex Environment Warfare posts that address future complex operational environments.


•  The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”

See our discussion of future Power capabilities on our Potential Game Changers handout.

•  “To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”

See our discussion of Multi-Domain Swarming in our Black Swans and Pink Flamingos post.

•  All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”

See MAJ Chris Telley’s post addressing Artificial Intelligence (AI) as an Information Operations tool in his Influence at Machine Speed: The Coming of AI-Powered Propaganda.

•  The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.

See Mr. Ian Sullivan’s discussion of AI vs. AI and how the decisive edge accrues to the combatant with more autonomous decision-action concurrency in his Lessons Learned in Assessing the Operational Environment post.

If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.

Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role, he provides leadership in the development of ARL’s technical strategy, maintains the technical quality of ARL research, and represents ARL to the external technical community. He has published over 80 technical papers and served as the initiator, co-author, and primary editor of over ten books, including most recently Cyber Defense and Situational Awareness (2015), Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).

91. Army Installations: A Whole Flock of Pink Flamingos

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following guest blog post by Dr. Jason R. Dorvee, Mr. Richard G. Kidd IV, and Mr. John R. Thompson.  The Army of the future will need installations that will enable strategic support areas critical to Multi-Domain Operations (MDO).  There are 156 installations that serve as the initial platform of maneuver for Army readiness. Due to increasing connectivity of military bases (and the Soldiers, Airmen, Marines, Sailors, and Civilians who live and work there) to the Internet of Things (IoT), DoD and Army installations will not be the sanctuaries they once were.  These threats are further discussed in Mr. Kidd’s AUSA article last December, entitled “Threats to Posts: Army Must Rethink Base Security.” The following story posits the resulting “what if,” should the Army fail to address installation resilience (to include Soldiers, their families, and surrounding communities) when modernizing the overall force to face Twenty-first Century threats.]

“Army Installations are no longer sanctuaries” — Mr. Richard G. Kidd IV, Deputy Assistant Secretary of the Army (Installations, Energy and Environment), Strategic Integration

Why the most powerful Army the world had ever seen… never showed up to the fight.

The adversary, recognizing that they could not defeat the U.S. Army in a straight-up land fight, kept the Army out of the fight by creating hundreds of friction points around Army installations that disrupted, delayed, and ultimately prevented the timely application of combat power.

The year was 2030. New weapons, doctrine, training, and individual readiness came together to make the US Army the most capable land force in the world. Fully prepared, the Army was ready to fight and win in the complex environments of multi-domain operations. The Army Futures Command generated a series of innovations empowering the Army to overcome the lethargy and distractions of protracted counter-insurgency warfare.


New equipment gave the Army technical and operational overmatch against all strategic competitors, rogue states, and emerging threats.



With virtualized synthetic training environments, the Army — active duty, Reserve, and National Guard — achieved a continuous, high-level state of unit readiness. The Army’s Soldiers attained personalized elite-level fitness following tailored diet and physical fitness training regimens. No adversary stood a chance… after the Army arrived.

In the years leading up to 2030, the U.S. Army enjoyed the status of being the world’s most powerful land force. The United States’ national security was squarely centered on deterrence, with diplomatic advantage deriving from military superiority. It therefore came as a surprise when this superiority was challenged by a land invasion of an allied state in the middle of Eurasia. This would not be the only surprise experienced by the Army.

The overseas contest unfolded along a fairly predictable pattern, one anticipated in multiple war games and exercises. A near-peer competitor engaged in hybrid operations against a partner nation. It first acted to destabilize the country and then, amid the confusion created, invaded. In response to the partner’s request for assistance, the U.S. authorized mobilization and deployment of active and reserve component forces to counter the invasion. The mission was straightforward: retake lost ground, expel the adversary, and restore local government control. This was a task the Army had trained for and was more than capable of successfully executing. It just had to get there. While the partner nation struggled with an actual invasion, a different struggle was taking place in the U.S. homeland. The adversary combined a series of relatively minor cyber, information, and physical disruptions which, taken together, overwhelmed the Army Enterprise. Each act focused on clogging individual systems or processes needed to execute the mobilization and deployment functions.

Cyber-mercenaries, paid in cryptocurrency, attacked the information environment and undermined the communication mechanisms essential for mobilizing the Army. Building on earlier trials in Korea and Europe, a range of false orders was sent to units and individuals. These false orders focused on early entry forces and reserve units needed to open ports and railheads in the United States. Compounding the situation, misleading information was simultaneously placed on social media and in the news indicating that the mobilization had been cancelled. These efforts created so much uncertainty in the minds of individual Soldiers over their place of duty that initial musters for key reserve component units ran at less than 40% strength. Days were added to mobilization timelines as it took time for accurate information to be disseminated and formations to build to full strength.

Focused cyber attention was given to individuals with critical enabling jobs – not just commanders or senior NCOs, but those with access to arms rooms and motor pools. Long-standing efforts to collect PII on these individuals allowed the adversary to compromise credit scores, alter social media presences, and target family members. Soldiers with mission-related demands already on their hands now found themselves unable to use their credit cards, fuel their vehicles, or operate their cell phones. Instead of focusing on getting troops to the battle, they were caught up in an array of false social media messages about themselves or their loved ones. Sorting fact from fiction and regaining their financial functionality competed for their time and attention. Time was lost as Soldiers were distracted and overwhelmed. Arms rooms remained locked, access to the motor pool was delayed, and deployments were disrupted.


The communities surrounding Army installations also came under attack. Systems below the threshold of “critical,” such as street lights, traffic lights, and railroad crossings, were all locked in the “off” position, making road travel hazardous. The dispatch systems of key civilian first responders were flooded with calls reporting false accidents, overwhelming response mechanisms and diverting or delaying much-needed assistance. Soldiers were prevented from getting to their duty stations or transitioning quickly from affected communities. In parallel, an information warfare campaign was waged with the aim of undermining trust between civilian and military personnel. False narratives about spills of hazardous military materials and Soldiers being contaminated by exposure to diseases caused by malfunctioning vaccines added to the chaos.

Key utility, water, and energy control systems on or adjacent to Army installations (understandably “hard” targets in the cyber context) were of such importance that they came under near-constant attack across all their operations, from transmission to customer billing. Only the few installations that had invested in advanced micro-grids, on-site power generation, and storage were able to maintain coherent operations beyond 72 hours. At most installations, backup generators that worked individually when maintenance teams were present for annual servicing cascaded into collective failure when they all operated at once. For the Army, only the 10th and 24th Infantry Divisions were able to deploy, thanks to on-site energy resilience.

Small but significant physical attacks occurred as well. Standard shipping containers, pieces of luggage, and Amazon Prime boxes were “weaponized” as drone transports, with their cargo activated on command. In the key out-loading ports of Savannah and Galveston, shore cranes were disabled by homemade thermite charges placed on gears and cables by such drones using photo recognition and artificial intelligence. Repairing and verifying the safety of these cranes added days to timelines and disrupted shipping schedules. Other drones, “trained” with thousands of photos to fly into the air intakes of jet engines, military or civilian, were also deployed. Only two downed airliners and a few near misses were sufficient to shut down air transportation across the country and initiate a multi-month inspection of all truck stops, docks, airports, and rail yards in an effort to find the “last” possible container. Perhaps the most effective drone attacks dispersed chemical agents into the municipal water supplies of communities adjacent to installations or along lines of communication. The effects of these latter attacks were compounded by shrewd information warfare operations designed to generate mass panic. Roads were clogged with evacuees, out-loading operations were curtailed, and key military assets that should have been supporting the deployment were diverted to provide support to civil authorities.

Cumulatively, these cyber, informational, social, and physical attacks within the homeland and across Army installations and formations took their toll. Every step in the mobilization and deployment processes was disrupted and delayed as individuals and units worked through the fog of friction, confusion, and hysteria that was generated. The Army was gradually overwhelmed and immobilized. In the end, the war for the partner country in Eurasia was lost. The adversary’s attacks on the homeland had given it sufficient time to complete all of its military objectives. The most lethal Army in history was “stuck,” unable to arrive in time. US command authorities now faced a much more difficult military problem and the dilemma of choosing between all-out war and accepting a limited defeat.

There’s a saying from the Northeastern United States about infrastructure. It refers to the tangled mess of roads and paths in New England, specifically Maine. Spoken in the Mainer or “Mainah” accent, it goes:

“You cahn’t ghet thah from hehah.”

That was the US Army in 2030. Ignoring its infrastructure and its vulnerabilities at home, it got caught in a Mainah Scenario. This was a classic “Pink Flamingo”: the US Army knew its homeland operations were a vulnerability, but it failed to prepare.


There were some attempts to recognize the potential problem:

– The National Defense Strategy of 2018 laid out the following:

It is now undeniable that the homeland is no longer a sanctuary. America is a target, whether from terrorists seeking to attack our citizens; malicious cyber activity against personal, commercial, or government infrastructure; or political and information subversion. New threats to commercial and military uses of space are emerging, while increasing digital connectivity of all aspects of life, business, government, and military creates significant vulnerabilities. During conflict, attacks against our critical defense, government, and economic infrastructure must be anticipated.

– Even earlier (in 2015), the Army’s Energy Security and Sustainability Strategy clearly stated with respect to Army installations:

We will seek to use multi-fuel platforms and infrastructure that can provide flexible operations during energy and water shortages at fixed installations and forward locations. If a subsystem fails or is temporarily unavailable, other parts of the system will continue to operate at an acceptable level until full functionality is restored…. Implement integrated and distributed technologies and procedures to ensure critical systems remain operational in the face of disruptive events…. Advance the capability for systems, installations, personnel, and units to respond to unforeseen disruptions and quickly recover while continuing critical activities.

And despite numerous other examples across industry, academia, and the military, only a few locations, installations, or organizations in the Army embraced the notion of resilience for homeland operations. Installations were not considered a true “weapons system” and were left behind in the modernization process, creating a vulnerability that our enemies could exploit.

Installations are a flock of 156 pink flamingos wading around the beach of national security. They are vulnerable to disruption that would have a very real impact on readiness and the timely application of combat power. With the advance of technology applications, these threats are not for the Army of tomorrow — they affect the Army today. Let us not get stranded in a Mainah Scenario.

If you enjoyed this post, please also see Dr. Jason R. Dorvee‘s article entitled “A modern Army needs modern installations.”

Dr. Jason R. Dorvee serves as the U.S. Army Engineer Research and Development Center’s liaison to the Office of the Assistant Secretary of the Army for Installations Energy and the Environment (ASA IE&E), where he is assisting with the Installations of the Future Initiative.
Mr. Richard G. Kidd IV serves as the Deputy Assistant Secretary of the Army for Strategic Integration, leading the strategic effort to examine options for future Army installations and the strategy development, resource requirements, and overall business transformation processes for the Office of the ASA IE&E.
Mr. John R. Thompson serves as the Strategic Planner, Office of the ASA IE&E, Strategic Integration.

90. “The Tenth Man” — War’s Changing Nature in an AI World

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish yet another in our series of “The Tenth Man” posts (read our previous posts here and here). This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging.  The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Future Operational Environment. Today’s post is by guest blogger Dr. Peter Layton, challenging the commonly held belief of the persistent and abiding nature of war.]

There’s a debate underway about the nature of war. Some say it’s immutable; others say hogwash; ironically, both sides quote Clausewitz for support.[i] Interestingly, Secretary of Defense Mattis, once an ‘immutable’ defender, has now declared he’s not sure anymore, given recent Artificial Intelligence (AI) developments.[ii]


At the core of the immutable case is the belief that war has always been violent, chaotic, destructive, and murderous – and will thus always be so. Buried within this is the view that wars are won by infantry occupying territory; as Admiral Wylie opined, “the ultimate determinant in war is a man on the scene with a gun.”[iii] It is the clash of infantry forces that is decisive, with both sides experiencing the deadly violence of war in a manner that would have been comprehensible to Athenian hoplites 2,500 years ago.

Technology, though, really has changed this. Firstly, the lethality of modern weapons has emptied out the battlefield.[iv] What can be ‘seen’ by sensors of diverse types can be targeted by increasingly precise direct and indirect fires. The Russo-Ukraine war in the Donbas hints that in future wars between state-based military forces, tactical units will need to remain unseen to survive and will now ‘occupy’ territory principally through long-range firepower.[v] Secondly, Phillip Meilinger makes a strong case that drone crews firing missiles at insurgents from 3,000 miles away, or navies blockading countries and starving their people into submission, do not experience war the same way those hoplite infantry did years ago.[vi] The experience of violence in some wars has become one-sided, while wars are now increasingly waged against civilians well behind any defensive front lines.


AI may deepen both trends. AI has the potential to sharply enhance the defense, continuing to empty out the battlefield and turning it into a no-man’s zone where automated systems and semi-autonomous devices wage attrition warfare.[vii] If both sides have intelligent machines, war may become simply a case of machines being violent to other machines. In a re-run of World War One, strategic stalemate would seem the likely outcome, with neither side able to win meaningful battlefield victories.[viii]

If so, the second aspect of war’s changing nature comes into play. If a nation’s borders cannot be penetrated and its critical centers of gravity attacked using kinetic means, perhaps non-kinetic means are the offensive style of the future.  Indeed, World War One’s battlefield stalemate was resolved as the naval blockade caused significant civilian starvation and the collapse of the homefront.

The application of information warfare by strategic competitors against the US political system hints at new cyber techniques that AI may greatly enhance.[ix] Instead of destroying another’s capabilities and national infrastructures, they might be exploited and used as vectors to spread confusion and dissent amongst the populace. In this century, starvation may not be necessary to collapse the homefront; AI may offer more efficacious methods. War may no longer be violent and murderous, but it may still be, as Clausewitz wrote, a “true political instrument.”[x] Secretary Mattis may be right; perhaps war’s nature is not immutable but rather ripe for our disruption and innovation.

If you enjoyed this guest post, please also read proclaimed Mad Scientist Dr. Lydia Kostopoulos’ paper addressing this topic, entitled War is Having an Identity Crisis, hosted by our colleagues at Small Wars Journal.

Dr. Peter Layton is a Visiting Fellow at the Griffith Asia Institute, Griffith University. A former RAAF Group Captain, he has extensive defense experience, including in the Pentagon and at National Defense University. He holds a doctorate in grand strategy. He is the author of the book ‘Grand Strategy.’



[i] For the immutable, see Rob Taber (2018), Character vs. Nature of Warfare: What We Can Learn (Again) from Clausewitz, Mad Scientist Laboratory, 27 August 2018. For the mutable, see Phillip S. Meilinger (2010), The Mutable Nature of War, Air & Space Power Journal, Winter 2010, pp. 25-28. For Clausewitz (both sides), see Dr. A.J. Echevarria II (2012), Clausewitz and Contemporary War: The Debate over War’s Nature, 2nd Annual Terrorism & Global Security Conference 2012.

[ii] Aaron Mehta (2018), AI makes Mattis question ‘fundamental’ beliefs about war, C4ISRNET, 17 February 2018.

[iii] J.C. Wylie (1967), Military Strategy: A General Theory of Power Control, New Brunswick, Rutgers University Press, p. 85.

[iv] James J Schneider (1987), The theory of the empty battlefield, The RUSI Journal, Vol. 132, Issue 3, pp. 37-44.

[v] Brandon Morgan (2018), Artillery in Tomorrow’s Battlefield: Maximizing the Power of the King of Battle, Modern War Institute, 25 September 2018.

[vi] The Mutable Nature of War: The Author Replies, Air & Space Power Journal, Summer 2011, pp. 21-22. And also: Phillip S. Meilinger (2010), The Mutable Nature of War, Air & Space Power Journal, Winter 2010, pp. 25-28.

[vii] Peter Layton (2018), Our New Model Robot Armies, Small Wars Journal, 7 August 2018.

[viii] Peter Layton (2018), Algorithm Warfare: Applying Artificial Intelligence to Warfighting, Canberra: Air Power Development Centre, pp. 31-32.

[ix] Renee Diresta (2018), The Information War Is On. Are We Ready For It? , Wired, 3 August.

[x] Carl Von Clausewitz, On War, Edited and Translated by Michael Howard and Peter Paret (1984), Princeton: Princeton University Press, p.87.

85. Benefits, Vulnerabilities, and the Ethics of Soldier Enhancement

[Editor’s Note: The United States Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Bio Convergence and Soldier 2050 Conference with SRI International at their Menlo Park, CA, campus on 8-9 March 2018, where participants discussed the advent of new biotechnologies and the associated benefits, vulnerabilities, and ethics associated with Soldier enhancement for the Army of the Future.  The following post is an excerpt from this conference’s final report.]


Advances in synthetic biology likely will enhance future Soldier performance – speed, strength, endurance, and resilience – but will bring with it vulnerabilities, such as genomic targeting, that can be exploited by an adversary and/or potentially harm the individual undergoing the enhancement.


Emerging synthetic biology tools – e.g., CRISPR, TALEN, and ZFN – present an opportunity to engineer Soldiers’ DNA and enhance their abilities. Bioengineering is becoming easier and cheaper as a bevy of developments reduces biotechnology transaction costs in gene reading, writing, and editing. [1] Due to the ever-increasing speed and lethality of the future battlefield, combatants will need cognitive and physical enhancement to survive and thrive.

Cognitive enhancement could make Soldiers more lethal, more decisive, and perhaps more resilient. Using neurofeedback, a process that allows users to see their brain activity in real time, one can identify ideal brain states and use them to enhance an individual’s mental performance. Through the mapping and presentation of identified expert brain states, novices can rapidly improve their acuity after just a few training sessions. [2] Further, studies are being conducted that explore the possibility of directly emulating those expert brain states with non-invasive EEG caps, which could improve performance almost immediately. [3] Dr. Amy Kruse, the Chief Scientific Officer at the Platypus Institute, referred to this phenomenon as “sitting on a gold mine of brains.”

There is also the potential to change and improve Soldiers’ physical attributes. Scientists can develop drugs and specific dietary plans, and potentially use genetic editing, to improve speed, strength, agility, and endurance.


In order to fully leverage the capability of human performance enhancement, Andrew Herr, CEO of Helicase and an Adjunct Fellow at CNAS, suggested that human performance R&D be moved out of the medical field and become its own research area due to its differing objectives and the convergence between varying technologies.

Soldiers, Airmen, Marines, and Sailors are already trying to enhance themselves with commercial products – often containing unknown or unsafe ingredients – so it is incumbent on the U.S. military to, at the very least, help those who want to improve do so safely.

However, a host of new vulnerabilities at the genetic level accompany this revolutionary leap in human evolution. If adversaries can map the human genome and more thoroughly scan and understand the brain, they can target genomes and brains in the same ways. Soldiers could become incredibly vulnerable at the genomic level, forcing the Army to not only protect Soldiers using body armor and armored vehicles, but also protect their identities, genomes, and physiologies.

Adversaries will exploit all biological enhancements to gain competitive advantage over U.S. forces. Targeted genome editing technology such as CRISPR will enable adversarial threats to employ super-empowered Soldiers on the battlefield and to target specific populations with bioweapons. U.S. adversaries may use these technologies recklessly to achieve short-term gains with no consideration of long-range effects. [4] [5]

Soldier enhancement raises numerous ethical questions, such as the moral acceptability of the Army making permanent enhancements to Soldiers, the responsibility for returning transitioning Soldiers to a “baseline human,” and how a “baseline human” is legally defined in the first place.

Transhumanism H+ symbol by Antonu / Source:  https://commons.wikimedia.org/wiki/File:Transhumanism_h%2B.svg

By altering, enhancing, and augmenting the biology of the human Soldier, the United States Army will potentially enter uncharted ethical territory. Instead of issuing items to Soldiers to complement their physical and cognitive assets, by 2050 the U.S. Army may have the will and the means to issue them increased biological abilities in those areas. The future implications and the limits or thresholds for enhancement have not yet been considered. The military is already willing to correct the vision of certain members (laser eye surgery, for example), a practice that could accurately be called human enhancement, so precisely defining where the threshold lies will be important. It is already known that other countries, and possible adversaries, are willing to cross lines that we are not. Russia, most recently, was banned from competition in the 2018 Winter Olympics for widespread performance-enhancing drug violations that were believed to be supported by the Russian Government. [6] Those drugs violate the spirit of competition in the Olympics, but no such spirit exists in warfare.

Another consideration is whether Soldier enhancements are permanent. By enhancing Soldiers’ faculties, the Army is, in fact, enhancing their lethality or their ability to defeat the enemy. What happens with these enhancements — whether the Army can or should remove them — when a Soldier leaves the Army is an open question. As stated previously, the Army is willing and able to improve eyesight, but it does not revert that eyesight to its original state after the individual has separated. Some possible moral questions surrounding Soldier enhancement include:

• If the Army were to increase a Soldier’s stamina, visual acuity, resistance to disease, and pain tolerance, making them a more lethal warfighter, is it incumbent upon the Army to remove those enhancements?

• If the Soldier later used those enhancements in civilian life for nefarious purposes, would the Army be responsible?

Answers to these legal questions are beyond the scope of this paper, but they should be considered now, before these new technologies become widespread.

Image by Leonardo da Vinci / Source: Flickr

If the Army decides to reverse certain Soldier enhancements, it likely will need to determine the definition of a “baseline human.” This would establish norms for features, traits, and abilities that can be permanently enhanced and which must be removed before leaving service. This would undoubtedly involve both legal and moral challenges.

 

The complete Mad Scientist Bio Convergence and Soldier 2050 Final Report can be read here.

To learn more about the ramifications of Soldier enhancement, please go to:

– Dr. Amy Kruse’s Human 2.0 podcast, hosted by our colleagues at Modern War Institute.

– The Ethics and the Future of War panel discussion, facilitated by LTG Jim Dubik (USA-Ret.) from Day 2 (26 July 2017) of the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University.


[1] Ahmad, Zarah and Stephanie Larson, “The DNA Utility in Military Environments,” slide 5, presented at Mad Scientist Bio Convergence and the Soldier 2050 Conference, 8 March 2018.
[2] Kruse, Amy, “Human 2.0 Upgrading Human Performance,” Slide 12, presented at Mad Scientist Bio Convergence and the Soldier 2050 Conference, 8 March 2018.
[3] https://www.frontiersin.org/articles/10.3389/fnhum.2016.00034/full
[4] https://www.technologyreview.com/the-download/610034/china-is-already-gene-editing-a-lot-of-humans/
[5] https://www.c4isrnet.com/unmanned/2018/05/07/russia-confirms-its-armed-robot-tank-was-in-syria/
[6] https://www.washingtonpost.com/sports/russia-banned-from-2018-olympics-following-doping-allegations/2017/12/05/9ab49790-d9d4-11e7-b859-fb0995360725_story.html?noredirect=on&utm_term=.d12db68f42d1

82. Bias and Machine Learning

[Editor’s Note:  Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]

“We thought that we had the answers, it was the questions we had wrong.” – Bono, U2

Source: www.vpnsrus.com via flickr

As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is exactly that: an ideal. These algorithms have underlying biases embedded in their coding, imparted (consciously or unconsciously) by their human programmers, and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi, Boston University, recently described algorithms as incapable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
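
The point that data, not code, is the ultimate determining factor can be made concrete with a minimal sketch. The dataset, group names, and labels below are entirely hypothetical: a naive majority-vote “classifier” trained on historically skewed decisions reproduces the skew, even though nothing in its code refers to any group preference.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: (group, past decision).
# The records are otherwise identical -- the only signal is historical bias.
history = [("A", "approve")] * 90 + [("A", "deny")] * 10 \
        + [("B", "approve")] * 40 + [("B", "deny")] * 60

def train_majority_per_group(data):
    """A naive 'model': predict the most common past label for each group.
    Nothing in this code expresses a preference between groups -- the bias
    it learns comes entirely from the data it is trained on."""
    votes = {}
    for group, label in data:
        votes.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = train_majority_per_group(history)
print(model)  # the skewed history yields {'A': 'approve', 'B': 'deny'}
```

Rewriting the training function would change nothing here; the disparity can only be addressed by auditing and correcting the data itself, or by constraining the model’s outputs, which is precisely why the questions that follow focus on which biases we are willing to accept.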

Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).

“It is difficult to make predictions, particularly about the future.” This quote has been attributed to everyone from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.

The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.

1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that discriminates against specific genders, ethnicities, races, or ages?

2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?

3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?

4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?

For additional information impacting on this important discussion, please see the following:

An Appropriate Level of Trust… blog post

Ethical Dilemmas of Future Warfare blog post

Ethics and the Future of War panel discussion video

81. “Maddest” Guest Blogger!

[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!

Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!

In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six-month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the latter half of FY18. In recognition of his achievement, Sam will receive much-coveted Mad Scientist swag.

While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped, developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation. Russia is learning how to integrate robotic lethal ground combat systems….

Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]

Russia’s Forpost UAV (licensed copy of IAI Searcher II) in Khmeimim, Syria; Source: https://t.co/PcNgJ811O8

Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.

Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”

Uran-9 combat UGV at Victory Day 2018 Parade in Red Square; Source: independent.co.uk

But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine if they can correspond to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create the demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs. The MOD is also running conferences such as the annual “Robotization of the Armed Forces” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.

Uran-9 Combat UGV, Source: nationalinterest.org

This brings us to one of the more interesting developments in Russian UGVs. Then Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.

Just as importantly, it was designed to operate at a distance of up to three kilometers (about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for machines that must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled, and it was definitely designed to survive battlefield realities….

Uran-9; Source: Defence-Blog.com

However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:

• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.

• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.

• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.

• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.

• The OCH-4 optical system did not allow for adequate detection of the adversary’s optical and targeting devices and created multiple instances of interference in the test range’s ground and airspace.

Uran-9 undergoing testing; Source: YouTube

• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which largely defeated its purpose as a combat vehicle.

• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.

On the one hand, these many failures are a sign that this much-discussed and much-advertised machine needs significant upgrades, further testing, and perhaps even a redesign before it is put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.

However, the MOD has been testing the Uran-9 for a while now, showing videos of the machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria was the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly released testing video probably foretold some of the Syrian glitches – in it, the Uran-9 is shown firing its machine guns while moving, but its cannon is fired only when the vehicle is stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles, not the kind of complex terrain or dense urban environment encountered in Syria. While today’s and future battlefields will vary greatly, from open spaces to megacities, a vehicle like the Uran-9 would presumably be expected to perform in all conditions, unless, of course, the Syrian tests lead to limits on its use in future combat.

Russian Soratnik UGV

On the other hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological aspects that are ubiquitous worldwide at this point in the global development of similar unmanned systems:

• Low level of current UGV autonomy;

• Low level of automation of command and control processes of UGV management, including repairs and maintenance;

• Low communication range, and;

• Problems associated with “friend or foe” target identification.

Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:

• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.

• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).

• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.

• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.

The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:

• Creating secure communication channels;

• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;

• Developing miniaturized and effective ISR components;

• Integrating automated command and control systems, and;

• Better optics, electronics and data processing systems.

According to Anisimov’s report, the overall arc of Russian UGV and unmanned military systems development is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of greater autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as of operating autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.

Source: rusmilitary.wordpress.com

Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a sign for developers of other Russian unmanned ground vehicles – like Soratnik, Vihr, and Nerehta — since it displays the full range of deficiencies that take place outside of well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment — this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during Uran-9 tests exposed a serious combat weakness.

It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, if it can perform its mission autonomously in a rapidly changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working toward such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in one-time attack mode against fortified adversary positions or firing points track closely with how Western military analysts think such weapons could be used in combat.

Source: Nikolai Novichkov / Orbis Defense

The Uran-9 is still a test bed, and much has to take place before it can be successfully integrated into the current Russian concept of operations. We can expect more eye-opening “lessons learned” from the potential deployment of this and other UGVs in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.

To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.

Source: Block13 by djahal; DeviantArt.com

For another perspective on Russian military innovation, please read Mr. Ray Finch’s guest post “The Tenth Man” — Russia’s Era Military Innovation Technopark.

Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference.  The views expressed here are his own.

79. Character vs. Nature of Warfare: What We Can Learn (Again) from Clausewitz

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]

No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.

Source:  Office of the Director of National Intelligence

Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]

Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”

Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.

The nature of something is the basic make up of that thing. It is, at core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl Von Clausewitz, what his views are on the matter.

Source: Tetsell’s Blog. https://tetsell.wordpress.com/2014/10/13/clausewitz/

He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.

The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past, and remains unfulfilled. The chaos of war, its fog, friction, and chance will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to age-old practices of denial, deception, and camouflage.

 

The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.

Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.

Drone Wars are Coming / Source: USNI Proceedings, July 2017, Vol. 143 / 7 /  1,373

Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.

Source: Lockheed Martin

Not to mention the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is what tools they use to execute that mission.

To learn more about the changing character of warfare:

– Read the TRADOC G-2’s The Operational Environment and the Changing Character of Warfare paper.

– Watch The Changing Character of Future Warfare video.

Additionally, please note that the content from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018, is now posted and available for your review:

– Read the “Top Ten” Takeaways from the Learning in 2050 Conference.

– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.

– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.

LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.

Note:  The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916.  Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War.  (Source:  Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)

_______________________________________________________

[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6.
[II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, machine waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/.
[III] Ibid.
[IV] Carl Von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85.
[V] Ibid, 87.
[VI] Ibid.
[VII] Ibid, 86.
[VIII] John Allen, Amir Hussain, “On Hyper-War,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.