191. Competition in 2035: Anticipating Chinese Exploitation of Operational Environments

[Editor’s Note:  In today’s post, Mad Scientist Laboratory explores China’s whole-of-nation approach to exploiting operational environments, synchronizing government, military, and industry activities to change geostrategic power paradigms via competition in 2035. Excerpted from products previously developed and published by the TRADOC G-2’s Operational Environment and Threat Analysis Directorate (see links below), this post describes China’s approach to exploitation and identifies the implications for the U.S. Army — Enjoy!]

The Operational Environment is envisioned as a continuum, divided into two eras: the Era of Accelerated Human Progress (now through 2035) and the Era of Contested Equality (2035 through 2050). This latter era is marked by significant breakthroughs in technology and convergences in terms of capabilities, which lead to significant changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes which at the end of this timeframe may even challenge the very nature of warfare itself. In this era, no one actor is likely to have any long-term strategic or technological advantage, with aggregate power between the U.S. and its strategic competitors being equivalent, but not necessarily symmetric. Prevailing in this period will depend on an ability to synchronize multi-domain capabilities against an artificial intelligence-enhanced adversary with an overarching capability to visualize and understand the battlespace at even greater ranges and velocities. Equally important will be controlling information and the narrative surrounding the conflict. Adversaries will adopt sophisticated information operations and narrative strategies to change the context of the conflict and thus defeat U.S. political will.

The future strategic environment will be characterized by a persistent state of competition where global competitors seek to exploit the conditions of operational environments to gain advantage. Adversaries understand that the application of any or all elements of national power in competition just below the threshold of armed conflict is an effective strategy against the U.S.

Chinese DF-17 carrying the DF-ZF Hypersonic Glide Vehicle / Source: Bill Bostock, Business Insider Australia, via Wikimedia Commons

China is rapidly modernizing its armed forces and developing new approaches to warfare. Beijing has invested significant resources into research and development of a wide array of advanced technologies. Coupled with its time-honored practice of reverse engineering technologies or systems it purchases or acquires through espionage, this effort likely will allow China to surpass Russia as our most capable threat sometime around 2030.

China’s Approach to Exploitation

China’s whole-of-nation approach, which involves synchronization of actions across government, military, and industry, will facilitate exploitation of operational environments and enable it to gain global influence through economic exploitation.

China will leverage the international system to advance its own interests while attempting to constrain others, including the U.S.

Preferred Conditions and Methods

The following conditions and methods are conducive to exploitation by China, enabling it to shape the strategic environment in 2035:

    • Infrastructure Capacity Challenges:  China targets undeveloped and fragile environments where its capital investments, technology, and human capital can produce financial gains and generate political influence.
    • Interconnected Economies:  China looks for partners and opportunities to become a significant stakeholder in a wide variety of economies in order to capitalize on its investments as well as generate political influence.
    • Specialized Economies:  China looks for opportunities to partner with specialized markets and leverage their vulnerabilities for gain.
    • Technology Access Gaps:  China targets areas where its capital investments in technology provide partners with key resources and competitive advantages by filling technology gaps.

Implications for the U.S. Army:

    The Chinese People’s Liberation Army (PLA) deployed armored medical vehicles and personnel to Germany for the Combined Aid 2019 Joint Exercise with the Bundeswehr this past summer.

    • Traditional Army threat paradigms may not be sufficient for competition.

    • The Army could be drawn into unanticipated escalation as a result of China’s activities during the competition phase.
    • Army military partnerships will likely be undermined by China in 2035.
    • Army operations and engagements will be increasingly impacted by the pervasiveness of Chinese goods, technology, infrastructure, and systems.

If you enjoyed this post, please see the original paper and associated infographic of the same title, both by the TRADOC G-2’s Operational Environment and Threat Analysis Directorate and hosted on their All Partners Access Network (APAN) site

… and read the following MadSci Laboratory blog posts:

A view of the Future: 2035-2050

China’s Drive for Innovation Dominance and Quantum Surprise on the Battlefield?, by Elsa Kania

A Closer Look at China’s Strategies for Innovation: Questioning True Intent, by Cindy Hurst

Critical Projection: Insights from China’s Science Fiction, by Lt Col Dave Calder

190. Weaponized Information: One Possible Vignette

[Editor’s Note:  The Information Environment (IE) is the point of departure for all events across the Multi-Domain Operations (MDO) spectrum. It’s a unique space that demands our understanding, as the Internet of Things (IoT) and hyper-connectivity have democratized accessibility, extended global reach, and amplified the effects of weaponized information. Our strategic competitors and adversaries have been quick to grasp and employ it to challenge our traditional advantages and exploit our weaknesses.

    • Our near-peers confront us globally, converging IE capabilities with hybrid strategies to expand the battlefield across all domains and create hemispheric threats challenging us from home station installations (i.e., the Strategic Support Area) to the Close Area fight.
    • Democratization of weaponized information empowers regional hegemons and non-state actors, enabling them to target the U.S. and our allies and achieve effects at a fraction of the cost of conventional weapons, without risking armed conflict.
    • The IE enables our adversaries to frame the conditions of future competition and/or escalation to armed conflict on their own terms.

Today’s post imagines one such vignette, with Russia exploiting the IE to successfully out-compete us and accomplish their political objectives, without expending a single bullet!]

Ethnic Russian minorities’ agitation against their respective governments in Estonia, Lithuania, and Latvia spikes. Simultaneously, the Russian Government ratchets up tensions with inflammatory statements of support for these ethnic Russian minorities in the Baltic States; coordinated movements and exercises by Russian ground, naval, and air forces adjacent to the region; and clandestine support to ethnic Russians in these States. The Russian Government launches a covert campaign to shape people’s views about the threats against the Russian diaspora. More than 200,000 Twitter accounts send 3.6 million tweets trending #protectRussianseverywhere. This sprawling Russian disinformation campaign is focused on building internal support for the Russian President and a possible military action. The U.S. and NATO respond…

The 2nd Cav Regt is placed on alert; as it prepares to roll out of garrison for Poland, several videos surface across social media, purportedly showing the sexual assault of several underage German nationals by U.S. personnel. These disturbingly graphic deepfakes appear to implicate key Leaders within the Regiment. German political and legal authorities call for an investigation and host nation protests erupt outside the gates of Rose Barracks, Vilseck, disrupting the unit’s deployment.

Simultaneously, in units comprising the initial Force Package earmarked to deploy to Europe, key personnel (and their dependents) are targeted, distracting troops from their deployment preparations and disrupting unit cohesion:

    • Social media accounts are hacked/hijacked, with false threats by dependents to execute mass/school shootings, accusations of sexual abuse, hate speech posts by Leaders about their minority troops, and revelations of adulterous affairs between unit spouses.
    • Bank accounts are hacked: some are credited with excessive amounts of cash followed by faux “See Something, Say Something” hotline accusations being made about criminal and espionage activities; while others are zeroed out, disrupting families’ abilities to pay bills.

Russia’s GRU (Military Intelligence) employs AI Generative Adversarial Networks (GANs) to create fake persona injects that mimic select U.S. Active Army, ARNG, and USAR commanders making disparaging statements about their confidence in our allies’ forces, the legitimacy of the mission, and their faith in our political leadership. Sowing these injects across unit social media accounts, Russian Information Warfare specialists seed doubt and erode trust in the chain of command amongst a percentage of susceptible Soldiers, creating further friction in deployment preparations.

As these units load at railheads or begin their road march towards their respective ports of embarkation, attacks on the Supervisory Control and Data Acquisition (SCADA) systems controlling critical rail, road, port, and airfield infrastructure snarl rail lines, switching yards, and crossings; create bottlenecks at key traffic intersections; and spoof navigation systems to cause sealift asset collisions and groundings at key maritime chokepoints. The fly-by-wire avionics of a departing C-17 are hacked, causing a crash with the loss of all 134 Soldiers onboard. All C-17s are grounded, pending an investigation.

Salvos of personalized, “direct inject” psychological warfare attacks are launched against Soldiers via immersive media (Augmented, Virtual, and Mixed Reality; 360° Video/Gaming), targeting them while they await deployment and are in-transit to Theater. Similarly, attacks are vectored at spouses, parents, and dependents, with horrifying imagery of their loved ones’ torn and maimed bodies on Artificial Intelligence-generated battlefields (based on facial imagery scraped from social media accounts).

Multi-Domain Operations has improved Jointness but exacerbated problems with “the communications requirements that constitute the nation’s warfighting Achilles heel.” As units arrive in Theater, the seams within and between the interconnected, federated U.S. and NATO tactical networks (Intelligence, Surveillance, and Reconnaissance; Fires; Sustainment; and Command and Control) that facilitate partner-to-partner data exchanges are exploited with precisely targeted false injects, sowing doubt and distrust in the Multi-Domain Common Operating Picture across the alliance. Spoofing of these systems leads to accidental air defense engagements, resulting in Blue-on-Blue fratricide or the downing of a commercial airliner, with additional civilian deaths on the ground from spent ordnance, providing more opportunities for Russian Information Operations to spread acrimony within the alliance and create dissent in public opinion back home.

With the flow of U.S. forces into the Baltic Nations, real instances of ethnic Russians’ livelihoods being disrupted (e.g., accidental destruction of livestock and crops, the choking off of main routes to market, and damage to essential services [water, electricity, sewerage]) by maneuver units on exercise are captured on video and enhanced digitally to exacerbate their cumulative effects. Proliferated across the net via bots, these instances further stoke anti-Baltic / anti-U.S. opinion amongst Russian-sympathetic and non-aligned populations alike.

Following years of scraping global social media accounts and building profiles across the full political spectrum, artificial influencers are unleashed on-line that effectively target each of these profiles within the U.S. and allied civilian populations. Ostensibly engaging populations via key “knee-jerk” on-line affinities (e.g., pro-gun, pro-choice, etc.), these artificial influencers, ever so subtly, begin to shift public opinion to embrace a sympathetic position on the rights of the Russian diaspora to greater autonomy in the Baltic States.

The release of deepfake videos showing Baltic security forces massacring ethnic Russians creates further division and causes some NATO partners to hesitate, question, and withhold their support, as required under Article 5. The alliance is rent asunder — Checkmate!

Many of the capabilities described in this vignette are available now. Threats in the IE space will only increase in verisimilitude with augmented reality and multisensory content interaction. Envisioning what this Bot 2.0 Competition will look like is essential to building whole-of-government countermeasures and instilling resiliency in our population and military formations.

The Mad Scientist Initiative will continue to explore the significance of the IE to Competition and Conflict and information weaponization throughout our FY20 events — stay tuned to the MadSci Laboratory for more information. In anticipation of this, we have published The Information Environment:  Competition and Conflict anthology, a collection of previously published blog posts that serves as a primer on this topic and examines the convergence of technologies that facilitates information weaponization — Enjoy!

189. What We are Learning about the Operational Environment

[Editor’s Note: The U.S. Army Training and Doctrine Command and Army Futures Command recently updated and published The Operational Environment and the Changing Character of Warfare, and the accompanying Potential Game Changers handout, to reflect what we are learning about the Operational Environment (OE). These updates incorporate insights gleaned from our recent Mad Scientist conferences and guest blog post submissions, as well as direct input received from you all — our Mad Scientist Community of Action — thank you! Today’s blog post provides a synopsis of these latest updates — Enjoy!]

At some point during The Era of Accelerated Human Progress (now through 2035), and really for the first time since the Second World War, the United States will likely face a true strategic competitor with the ability to operate across multiple domains, the capability to deny domains to U.S. forces, and certain technological advantages over U.S. forces. This challenge is further compounded by our reliance on coalition warfare with allies who might not be able or willing to modernize at the same pace as the U.S.

As the world becomes further digitized, states will share their strategic environments with networked societies which could pose a threat by circumventing governments that are unresponsive to their citizens’ needs. These online organizations are capable of gaining power, influence, and capital to a degree that challenges traditional nation-states. Many states will face challenges from insurgents and global identity networks – ethnic, religious, regional, social, or economic – whose members may feel a stronger affinity to their online network than to their nationality, which could result in them either resisting state authority or ignoring it altogether.

The revolution in connected devices and virtual power projection will increase the potential for adversaries to target our installations. Hyper-connectivity increases the attack surface for cyber-attacks and the access to publicly available information on our Soldiers and their families, making personalized warfare and the use of psychological attacks and deepfakes likely. A force deploying to a combat zone will remain vulnerable from the Strategic Support Area – including individual Soldiers’ personal residences, home station installations, and ports of embarkation – all the way forward to the Close Area fight during its entire deployment.

The balkanization of the internet into multiple national “intranets” could provide fewer opportunities for influence platforms and impact cyber operations. The growing presence of fake news, data, and information, coupled with deepfakes and hyper-connectivity, changes the nature of information operations. The convergence of deepfakes, AI-generated bodies and faces, and AI writing technologies – that appear authentic – are corrosive to trust between governments and their populations, and present the potential for devastating impact on nation-states’ will to compete and fight.

Artificial Intelligence (AI) may be the most disruptive technology of our time: much of today’s “thought” is artificial, vice human. However, certain operational environments are data-scarce. Missing inputs caused by data gaps inhibit a narrow AI’s ability to provide the envisioned benefits in assessing the OE, limiting military application. Decision cycle times will decrease with AI-enabled intelligence systems conducting collection, collation, and analysis of battlefield information at machine speed, freeing up warfighters and commanders to do what they excel at – fight and make decisions. AI will become critical in processing and sustaining a clear common operating picture in this data-rich environment.

Passive sensing, especially when combined with artificial intelligence and big-data techniques, may routinely outperform active sensors, leading to a counter-reconnaissance fight between autonomous sensors and countermeasures – “a robot-on-robot affair.” These capabilities will be augmented by increasingly sophisticated civilian capabilities, where commercial imagery services, a robust and mature Internet of Things, and near unlimited processing power generate a battlespace that is more transparent than ever before. This transparency may result in the demise of strategic and operational deception and surprise.

The proliferation of intelligent munitions will enable strikers to engage targets at greater distances, collaborate in teams to seek out and destroy designated targets, and defeat armored and other hardened targets, as well as defiladed and entrenched targets.

Unmanned systems, including advanced battlefield robotic systems acting both autonomously and as part of a wider trend in man-machine teaming, will account for a significant percentage of a combatant force. Swarms of small, cheap, scalable, and disposable unmanned systems will be used both offensively and defensively, creating targeting dilemmas for sophisticated, expensive defensive systems. Swarming systems on the future battlefield will include not only unmanned aerial systems (UAS) but also swarms across multiple domains with self-organizing, self-reconstituting, autonomous, ground, maritime (sub and surface), and subterranean unmanned systems. Advanced robotic vehicles could serve as mobile power generation plants and charging stations, while highly dexterous ground robots with legs and limbs could negotiate complex terrain allowing humans access to places otherwise denied. This raises the question: Is using a human Soldier in a dangerous situation ethical when there are robots available?

Biotechnology will see major advances, with many chemical and materials industries being replaced or augmented by a “bio-based economy” in which precision genetic engineering allows for bulk chemical production. Individualized genetics enable precise performance enhancements for cognition, health, longevity, and fitness. The low cost and low expertise entry points into genomic editing, bioweapon production, and human enhancements will enable explorations by state, non-state, criminal, and terrorist organizations. Competitors may not adopt the same legal regulations or ethics for enhancement as the U.S., causing asymmetry between the U.S. and those choosing to operate below our defined legal and ethical thresholds.

Space is becoming an increasingly congested, commercialized, democratized, and contested domain. A maneuver Brigade Combat Team has over 2,500 pieces of equipment dependent on space-based assets for PNT, and Low Earth Orbit is cluttered with thousands of satellites and pieces of debris.

Shifting demographics, such as youth bulges in Africa and aging populations of traditional allies and competitors, will threaten economic and political stability. These factors will be exacerbated by a changing climate, which likely will become a direct security threat. Risks to U.S. security include extreme weather impacting installations, increased resource scarcity and food insecurity, climate migration increasing the number of refugees and internally displaced peoples, and the Arctic as a new sphere of competition.

Our understanding of technological innovations through 2035 has broadened in several areas:

    • Robotics: The advent of legged locomotion and robotic dexterity, with robots becoming less vehicle-like, able to replicate animal or human characteristics.
    • Quantum Computing: The expansion of Quantum applications across the full spectrum of sciences will affect Positioning, Navigation, and Timing (PNT), especially relevant in GPS-denied environments, and yield improved sensors and imaging.
    • Space: Presence in this domain will expand to over 70 nations.
    • Missiles: The addition of maneuverability to hypersonic capabilities presents further challenges in the development of effective countermeasure systems.

We also refined our descriptions of several disruptive technologies anticipated through 2050:

    • Power: Proliferation of electric/battery-powered vehicles, laser charging, small modular advanced nuclear power delivering electricity via directed energy (DE) and electric transportation, and the harnessing of Thermionic power — harvesting energy at the nano-level — which is scalable to megawatts.
    • Medical Advances: The ability to produce artificial cells on demand and the advent of tailored vaccines.
    • Insensitive Munitions: The development of multifunctional munitions, tailorable to specific mission sets and functions.
    • Information Environment: Instantaneous recall, sensor-saturated environment, unmanned asset intelligence collection, algorithmic processing of high volumes of information, and virtual and augmented reality.

We welcome your input on these or any additional aspects of the OE and the changing character of warfare — What are we missing?

If you enjoyed this post, please also see:


188. “Tenth Man” — Challenging our Assumptions about the Future Force

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging. We offer it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). Today’s post examines a foundational assumption about the Future Force by challenging it, reviewing the associated implications, and identifying potential signals and/or indicators of change. Read on!]

Assumption: The United States will maintain sufficient Defense spending as a percentage of its GDP to modernize the Multi-Domain Operations (MDO) force. [Related MDO Baseline Assumption – “b. The Army will adjust to fiscal constraints and have resources sufficient to preserve the balance of readiness, force structure, and modernization necessary to meet the demands of the national defense strategy in the mid-to far-term (2020-2040),” TRADOC Pam 525-3-1, The U.S. Army in Multi-Domain Operations 2028, p. A-1.]

Source: U.S. Census Bureau

Over the past decades, the defense budget has varied but remained sufficient to accomplish the missions of the U.S. military. However, a graying population with fewer workers and longer life spans will put new demands on both the mandatory and discretionary portions of the federal budget. These stressors may indicate that the U.S. is following the same path as Europe and Japan. By 2038, it is projected that 21% of Americans will be 65 years old or older.1 Budget demand tied to an aging population will threaten planned DoD funding levels.

In the near-term (2019-2023), total costs in 2019 dollars are projected to remain the same. In recent years, the DoD underestimated the costs of acquiring weapons systems and maintaining compensation levels. Taking these factors into account, a 3% increase over the FY 2019 DoD budget is needed in this timeframe. Similarly, the Congressional Budget Office (CBO) estimates that costs will steadily climb after 2023, with a projected base budget of approximately $735 billion in 2033, an 11% increase over ten years. This is due to rising compensation rates, growing costs of operations and maintenance, and the purchasing of new weapons systems.2 These budgetary pressures are connected to several stated and hidden assumptions:

    • An all-volunteer force will remain viable [Related MDO Baseline Assumption – “a. The U.S. Army will remain a professional, all volunteer force, relying on all components of the Army to meet future commitments.”],
    • Materiel solutions’ associated technologies will have matured to the requisite Technology Readiness Levels (TRLs), and
    • The U.S. will have the industrial ability to reconstitute the MDO force following “America’s First Battle.”
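As a quick sanity check on the CBO figures above, the 11% rise over ten years can be annualized, and the projection's implied starting base backed out. (These derived numbers are our own arithmetic, not figures stated in the CBO report.)

```python
# An 11% total increase in the DoD base budget over the ten years ending in 2033
# (CBO projection, in constant 2019 dollars).
total_growth = 0.11

# Annualized real growth rate implied by that ten-year increase.
annual_rate = (1 + total_growth) ** (1 / 10) - 1
print(f"{annual_rate:.2%}")  # → 1.05% per year

# Starting base implied by a ~$735 billion 2033 base budget.
implied_start = 735 / (1 + total_growth)
print(f"${implied_start:.0f} billion")  # → $662 billion
```

A roughly 1% real annual increase may look modest, but it compounds on top of the demographic pressures on the mandatory side of the budget described above.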

Implications: If these assumptions prove false, the manned and equipped force of the future will look significantly different than the envisioned MDO force. A smaller DoD budget could mean a smaller fielded Army equipped with less exquisite weapons systems. A smaller active force might also drive changes to Multi-Domain Operations and how the Army describes the way it will fight in the future.

Signpost / Indicators of Change:

    • 2008-type “Great Recession”
    • Return of budget control and sequestration
    • Increased domestic funding for:
      • Universal Healthcare
      • Universal College
      • Social Security Fix
    • Change in International Monetary Environment (higher interest rates for borrowing)

If you enjoyed this alternative view on force modernization, please also see the following posts:

  • Disclaimer: The views expressed in this blog post do not reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

1 “The long-term impact of aging on the federal budget,” by Louise Sheiner, Brookings, 11 January 2018. https://www.brookings.edu/research/the-long-term-impact-of-aging-on-the-federal-budget/

2 “Long-Term Implications of the 2019 Future Years Defense Program,” Congressional Budget Office, 13 February 2019. https://www.cbo.gov/publication/54948

187. S&T Isn’t an Enabler, It’s the Main Effort!

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s post by guest blogger Chris Kramer, advocating for a whole-of-government approach to re-invigorate and build resiliency into our national Science and Technology (S&T) infrastructure and human capital. His prescription consists of eight actions designed to better prepare the United States population and Department of Defense for Competition and Conflict in the future — Read on!]

The weakest link in a chain determines the chain’s strength. Right now, the weakest links in the extended American chain of national defense capabilities are not its physical weapons of war; they are the nation’s diminishing number of students pursuing STEM degrees.1 We must put as much energy into preparing our minds and our culture as we put into modernizing our weapons and equipment.

Edison did not invent the modern light bulb over breakfast one morning following a breathtaking single insight into the interplay of matter and electricity. The technologies all around us, from light bulbs to communications and sensors to computing, biological, and materials science, all required massive amounts of time and effort to reach their current state. They required people with the intellectual depth and breadth to design and build them. All of this took time, money, and much intellectual effort.

We all know people with “the knack” for fixing or building things. Designing a better hammer or screwdriver takes a certain amount of innate skill and practical experience, but one does not necessarily need higher education to create or perfect such relatively simple items, only an inborn capacity to visualize engineering at the practical level and the ability to translate that vision into a hand-held reality.

One does need copious amounts of specialized higher education and training in Science and Technology (S&T) to design and build advanced equipment and realize technological innovation, however. This requirement will only increase as we continue to gain new knowledge about how to make matter and energy do our bidding and how to harness “quantum” in all its various applications.

One also needs this higher education and training in order to cross increasingly higher thresholds of understanding and insight in the fields we need to pursue. As people rise from students to seasoned practitioners and continue their research and professional development, they will integrate the masses of knowledge and experience they and their peers have accrued ever more rapidly, thoroughly, and intuitively. They will use this intellectual gestalt to make deeper and more penetrating insights into their own and possibly other fields. These insights will unlock ever more of the potential of S&T and will result in ever more advanced technologies and applications of those technologies. Facilitating this paradigm on scales both broad and deep must be a national focus area if we are to maintain our strength and security across all the elements of national power.

No other activity in human history except the pursuit of science has led to actual advances, and many advances were in support of the security of a tribe or nation. Note that science includes not only advancements in the hardware and software of implements for both destruction and deterrence, but also the sciences of diplomacy, economics, political and military affairs, and indeed every other endeavor that affects or informs the dynamics of human life and group interaction.

The last time our nation awoke to the national significance of STEM was on 4 October 1957, when the Soviet Union launched Sputnik 1 and the collective American heart stopped. Rightly seeing this as a leading indicator of a potential existential threat to America, the U.S. went into a frenzy of scientific revivalism. This flurry of activity and resourcing led to a massive improvement in the national S&T infrastructure and capability, and to uncountable technological spinoffs from the space and other programs which benefited human well-being and improved our defense capabilities.

Science and history prove that keeping ahead of the competition is essential to individual and group survival. Our adversaries have inspected both our vulnerabilities and our strengths and are designing, building, and applying hybrid strategies and capabilities to exploit the former, while avoiding the latter. The Operational Environment is now ubiquitous and our adversaries’ work is often unseen. No longer is a war fought “in faraway places against people of whom we know nothing,” to modify Neville Chamberlain’s quote from 1938; the fight is here. Battle lines are no longer linear and on foreign soil; our adversaries operate in the light of day through cyber lines of communication and are even now reaching into governments, businesses, homes, and devices right here in our homeland. Smart bombs can attack single buildings, but weaponized information can facilitate personalized warfare, precisely targeting specific devices and attacking individual brains.

As has been much discussed lately, the Age of the DeepFake, in which the very nature of reality is thrown into question, is upon us.

It costs large amounts of blood and treasure to win wars, but even more to lose them.

All of the aforementioned circumstances mean that every citizen is now a de facto combatant, no matter how unwelcome this thought may be. Consequently:

– Every citizen must begin to think and respond like a combatant.

– Every citizen must be a sensor, able to apply critical thinking, scientifically informed education, and rationalism to their daily lives and to the information they encounter.

With the proliferation of potential threats and attacks facing the average citizen, we as a nation need to be brought to a baseline of competence in science, rationalism, and critical thinking, and, frankly, to an understanding that we now exist in a time and place in which individual action and inaction guided by misinformation can harm large numbers of people and resources.

Adaptation is critical to survival. Failure to adapt means losing one’s place in the hierarchy, assimilating into someone else’s culture, or going extinct. History shows that no empire or nation is immune to the corrosive effects of insular thinking, hubris, and the failure to adapt. All of the empires throughout history upon which “the sun never set” have seen their power and reach shrink or vanish. If we wish to avoid this fate, we must adapt our weapons of war, our human capital, and our society at the same time. And we must do it quickly, because both “the rate of change of the rate of change” and the scope and reach of our adversaries’ capabilities and approaches are increasing.

Here are eight actions we should take:

1. Aggressively identify and groom high-achieving/high-potential students while they are in middle and high school. Facilitate their movement into higher education, and then on into the professional or academic worlds. Make them the foundation for the next great generation of (government?) scientists and engineers. Some of these people will be born in extremely depressed socioeconomic areas; we must leverage schools and communities across the nation to identify these students as well.

2. Once on board and contributing, reward and recognize these people publicly for their work and contributions. Work to make intellectual achievement as publicly desirable as possible. Our future S&T and cyber warriors are going to help us win these fights, especially now that the field of battle has been extended through virtual tentacles into our own homes.

3. Elevate science as a national asset and a national treasure. As long as science is denied, it will be marginalized. As long as science is marginalized, it will be rejected and unfunded or inadequately funded. And as long as it is not properly resourced, we will suffer the harmful effects of that neglect.

4. Every S&T professional and every S&T organization should lead or support activities, from organized, large-scale programs down to individual efforts, to promote S&T and its acceptance. Do outreach with kids and local groups. Guest-lecture at schools from the elementary to graduate levels. Volunteer to support local S&T and scholastic improvement events. Find the people who will be the next generation of patent producers, researchers and developers, and discoverers of new technologies and new solutions. Bring them onto the right teams and help them to pay this forward until it is the organization’s business model.

5. Work with the Department of Education and school boards to introduce critical thinking as a stand-alone skill and subject no later than the middle school years, then fold it into all subsequent student evaluation.

6. Require that objective, peer-reviewed science be the deciding factor in all government decisions.

7. Implement the research that has already been done on these topics. Five minutes of research showed there is sufficient work2 already done3 on the STEM education issue to get a great start.

8. Finally, have DoD work with the federal, state, and local governments to identify the threats and the actions all citizens need to take to be part of the collective national defense, then educate the population on them and hold periodic drills to ensure horizontal and vertical readiness.

If you enjoyed this post…

  • Learn how Finland has implemented a whole-of-government approach to counter weaponized information in:

Why Is Finland Able to Fend Off Putin’s Information War?

Russia’s Neighbor Finland Mounts Defenses Against Election Meddling

  • See the following MadSci blog posts:

Four Elements for Future Innovation, by Dr. Richard Nabors

The Trouble with Talent: Why We’re Struggling to Recruit and Retain Our Workforce, by Sarah L. Sladek

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis, by Ian Sullivan

Chris Kramer is a retired Army engineer officer and futurist working to further the advancement of science, technology and critical thinking. The perspectives described in this article are the author’s and do not imply endorsement by any person or organization within the U.S. Government.

1 Pew Research Center information regarding American student aversion to STEM, dated January 17, 2018, at https://www.pewresearch.org/fact-tank/2018/01/17/half-of-americans-think-young-people-dont-pursue-stem-because-it-is-too-hard/

2 National Academies of Sciences, Engineering, and Medicine. (2016). Barriers and Opportunities for 2-Year and 4-Year STEM Degrees: Systemic Change to Support Diverse Student Pathways. Committee on Barriers and Opportunities in Completing 2-Year and 4-Year STEM Degrees. S. Malcom and M. Feder, Editors. Board on Science Education, Division of Behavioral and Social Sciences and Education. Board on Higher Education and the Workforce, Policy and Global Affairs. Washington, DC: The National Academies Press. doi: 10.17226/21739. Found at https://www.nap.edu/read/21739/chapter/1

3 Report by the Committee On STEM Education of the National Science & Technology Council, Dated December 2018, titled “Charting A Course For Success: America’s Strategy For STEM Education”. Found at https://www.whitehouse.gov/wp-content/uploads/2018/12/STEM-Education-Strategic-Plan-2018.pdf

186. “Maddest” Guest Blogger

“If we don’t study the mistakes of the future, we’re doomed to repeat them the first time :o(” — Ken M, comedian

[Editor’s Note: Since its inception two years ago next month, Mad Scientist Laboratory has expanded the U.S. Army’s reach by engaging global innovators from across industry, academia, and the Government on the Operational Environment (OE), emergent disruptive technologies and their individual and convergent impacts, and the changing character of warfare. Simultaneously, the blog site has broadened public access to related open source content on the OE, facilitating further dialog and enhancing its role as “a marketplace of ideas.”

Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. To date, 54% of the blog posts published have been submitted by guest bloggers!  Thanks to their diverse contributions and perspectives, our blog site has accrued over 182K views from visitors from around the world!  We challenge you all to continue contributing your insights regarding the OE and the changing character of warfare!

In today’s post, we recognize Mr. Robert J. Hranek as our “Maddest Guest Blogger” for the past six months.  We excerpted elements of his short story entitled “Angry Engineer,” and published it on this site as “The Otso Incident with Donovia in 2030” on 2 May 2019.  This piece skillfully imagines what the United States and its allies learn following our first battle with a near-peer competitor in over 80 years.  Among the many future innovations addressed, Mr. Hranek’s “pre-mortem” of two dozen lessons learned especially resonated with our readers.  In identifying potential “mistakes of the future,” Mr. Hranek enabled us to consider them when shaping and tailoring our thoughts about Multi-Domain Operations so that we’re not “doomed to repeat them the first time!”  Enjoy!]

The U.S. responded to Donovia’s invasion of Otso by initiating combat operations against the aggressors on 1 April 2030 — April Fools’ Day. Thousands of combatants died on both sides, mostly on ships; hundreds more were wounded, primarily from the land battle, and an unverifiable number of casualties occurred worldwide due to the sabotage of power grids and other infrastructure. An accurate civilian count was impossible in the chaos of reestablishing power, computer, and financial systems worldwide.

Although U.S. forces were considered to have ‘won’ tactically, no significant change in control of territory resulted from the war.  Both sides’ weaponry was very lethal and very fast.  Overall this worked in the defender’s favor because attacking forces were too exposed, detectable, and vulnerable to last very long on the battlefield.  Stealth AI drones (both airborne and undersea) were the only systems that survived very long in combat, and then only as long as they remained undetected. Even an airborne drone with the signature of a small bird would be shot down quickly if it could be tracked and engaged by speed-of-light directed energy weapons. With no strategic victor and government leadership on both sides fearful of starting World War III, an uneasy military stalemate continued for several weeks…

The following are 24 lessons learned from the U.S.’s first battle with a near-peer competitor in over 80 years:

Lessons Learned 1 (LL1): History repeated itself in that no battle plan ever survives contact with the enemy. The unforeseen consequences of reliance on automation and the resulting escalations almost led to World War III. No matter how sophisticated our systems become, it will always be important to keep human decision-making in the OODA (Observe, Orient, Decide, Act) loop.

LL2: Donovia was able to launch a ‘surprise’ attack on Otso because U.S. human intelligence analysts grew too dependent on their automated tools. The National Technical Means (NTM) software identified the Donovian troop movements as something it had seen many times before and labelled them as routine, so the analysts reported them as routine as well.

LL3: Directed microwave and other Electromagnetic Pulse (EMP) weapons performed within accepted parameters. The 20th century legacy electronic systems were rendered completely useless when overwhelmed with more energy than they could handle.

LL4: Differences in assumptions are what led to the escalations on April 1st. The U.S. model of Artificial Intelligence (AI) weaponry was to always require human authorization before firing a weapon. Donovia’s paradigm was that if an AI had its ‘secure’ communications severed, it was to engage the enemy ASAP. One of Donovia’s Lurker Unmanned Underwater Vehicles (UUVs) had its communications with its parent vessel severed in a way that its on-board AI interpreted as the destruction of that vessel. At that point, it located the highest value target in range, maneuvered into position, and fired its supercavitating torpedo into the USS Ronald Reagan, inflicting crippling damage that led to the deaths of over 1,000 crewmembers.

Aft-end of an export variant of the VA-111 Shkval supercavitating torpedo / Source: Wikimedia Commons

LL5: There is still no effective defense against a supercavitating torpedo once launched. Any platform capable of carrying one must be considered a threat and be dealt with accordingly. Therefore, research into better detection of all stealth platforms is critical to the future of U.S. Naval dominance.

LL6: Stealth technology cannot be relied upon to keep pilots safe. Modern stealth aircraft are very effective at diverting radar energy away from its source, but that same aircraft becomes very detectable when a radar source is synchronized with a separately located radar receiver. All that is required is a significant offset angle between the radar source, the target aircraft, and the radar-receiving sensor.

LL7: The “Four-ship” concept of each F-35 controlling three Unmanned Combat Air Vehicles (UCAVs) in combat is still considered the most effective way to employ these aircraft, but the piloted F-35 may need to control its UCAVs without coming into Line of Sight (LOS) of sophisticated enemy air defenses. The alternatives are to allow the UCAVs to perform combat missions completely independently once in flight, accept the risks of controlling UCAVs remotely with the possibility of communications loss, or even to risk those UCAVs being taken over by the enemy.

LL8: Evaluation of Donovia’s long-range stealth UCAV capabilities was incomplete. They were correctly analyzed to be too small to deliver a significant missile or bomb load, so they were incorrectly surmised to pose little threat beyond that of just another reconnaissance drone. U.S. intelligence intercepts indicating that Donovia was incorporating low-power lasers on them were assumed to mean the lasers would designate targets for separately launched laser-guided weaponry. The truth that they were designed to blind the pilots of aircraft deep within enemy airspace was not realized until after the loss of an Airborne Warning and Control System (AWACS) aircraft. The only current defense against this is to wear headgear designed to instantly darken when encountering a nuclear flash, which many pilots did not wear — until now.

LL9: Stealth UCAVs proved to dominate the airspace, unless they could be shot down with energy weapons. Their combat capabilities were superior to much larger piloted vehicles due to not being hampered by human limitations. They instantly followed their programming, regardless of how long they were airborne, could maneuver at much higher gees than piloted fighters, and cost less to deploy.

LL10: Our Matter Lasers were much more effective at destroying incoming artillery and mortar shells than Donovia’s more conventional laser defenses. It vindicated all the effort and expense that was undertaken to transform this technology from a research project into the most effective air defense system on Earth. By delivering an energy density thousands of times stronger than conventional lasers, it destroyed each incoming round hundreds of times faster.

LL11: Donovia’s standard laser point defenses also proved capable of destroying incoming missiles and artillery, but they were relatively easy to overwhelm, since they took much longer to eliminate each incoming threat. Their lasers proved to be useless at intercepting U.S. naval hypervelocity railgun rounds.

LL12: The U.S. was not as reliant on GPS guidance as Donovia assumed. The deployment of advanced inertial guidance systems rivalling GPS accuracy had come just in time to prevent the loss of high-precision GPS from being a crippling blow to U.S. combat capabilities.

LL13: The Expanded Advanced Battle Management System (EABMS) multi-service network was critical to U.S. forces being able to effectively coordinate their efforts even while exposed to Donovian electronic warfare. EABMS proved that integrated warfighters can still perform their missions even after several parts of the network were destroyed or disabled.

LL14: The Exoskeleton enhancements of the first pilot’s rescue team were deemed crucial to his successful retrieval. Their greater strength, endurance, and body armor helped them to safely return to U.S. lines. Perhaps even more valuable than the physical aspects of their equipment were their Augmented Reality (AR) displays that allowed them superior battlefield awareness and the ability to see through camouflage as if it was not even there. It is recommended that the U.S. continue development of the next generation of full-body powered exoskeletons.

LL15: The performance of U.S. fast-attack hovercraft was disappointing. Their battlefield speed advantage was offset by their vulnerability. Even minor damage to their air-cushion skirts would degrade their maneuverability so much that they suffered twice the casualty rate of their wheeled and tracked counterparts.

LL16: Most U.S. Soldiers Wounded In Action (WIA) on the battlefield suffered from burn-like damage. The same Donovian non-lethal microwave projectors used for crowd control had a high-power capability used to inflict severe skin damage through cloth armor.

LL17: U.S. medical triage was greatly aided by medical sensors woven directly into Soldiers’ uniforms. This led to faster diagnosis, treatment, and recovery of casualties than in any previous conflict.

LL18: Mines remain the scourge of the battlefield, with AI-controlled mines being particularly insidious. Some were even programmed to let several combatants pass by before detonating in the middle of a formation of troops that thought the area had already been cleared.

LL19: The fledgling industry of orbital cleanup services (satellites designed to collect and/or dispose of other inactive satellites) received a major inadvertent boost from renewed concerns over orbital debris.

LL20: The chaos of the space battle allowed our Space Force to capture and return to Earth a few chosen Donovian high-interest military satellites. U.S. analysts were surprised by several aspects of Donovian design, but those results have no further need for discussion in this review of the conflict over Otso.



LL21: There was speculation of possible Donovian deployment of tailored, genetically-engineered bioweapons, but it was not supported by any confirmable evidence. On tactical timescales, bioweapons are still deemed ineffective. The ability to inflict casualties on an enemy with bioweapons while limiting the damage to your own personnel remains an unresolved strategic issue.

LL22: The reports of Donovia having anti-personnel weapons that used nanites were also unsupported. This technology is just starting to be used industrially, and nanite-based weapons are considered to be at least a decade away from being ready for battlefield use.

LL23: There were also reports on social media of Donovian atrocities towards Otso’s civilians, but no supporting evidence has been found for these numerous dubious claims.

LL24: This conflict resulted in most nations turning back to the U.S. for a leadership role, in a way that completely healed the damage to U.S. prestige resulting from its involvement in decades of warfare in the Middle East and Southwest Asia.

If you enjoyed these lessons learned, please read Mr. Hranek’s complete “Angry Engineer” short story here, hosted by our colleagues at Small Wars Journal…

… check out the U.S. Army Futures and Concepts Center’s Multi-Domain Operations video and Modernizing for a Multi-Domain Army handout…

… and see the following insightful Mad Scientist blog posts addressing future warfare:

Ground Warfare in 2050: How It Might Look, by proclaimed Mad Scientist and previous “Maddest” Guest Blogger Dr. Alexander Kott (whose day job is Chief Scientist, Army Research Lab)

Blurring Lines Between Competition and Conflict

If you enjoy storytelling as a tool to envision the future, read Omega, by Mr. August Cole and Mr. Amir Husain.

Mr. Robert Hranek began his professional career serving in the USAF as a computer programmer for five years, followed by 29 years as a civilian intelligence analyst, systems engineer, and program analyst. He is a vocal Space Exploitation Advocate, reads lots of Science Fiction, gives blood 5 times a year (197 pints & counting!), judges several Science Fairs every year, and runs and drinks with Hash House Harrier groups whenever he can.

Disclaimer: The views expressed in this article are Mr. Hranek’s alone and do not imply endorsement by the U.S. Army Training and Doctrine Command, the Army Futures Command, the U.S. Army, the Department of Defense, or the U.S. Government.  This piece is a work of speculative fiction, meant to be thought-provoking, and does not reflect the current position of the U.S. Army.

185. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our latest edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. “A brain-controlled exoskeleton has let a paralyzed man walk in the lab,” by Charlotte Jee, MIT Technology Review, 4 October 2019.

“Amputees merge with their bionic leg,” in ScienceDaily, 2 October 2019.

“Boston Dynamics’ Atlas can now do an impressive gymnastics routine,” by Jon Porter, The Verge, 24 September 2019.

Atlas bipedal robot / Source: Boston Dynamics

So what is the OE nexus for a new experimental capability that restores mobility to a quadriplegic man, prosthetics that provide realistic sensory feedback, and a bipedal robot that can split kick better than David Lee Roth in his Van Halen heyday? Scientists from Swiss universities ETH Zürich and EPFL and the latter’s spin-off SensArs Neuroprosthetics have fitted three amputees with prosthetic legs that provide sensory feedback, enabling subjects to feel the device. “Thanks to detailed sensations from [the] sole of the artificial foot and from the artificial knee, all three patients could maneuver through obstacles without the burden of looking at their artificial limb as they walked. They could stumble over objects yet mitigate falling. Most importantly, brain imaging and psychophysical tests confirmed that the brain is less solicited with the bionic leg, leaving more mental capacity available to successfully complete the various tasks.” Meanwhile, researchers at Grenoble University Hospital and Clinatec in France implanted an epidural wireless brain-machine interface into a paralyzed subject, enabling him to walk again via a four-limb neuroprosthetic exoskeleton in a laboratory proof-of-concept demonstration. Per MIT Technology Review, “researchers need to find a way to get the suit to safely balance itself before it can be used outside the laboratory.” Here’s where Boston Dynamics’ Atlas comes in — its “model predictive controller” allows the robot to anticipate forward momentum to “blend… one maneuver to the next,” without losing balance.

These three weak signals presage the potential for a dismounted Manned Unmanned Teaming (MUM-T) capability on a not-so-distant future battlefield, with Soldiers in the rear area (or even the Strategic Support Area!) controlling via cerebral interfaces whole platoons of agile fighting systems at what was once the bleeding edge of combat. This potential for a nimble, semi-autonomous close quarters combat fighting capability could keep future Warfighters out of harm’s way on particularly hazardous, “forlorn hope”-type assaults against heavily defended positions, while simultaneously maintaining humans-in-the-loop in future conflicts.

2. “Coming Soon to a Battlefield: Robots That Can Kill,” by Zachary Fryer-Biggs, The Atlantic, 3 September 2019.

This detailed yet engrossing piece by defense and technology reporter Zachary Fryer-Biggs provides an in-depth look at the robotics work being conducted across the Department of Defense and also helps to visualize and contextualize what a battlefield of the future will look like. The article draws a clear distinction between what is being explored, built, and tested by the DoD and the dystopian nightmare often portrayed in movies like The Terminator. It addresses a number of current technologies – the U.S. Navy’s Phalanx Close-in Weapon System (CIWS) and Sea Hunter autonomous Unmanned Surface Vehicle (USV), and Israel’s Harpy autonomous anti-radiation loitering munition – that are already on the pathway to autonomy.

The article features several prominent figures in the DoD Artificial Intelligence (AI) innovation space, such as Mr. Bob Work (former Deputy Secretary of Defense) and Dr. Bill Roper (former Director of the Strategic Capabilities Office). Mr. Work is quoted as saying, “AI will make weapons more discriminant and better, less likely to violate the laws of war, less likely to kill civilians, less likely to cause collateral damage.” In a similar vein, Dr. Roper stated that the country that integrates AI into its arsenal first might have “an advantage forever.”

One of the biggest implications of this article is the following question — at what point does autonomy become completely self-deciding? Many of the technologies featured still struggle with the contextual understanding and complex cognitive processes that humans can frankly breeze through right now. The current challenge is not intelligent machines capable of eliminating humans – it’s that the machines aren’t smart enough or capable of complex decision-making. As Dr. Fei-Fei Li, the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute, noted, “human vision is enormously complex and includes capabilities such as reading facial movements and other subtle cues.” But visual, facial, and object recognition in AI is improving rapidly, and we could see more autonomous systems on the battlefield sooner than anticipated. Senior military leadership will be challenged with what approach must be taken: human-in-the-loop, human-out-of-the-loop, or human-starts-the-loop. Additionally, how much must be divested from manned or optionally-manned systems and invested into autonomous and optionally-teleoperated systems?


3. “China’s AI Talent Base Is Growing, and then Leaving,” by Joy Dantong Ma, MacroPolo.org, 30 July 2019.

MacroPolo is the in-house think tank of the Paulson Institute in Chicago that focuses on the U.S.-China economic relationship. China’s initiative to become a global leader in Artificial Intelligence is excelling at producing talent but failing at retaining it. The author uses invitation data from the NeurIPS conference, one of the premier events that gathers researchers exploring artificial neural networks; prospective attendees submit related research papers for consideration to be invited to the event. Analysis of the data shows that 2,800 Chinese scientists were accepted over the past ten years. Of those, more than 2,000 were working outside of China, and 85% of them were working in the U.S. So while China has made great strides and invested heavily in creating a population of AI researchers, scientists, and engineers, it hasn’t been able to insulate itself from the competition for AI talent raging outside its borders. In 2017, China took steps to curb this exodus by offering incentives and better compensation, but it remains to be seen if that is enough to compete with some of the most innovative and sought-after companies in the world, like Google, IBM, and Microsoft.
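The scale of the retention gap can be sanity-checked against the figures quoted above. This is a rough back-of-the-envelope sketch using the article's approximate counts as stated here, not MacroPolo's actual dataset or methodology:

```python
# Approximate figures as cited: ~2,800 Chinese scientists accepted to
# NeurIPS over ten years; "more than 2,000" working outside China
# (treated as a lower bound); 85% of those working in the U.S.
accepted = 2800
working_abroad = 2000                  # lower bound
in_us = round(working_abroad * 0.85)   # 85% of those abroad

share_abroad = working_abroad / accepted
print(f"Share working outside China: {share_abroad:.0%}")  # at least ~71%
print(f"Implied count in the U.S.: {in_us}")               # ~1,700
```

Even at the lower bound, roughly seven in ten of these accepted researchers were working abroad, which is the "growing, and then leaving" dynamic the title describes.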

For the Army, this is both an opportunity and a challenge. As the Army increasingly develops and integrates AI into formations, the need for the highest quality technical personnel will follow. A larger pool of experts available to the Army will allow for quicker and better solutions to problems Soldiers face on the battlefield. However, the Army will still be in competition with the same industry that China is losing to, while simultaneously challenged by the prospect of Chinese intellectual property theft and industrial espionage. How can the Army and the United States at large best take advantage of and retain this resource? What are the potential security implications? Even if the experts are here, can the U.S. compete with the tech industry to recruit the best and brightest AI talent?

4. “Tree Planting Drones are Firing ‘Seed Missiles’ Into the Ground,” by Leo Shvedsky, Good, 17 April 2019.

In an effort to combat climate change, tech company BioCarbon Engineering has outfitted commercial drones with pre-germinated seed pods that they disperse and plant by firing them into the ground. This method is cheaper and exponentially faster than planting by hand. From a dual-use perspective, one can see many nefarious possibilities. Just as a drone can spread seeds waiting to sprout, it could just as easily spread a biological or chemical agent. The light payload of seeds used today could be replaced with a payload of powder or liquid that could infect humans, livestock, crops, or waterways. Further, this method, when employed judiciously, could be almost undetectable. Generally, when we think of weapons, we think of devices designed to destroy or kill, but with so many commercial products offering advanced capabilities and access, it’s becoming easier to achieve the same effects with none of the footprint.

5. “Netflix’s ‘Unnatural Selection’ Trailer Makes Crispr Personal,” by Megan Molteni, WIRED, 4 October 2019.

Netflix’s new four-part docuseries, which debuted last week (18 October 2019), explores the democratization of genomic engineering — “Using the bacterial quirk that is CRISPR, scientists have essentially given anyone with a micropipette and an internet connection the power to manipulate the genetic code of any living thing.” This scientific revolution raises a host of new political, legal, and moral concerns that public policy, laws, and opinion are only now beginning to address. As research using this game-changing technology becomes an ever-more international enterprise, distinctions in cultural norms, mores, and practices will be challenged. Within the brave new world of genetic engineering facilitated by CRISPR, all of the bright possibilities of revolutionary new treatments and cures for disease are tempered by equally dark prospects when coupled with nefarious intent.

Calls to control open source research and counter the potential use of gene-editing to produce biological weapons and/or affect global health may not be sufficiently encompassing. As Mad Scientist has previously explored, a growing community of individual biohackers and DIYers are pushing the boundaries of DNA editing, implants, embedded technologies, and unapproved chemical and biological injections. The DIY community is not alone: China has announced its intent to become a global superpower, and gene editing is one area where it seeks to leap ahead of the United States. Its commitment to this objective is evidenced in its gene editing of 86 individuals and the births of “CRISPR babies.” In comparison, the United States is just now approaching human genomic trials (in a well-regulated environment).

Ethical principles are not standardized across cultures. State and non-state actors alike are now able to weaponize biotechnology with relative ease. The decisions we make today are crucial as we articulate, implement, and enforce public policy and international laws governing genetic engineering. These decisions will affect Soldiers and civilians alike, who face the threat of non-kinetic, genetically engineered capabilities both on the battlefield and in our Homeland.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

184. Blurring Lines Between Competition and Conflict

[Editor’s Note: The United States Army faces multiple, complex challenges in tomorrow’s Operational Environment (OE), confronting strategic competitors in an increasingly contested space across every domain (land, air, maritime, space, and cyberspace). The Mad Scientist Initiative, the U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures, and the Army Futures Command (AFC) Future Operational Environment Cell have collaborated with representatives from industry, academia, and the Intelligence Community to explore the blurring lines between competition and conflict, and the character of great power warfare in the future. Today’s post captures our key findings regarding the OE and what will be required to successfully compete, fight, and win in it — Enjoy!]

Alternative Views of Warfare: The U.S. Army’s view of the possible return to Large Scale Combat Operations (LSCO) and capital systems warfare might not be the future of warfare. Near-peer competitors will seek to achieve national objectives through competition short of conflict, and regional competitors and non-state actors will effectively compete and fight with smaller, cheaper, and greater numbers of systems against our smaller number of exquisite systems. However, preparation for LSCO and great state warfare may actually contribute to its prevention.

Competition and Conflict are Blurring: The dichotomy of war and peace is no longer a useful construct for thinking about national security or the development of land force capabilities. There are no longer defined transitions from peace to war or from competition to conflict. This state of simultaneous competition and conflict is continuous and dynamic, but not necessarily cyclical. Potential adversaries will seek to achieve their national interests short of conflict, using a range of actions – from cyber attacks to kinetic strikes against unmanned systems – that walk up to the line of short or protracted armed conflict. Authoritarian regimes can ensure unity of effort and a whole-of-government approach more easily than Western democracies, and they work to exploit fractures and gaps in decision-making, governance, and policy.

The globalization of the world – in communications, commerce, and belligerence (short of war) – together with the fragmentation of societies and the splintering of identities, has created new factions and “tribes” and opened the aperture on who possesses offensive capabilities previously limited to state actors. Additionally, the concept of competition itself has broadened, as social media, digital finance, smart technology, and online essential services add to a growing target area.

Adversaries seek to shape public opinion and influence decisions through targeted information operations campaigns, often relying on weaponized social media. Competitors invest heavily in research and development in burgeoning technology fields – Artificial Intelligence (AI), quantum sciences, and biotech – and engage in technology theft to weaken U.S. technological superiority. Cyber attacks and probing are used to undermine confidence in financial institutions and critical government and public functions – Supervisory Control and Data Acquisition (SCADA) systems, voting, banking, and governance. Competition and conflict are occurring across all instruments of power throughout the entirety of the Diplomatic, Information, Military, and Economic (DIME) model.

Cyber actions raise the question of what threshold constitutes an act of war. If an adversary launches a cyber attack against a critical financial institution and an economic crisis results – is it an act of war? There is a similar concern regarding unmanned assets. While the kinetic destruction of an unmanned system may cost millions, no lives are lost. How much damage without human loss of life is acceptable?

Nuclear Deterrence Limits Great Power Warfare: Multi-Domain Operations (MDO) is predicated on a return to Great Power warfare. However, nuclear deterrence could make that eventuality less likely. The U.S. may find itself competing more often below the threshold of conventional war, rather than fighting the decisive battles of the 20th Century (e.g., Midway and Operation Overlord). The two most threatening adversaries – Russia and China – have substantial nuclear arsenals, as does the United States, which will continue to make Great Power conventional warfare a high risk / high cost endeavor. The availability of non-nuclear capabilities that can deliver regional and global effects is a new attribute of the OE. This further complicates the deterrence value of militaries and the escalation theory behind flexible deterrent options. The inherent implications of cyber effects in the real world – especially in economies, government functions, and essential services – further exacerbate the blurring between competition and conflict.

Hemispheric Competition and Conflict: Over the last twenty years, Russia and China have been viewed as regional competitors in Eurasia and South-East Asia, respectively. These competitors will seek to undermine and fracture traditional Western institutions, democracies, and alliances. Both are transitioning into hemispheric threats, with a primary focus on challenging the U.S. Army all the way from its home station installations (i.e., the Strategic Support Area) to the Close Area fight. We can expect cyber attacks against critical infrastructure, the use of advanced information warfare such as deep fakes targeting units and families, and the possibility of small-scale kinetic attacks during what were once uncontested administrative actions of deployment. There is no institutional memory of this threat, and merely adding time and speed requirements to deployments is not enough to exercise MDO.

Disposable versus Exquisite: Current thinking espouses technologically advanced and expensive weapons platforms over disposable ones, which brings with it an aversion to employing these exquisite platforms in contested domains and an inability to rapidly reconstitute them once they are committed and subsequently attrited. In LSCO with a near-peer competitor, the ability to reconstitute will be imperative. The Army (and the larger DoD) may need to shift away from large and expensive systems toward cheap, scalable, and potentially even disposable unmanned systems (UxS). Additionally, increases in miniaturized computing power in cheaper systems, coupled with advances in machine learning, could enable massed precision rather than forcing a sacrifice of precision for mass, or vice versa.

This challenge is exacerbated by the ability of this new form of mass to quickly aggregate/disaggregate, adapt, self-organize, self-heal, and reconstitute, making it largely unpredictable and dynamic. Adopting these capabilities could provide the U.S. Army and allied forces with an opportunity to use massed precision to disrupt enemy Observe, Orient, Decide, and Act (OODA) loops, confuse kill chains/webs, overwhelm limited adversary formations, and exploit vulnerabilities in extended logistics tails and advanced but immature communication networks.
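
The disposable-versus-exquisite trade-off above can be made concrete with a toy salvo-exchange model. This is a minimal sketch, and every parameter in it is an illustrative assumption rather than doctrine or real performance data:

```python
import random

def duel(exquisite=10, cheap=200, p_exq_kill=0.9, p_cheap_kill=0.05,
         exq_shots=3, seed=0):
    """Toy salvo exchange: a few high-end platforms vs. cheap massed systems.

    Each round, every surviving exquisite platform engages several cheap
    targets with a high kill probability; each surviving cheap system then
    fires back with a low kill probability. Returns survivors per side.
    """
    rng = random.Random(seed)
    while exquisite > 0 and cheap > 0:
        # Exquisite side fires first.
        kills = sum(rng.random() < p_exq_kill
                    for _ in range(exquisite * exq_shots))
        cheap = max(0, cheap - kills)
        if cheap == 0:
            break
        # Surviving cheap systems fire back en masse.
        kills = sum(rng.random() < p_cheap_kill for _ in range(cheap))
        exquisite = max(0, exquisite - kills)
    return exquisite, cheap
```

With these notional numbers, the exquisite force is typically attrited to zero after only a few exchanges while most of the cheap mass survives — and the surviving side is also the side that is easy to reconstitute.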

Human-Starts-the-Loop: There has been considerable discussion and debate over whether armed forces will continue to have a “man-in-the-loop” regarding Lethal Autonomous Weapons Systems (LAWS). Lethal autonomy in future warfare may instead be “human-starts-the-loop,” meaning that humans will be involved in the development of weapons/targeting systems – establishing rules and scripts – and will initiate the process, but will then allow the system to operate autonomously. It has been argued that it would be ethically disingenuous to remain constrained by “human-on-the-loop” or “human-in-the-loop” constructs when our adversaries are unlikely to similarly restrict their own autonomous warfighting capabilities. Further, the employment of this approach could impact the Army’s MDO strategy. The effects of “human-starts-the-loop” on the kill chain – shortening, flattening, or otherwise dispersing it – would necessitate changes in force structure that could maximize resource allocation across personnel, platforms, and materiel. This scenario presents the Army with an opportunity to execute MDO successfully, and with increased cost savings, by: 1) conducting independent maneuver – more agile and streamlined units moving rapidly; 2) employing cross-domain fires – efficiency and speed in targeting and execution; 3) maximizing human potential – putting capable Warfighters in optimal positions; and 4) fielding in echelons above brigade – flattening command structures and increasing efficiency.
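
In miniature, the “human-starts-the-loop” construct looks something like the sketch below. This is a hypothetical illustration — the rule fields and target format are invented for this example, not drawn from any fielded system — but it captures the essential shape: the human authors the constraints and makes the single authorization decision, and everything after that point runs without human input.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rules:
    """Human-authored constraints, fixed before the system starts."""
    allowed_types: frozenset   # target categories the system may engage
    max_engagements: int       # hard ceiling set by the human

def run_mission(rules, human_authorized, targets):
    """Hypothetical 'human-starts-the-loop' flow: after the one human
    decision point, target evaluation proceeds autonomously."""
    if not human_authorized:        # the single human decision point
        return []
    engaged = []
    for target in targets:          # autonomous from here on
        if len(engaged) >= rules.max_engagements:
            break
        if target["type"] in rules.allowed_types:
            engaged.append(target["id"])
    return engaged
```

The ethical weight shifts accordingly: accountability attaches to the rule-writing and the start decision, not to any per-engagement approval.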

Emulation and the Accumulation of Advantages: China and Russia are emulating many U.S. Department of Defense modernization and training initiatives. China now has Combat Training Centers. Russia has programs that mirror the Army’s Cross Functional Team initiatives and the Artificial Intelligence (AI) Task Force. China and Russia are undergoing their own versions of force modernization to better professionalize the ranks and improve operational reach. Within these different technical spaces, both China and Russia are accumulating advantages that they envision will blunt traditional U.S. combat advantages and the tenets described in MDO. However, both nations remain vulnerable: they depend on U.S. innovations in microelectronics and face challenges incorporating these technologies into their own doctrine, training, and cultures.

If you enjoyed this post, please also see:

Jomini’s Revenge: Mass Strikes Back! by Zachery Tyson Brown.

Our “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare posts, where Part 1 discusses whether the future fight will necessarily even involve LSCO and Part 2 addresses the implications of a changed or changing nature of war.

The Death of Authenticity: New Era Information Warfare.



183. Ethics, Morals, and Legal Implications

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report and addresses how the speed of technological innovation and convergence continues to outpace human governance. The U.S. Army must not only consider how best to employ these advances in modernizing the force, but also the concomitant ethical, moral, and legal implications their use may present in the Operational Environment (see links to the newly published TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, and the complete Mad Scientist Disruption and the Operational Environment Conference Final Report at the bottom of this post).]

Technological advancement, and its subsequent employment, often outpaces moral, ethical, and legal standards. Governmental and regulatory bodies are then caught between technological progress and the evolution of social thinking. The Disruption and the Operational Environment Conference uncovered and explored several tension points that may challenge the Army in the future.


Cubesats in LEO / Source: NASA

Space is one of the least explored domains in which the Army will operate, and as such, we may encounter a host of associated ethical and legal dilemmas. In the course of warfare, if the Army or an adversary intentionally or inadvertently destroys commercial communications infrastructure – e.g., GPS satellites – the ramifications for the economy, transportation, and emergency services would be dire and deadly. The Army will be challenged to consider how and where National Defense measures in space affect non-combatants and American civilians on the ground.

Per proclaimed Mad Scientists Dr. Moriba Jah and Dr. Diane Howard, there are ~500,000 objects orbiting the Earth that pose potential hazards to our space-based services. We are currently able to track less than one percent of them — only those the size of a smartphone / softball or larger. / Source: NASA Orbital Debris Office

International governing bodies may have to consider what responsibility space-faring entities – countries, universities, private companies – will have for mitigating orbital congestion caused by excessive launching and the aggressive exploitation of space. If the Army is judicious with its own footprint in space, it could reduce the risk of accidental collisions and unnecessary clutter and congestion. Cleaning up space debris is extremely expensive, and deconflicting active operations is essential. With each entity acting in its own self-interest, and with limited binding law or governance and no enforcement, overuse of space could lead to a “tragedy of the commons” effect.1  The Army has the opportunity to align itself more closely with international partners to develop guidelines and protocols for space operations, both to avoid potential conflicts and to influence and shape future policy. Without this early intervention, the Army may face ethical and moral challenges regarding its addition of orbital objects to an already dangerously cluttered Low Earth Orbit. What will the Army be responsible for in democratized space? Will there be a moral or ethical limit on space launches?
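
The “tragedy of the commons” dynamic can be made concrete with a toy congestion model. This is a minimal sketch whose parameters are illustrative assumptions, not empirical debris data: every actor's launches add objects each year, and expected fragmentation debris grows with the square of the total population.

```python
def orbital_objects(actors=5, launches_per_actor=100, years=50,
                    collision_rate=1e-10, frags=1000, start=20_000.0):
    """Toy Kessler-style congestion model.

    Each year every actor launches in its own self-interest; expected
    collision debris scales with the square of the object population.
    All parameters are illustrative assumptions.
    """
    n = start
    for _ in range(years):
        n += actors * launches_per_actor        # everyone keeps launching
        n += frags * collision_rate * n * n     # expected fragmentation debris
    return n
```

Because the collision term depends on the total population, each actor reaps the full benefit of its own launches while the added collision risk is shared by all — the textbook commons incentive. Raising the `collision_rate` assumption by even one order of magnitude makes the model's population growth run away within the 50-year window.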

Autonomy in Robotics

AFC’s Future Force Modernization Enterprise of Cross-Functional Teams, Acquisition Programs of Record, and Research and Development centers executed a radio rodeo with Industry throughout June 2019 to inform the Army of the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments. / Source: Army.mil

Robotics has become pervasive and normalized in military operations in the post-9/11 Operational Environment. However, the burgeoning field of autonomy in robotics, with the potential to supplant humans in time-critical decision-making, brings significant ethical, moral, and legal challenges that the Army and the larger DoD are currently facing. This issue will be exacerbated in the Operational Environment by increased utilization of, and reliance on, autonomy.

The increasing prevalence of autonomy will raise a number of important questions. At what point is it more ethical to allow a machine to make a decision that may save the lives of either combatants or civilians? Where does fault, responsibility, or attribution lie when an autonomous system takes lives? Will defensive autonomous operations – air defense systems, active protection systems – be more ethically acceptable than offensive autonomy – airstrikes, fire missions? Can Artificial Intelligence/Machine Learning (AI/ML) make decisions in line with Army core values?

Deepfakes and AI-Generated Identities, Personas, and Content

Source: U.S. Air Force

A new era of Information Operations (IO) is emerging due to disruptive technologies such as deepfakes – videos that are constructed to make a person appear to say or do something that they never said or did – and AI Generative Adversarial Networks (GANs) that produce fully original faces, bodies, personas, and robust identities.2  Deepfakes and GANs are alarming to national security experts as they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. This is amplified by content such as news, sports, and creative writing similarly being generated by AI/ML applications.

This new era of IO has many ethical and moral implications for the Army. In the past, the Army has utilized industrial and early information age IO tools such as leaflets, open-air messaging, and cyber influence mechanisms to shape perceptions around the world. Today and moving forward in the Operational Environment, advances in technology create ethical questions such as: is it ethical or legal to use cyber or digital manipulations against populations of both U.S. allies and strategic competitors? Under what title or authority does the use of deepfakes and AI-generated images fall? How will the Army need to supplement existing policy to include technologies that didn’t exist when it was written?

AI in Formations

With the introduction of decision-making AI, the Army will be faced with questions about trust, man-machine relationships, and transparency. Does AI in cyber require the same moral benchmark as lethal decision-making? Does transparency equal ethical AI? What allowance for error in AI is acceptable compared to humans? Where does the Army allow AI to make decisions – only in non-combat or non-lethal situations?

Commanders, stakeholders, and decision-makers will need to gain a level of comfort and trust with AI entities, exemplifying a true man-machine relationship. The full integration of AI into training and combat exercises provides an opportunity to build trust early in the process, before decision-making becomes critical and life-threatening. AI often embeds unintentional or implicit bias, inherited from its programming and training data. Is bias-free AI possible? How can bias be checked within the programming? How can bias be managed once it is discovered, and how much will be tolerated? Finally, does the bias-checking software itself contain bias? Bias can also be used in a positive way: through ML – using data from previous exercises, missions, doctrine, and the law of war – the Army could inculcate core values, ethos, and historically successful decision-making into AI.
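
One concrete, if partial, way to check for bias is to audit a model's outputs across groups. The sketch below is illustrative only — “demographic parity” is just one of many fairness metrics, and a real audit would use several complementary ones — but it shows the basic mechanics: compare per-group positive-decision rates and flag the largest gap.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Per-group rate of positive decisions (1s) for paired prediction/group
    labels — the ingredients of a simple demographic-parity check."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())
```

A gap near zero does not prove a system is fair — and, as noted above, the audit itself embodies choices (which metric, which groupings) that can carry bias of their own.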

If existential threats to the United States increase, so will the pressure to use artificially intelligent and autonomous systems to gain or maintain overmatch and domain superiority. As the Army explores shifting additional authority to AI and autonomous systems, how will it address the second- and third-order ethical and legal ramifications? How does the Army reconcile its traditional values and ethical norms with disruptive technology that rapidly evolves?

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.
    • Ethics and the Future of War panel, facilitated by LTG Dubik (USA-Ret.) at the Mad Scientist Visualizing Multi Domain Battle 2030-2050 Conference, held at Georgetown University on 25-26 July 2017.

Just Published! TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, 7 October 2019, describes the conditions Army forces will face and establishes two distinct timeframes characterizing near-term advantages adversaries may have, as well as breakthroughs in technology and convergences in capabilities in the far term that will change the character of warfare. This pamphlet describes both timeframes in detail, accounting for all aspects across the Diplomatic, Information, Military, and Economic (DIME) spheres to allow Army forces to train to an accurate and realistic Operational Environment.

1 Munoz-Patchen, Chelsea, “Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty,” Chicago Journal of International Law, Vol. 19, No. 1, Art. 7, 1 Aug. 2018. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1741&context=cjil

2 Robitzski, Dan, “Amazing AI Generates Entire Bodies of People Who Don’t Exist,” Futurism.com, 30 Apr. 2019. https://futurism.com/ai-generates-entire-bodies-people-dont-exist

182. “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare (Part 2)

[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). We continue our series of “Tenth Man” posts examining the foundational assumptions of The Operational Environment and the Changing Character of Future Warfare, challenging them, reviewing the associated implications, and identifying potential signals and/or indicators of change. Enjoy!]

Assumption:  The character of warfare will change but the nature of war will remain human-centric.

The character of warfare will change in the future OE as it inexorably has since the advent of flint hand axes; iron blades; stirrups; longbows; gunpowder; breech loading, rifled, and automatic guns; mechanized armor; precision-guided munitions; and the Internet of Things. Speed, automation, extended ranges, broad and narrow weapons effects, and increasingly integrated multi-domain conduct, in addition to the complexity of the terrain and social structures in which it occurs, will make mid Twenty-first Century warfare both familiar and utterly alien.

The nature of warfare, however, is assumed to remain human-centric in the future. While humans will increasingly be removed from processes, cycles, and perhaps even decision-making, nearly all content regarding the future OE assumes that humans will remain central to the rationale for war and its most essential elements of execution. The nature of war has remained relatively constant from Thucydides through Clausewitz, and forward to the present. War is still waged because of fear, honor, and interest, and remains an expression of politics by other means. While machines are becoming ever more prevalent across the battlefield – C5ISR, maneuver, and logistics – we cling to the belief that parties will still go to war over human interests; that war will be decided, executed, and controlled by humans.

Implications:  If these assumptions prove false, then the Army’s fundamental understanding of war in the future may be inherently flawed, calling into question established strategies, force structuring, and decision-making models. A changed or changing nature of war brings about a number of implications:

– Humans may not be aware of the outset of war. As algorithmic warfare evolves, might wars be fought unintentionally, with humans not recognizing what has occurred until effects are felt?

– Wars may be fought due to AI-calculated opportunities or threats – economic, political, or even ideological – that are largely imperceptible to human judgement. Imagine that a machine recognizes a strategic opportunity or impetus to engage a nation-state actor that is conventionally (read: humanly) viewed as weak or in a presumed disadvantaged state. The machine launches offensive operations to achieve a favorable outcome or objective that it deems too advantageous to pass up.

– Infliction of human loss, suffering, and disruption to induce coercion and influence may not be conducive to victory. Victory may be simply a calculated or algorithmic outcome that causes an adversary’s machine to decide that victory is unattainable.

– The actor (nation-state or otherwise) with the most robust kairosthenic power and/or most talented humans may not achieve victory. Even powers enjoying the greatest materiel advantages could see this once reliable measure of dominion mitigated. Winning may be achieved by the actor with the best algorithms or machines.

These implications in turn raise several questions for the Army:

– How much human talent should the Army recruit and cultivate – and how – if war is no longer human-centric?

– How should forces be structured – what is the “right” mix of humans to machines if war is no longer human-centric?

– Will current ethical considerations in kinetic operations be weighed more or less heavily if humans are further removed from the equation? And what even constitutes kinetic operations in such a future?

– Should the U.S. military divest from platforms and materiel solutions (hardware) and re-focus on becoming algorithmically and digitally-centric (software)?

– What is the role for the armed forces in such a world? Will competition and armed conflict increasingly fall within the sphere of cyber forces in the Departments of the Treasury, State, and other non-DoD organizations?

– Will warfare become the default condition if fewer humans get hurt?

– Could an adversary (human or machine) trick us (or our machines) to miscalculate our response?

Signposts / Indicators of Change:

– Proliferation of AI use in the OE, with decreasing human involvement in autonomous or semi-autonomous systems’ critical functions and decision-making; the development of human-out-of-the-loop systems.

– Technology advances to the point of near or actual machine sentience, with commensurate machine speed accelerating the potential for escalated competition and armed conflict beyond transparency and human comprehension.

– Nation-state governments approve the use of lethal autonomy, and this capability is democratized to non-state actors.

– Cyber operations have the same political and economic effects as traditional kinetic warfare, reducing or eliminating the need for physical combat.

– Smaller, less-capable states or actors begin achieving surprising or unexpected victories in warfare.

– Kinetic war becomes less lethal as robots replace human tasks.

– Other departments or agencies stand up quasi-military capabilities, have more active military-liaison organizations, or begin actively engaging in competition and conflict.

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.

… as well as our previous “Tenth Man” blog posts:

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).