[Editor’s Note: In today’s post, Mad Scientist Laboratory explores China’s whole-of-nation approach to exploiting operational environments, synchronizing government, military, and industry activities to change geostrategic power paradigms via competition in 2035. Excerpted from products previously developed and published by the TRADOC G-2’s Operational Environment and Threat Analysis Directorate (see links below), this post describes China’s approach to exploitation and identifies the implications for the U.S. Army — Enjoy!]
The Operational Environment is envisioned as a continuum, divided into two eras: the Era of Accelerated Human Progress (now through 2035) and the Era of Contested Equality (2035 through 2050). This latter era is marked by significant breakthroughs in technology and convergences in terms of capabilities, which lead to significant changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes which at the end of this timeframe may even challenge the very nature of warfare itself. In this era, no one actor is likely to have any long-term strategic or technological advantage, with aggregate power between the U.S. and its strategic competitors being equivalent, but not necessarily symmetric. Prevailing in this period will depend on an ability to synchronize multi-domain capabilities against an artificial intelligence-enhanced adversary with an overarching capability to visualize and understand the battlespace at even greater ranges and velocities. Equally important will be controlling information and the narrative surrounding the conflict. Adversaries will adopt sophisticated information operations and narrative strategies to change the context of the conflict and thus defeat U.S. political will.
The future strategic environment will be characterized by a persistent state of competition where global competitors seek to exploit the conditions of operational environments to gain advantage. Adversaries understand that the application of any or all elements of national power in competition just below the threshold of armed conflict is an effective strategy against the U.S.
China is rapidly modernizing its armed forces and developing new approaches to warfare. Beijing has invested significant resources into research and development of a wide array of advanced technologies. Coupled with its time-honored practice of reverse engineering technologies or systems it purchases or acquires through espionage, this effort likely will allow China to surpass Russia as our most capable threat sometime around 2030.
China’s Approach to Exploitation
China’s whole-of-nation approach, which involves synchronization of actions across government, military, and industry, will facilitate exploitation of operational environments and enable it to gain global influence through economic exploitation.
China will leverage the international system to advance its own interests while attempting to constrain others, including the U.S.
Preferred Conditions and Methods
The following conditions and methods are conducive to exploitation by China, enabling it to shape the strategic environment in 2035:
Infrastructure Capacity Challenges: China targets undeveloped and fragile environments where its capital investments, technology, and human capital can produce financial gains and generate political influence.
Interconnected Economies: China looks for partners and opportunities to become a significant stakeholder in a wide variety of economies in order to capitalize on its investments as well as generate political influence.
Specialized Economies: China looks for opportunities to partner with specialized markets and leverage their vulnerabilities for gain.
Technology Access Gaps: China targets areas where its capital investments in technology provide partners with key resources and competitive advantages by filling technology gaps.
Implications for the U.S. Army:
Traditional Army threat paradigms may not be sufficient for competition.
The Army could be drawn into unanticipated escalation as a result of China’s activities during the competition phase.
Army military partnerships will likely be undermined by China in 2035.
Army operations and engagements will be increasingly impacted by the pervasiveness of Chinese goods, technology, infrastructure, and systems.
If you enjoyed this post, please see the original paper and associated infographic of the same title, both by the TRADOC G-2’s Operational Environment and Threat Analysis Directorate and hosted on their All Partners Access Network (APAN) site…
… and read the following MadSci Laboratory blog posts:
[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s post by guest blogger Zachary Kallenborn. In the first of a series of posts, Mr. Kallenborn addresses how the convergence of emerging technologies is eroding barriers to terrorist organizations acquiring the requisite equipment, materiel, and expertise to develop and deliver chemical, biological, radiological, and nuclear (CBRN) agents in an attack. Learn about the challenges that (thankfully) remain and the ramifications for the operational environment. (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]
On the evening of July 15, 2034, 264 West Point cadets reported to the hospital with a severe, but unknown illness. West Point Military Police (MP) investigated the incident and discovered video footage of two men launching several autonomous drones from a pickup truck near the base, then driving off. A suspicious fire the same night at a local apartment complex revealed remnants of 3D printers and synthetic biology kits. The investigation remains ongoing…
Such a scenario is fantasy, but increasingly plausible.
Various emerging technologies reduce the barriers to chemical, biological, radiological, and nuclear (CBRN) terrorism — bioterrorism in particular. The convergence of these technologies may allow terrorists to acquire CBRN weapons with minimal identifiable signatures. Although these technologies exist today, their sophistication, availability, and terrorist interest in their use are likely to grow over the coming decades. For example, the first powered model airplane was flown in 1937; however, terrorists did not attempt to use drones until 1994.1 Thankfully, major challenges will still inhibit truly catastrophic CBRN terror.
CBRN weapon acquisition is a difficult task for terrorist organizations. Terrorists must acquire significant specialized equipment, materiel, expertise, and the organizational capabilities to support the acquisition of such weapons, as well as a physical location to assemble them. Even supposed successes like Aum Shinrikyo’s attack on the Tokyo subway were not nearly as impactful as they could have been. Aum’s biological weapons program was also a notable failure. In one instance, a member of the cult fell into a vat of Clostridium botulinum (the bacteria that produces the botulinum toxin) and emerged unharmed.2 As a result, only 1-2% of terrorist organizations pursue or use CBRN weapons.3 But these barriers are eroding.
3D printing may ease the acquisition of some equipment and materiel. 3D printers can be used to create equipment components at reduced cost and have been used to create bioreactors, microscopes, and other key elements.4 Bioprinters can also create tissue samples to test weapons agents.5 The digital build-files for 3D printed items can also be sent and received online, perhaps from black market sellers or individuals sympathetic to the terrorist’s ideology.6
Synthetic biology offers improved access to biological weapons agents, especially to otherwise highly controlled agents. Synthetic biology can be used to create new or modify existing organisms.7 According to the World Health Organization, synthetic biology techniques could plausibly allow recreation of the variola virus (smallpox).8 That is especially significant because the virus only exists in two highly secure laboratories.9
Delivery of a CBRN agent can also be a challenge. CBRN agents useful for mass casualty attacks rely on the air to carry the agent to an adversary (nuclear weapons are an obvious exception, but the likelihood of a terrorist organization acquiring a nuclear weapon is extremely low). Poor wind conditions, physical barriers, rain, and other environmental conditions can inhibit delivery. Biological weapons also require spray systems that can create droplets of an appropriate size, so that the agent is light enough to float in the air, but heavy enough to enter the lungs (approximately 1-10 microns).
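Why the cited 1-10 micron band matters can be illustrated with a rough Stokes'-law settling estimate. This sketch is not from the original post; it assumes spherical water-like droplets in still air, with standard textbook values for density and viscosity:

```python
# Rough Stokes'-law terminal settling velocity for small aerosol droplets.
# Illustrative sketch only: assumes spherical, water-density droplets in
# still air, well within the laminar (low Reynolds number) regime.
G = 9.81      # gravitational acceleration, m/s^2
RHO = 1000.0  # droplet density (water), kg/m^3
MU = 1.81e-5  # dynamic viscosity of air at ~15 C, Pa*s

def settling_velocity(diameter_m: float) -> float:
    """Terminal velocity (m/s) of a small sphere in air via Stokes' law:
    v = rho * g * d^2 / (18 * mu)."""
    return RHO * G * diameter_m ** 2 / (18 * MU)

for microns in (1, 10, 50):
    v = settling_velocity(microns * 1e-6)
    print(f"{microns:>3} micron droplet settles at roughly {v:.1e} m/s")
```

A 1 micron droplet falls on the order of hundredths of a millimetre per second, so it stays suspended long enough for the wind to carry it, while droplets much larger than 10 microns fall out of the air quickly and also fail to reach the lower lungs.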
Drones also make CBRN agent delivery easier. Drones offer terrorists access to the air. Terrorists can use them to fly over physical barriers, such as fencing or walls to carry out an attack. Drones also give terrorists more control over where they launch an attack: they can choose a well-defended position or one proximate to an escape route. Although small drone payload sizes limit the amount of agent that can be delivered, terrorists can acquire multiple drones.
Advances in drone autonomy allow terrorists to control more drones at once.10 Autonomy also allows terrorists to launch more complex attacks, perhaps directing autonomous drones to multiple targets or following a path through multiple, well-populated areas. Greater autonomy also reduces the risk to the terrorists, because they can flee the area more readily.
3D printing can also help with CBRN agent delivery. Spray-tanks and nozzles subject to export controls can be 3D printed.11 3D printers can also be used to make drones.12 3D printers also provide customizability to adapt these systems for CBRN agent delivery.
CBRN weapons acquisition also requires significant technical expertise. Terrorist organizations must correctly perform complex scientific procedures, know which procedures to use, know which equipment and materials are needed, and operate the equipment. They must do all of that without harming themselves or others (harming innocents may not seem like a concern for an organization intent on mass harm; however, it would risk exposure of the larger plot). Much of this knowledge is tacit, meaning that it is based on experience and cannot be easily transferred to other individuals.
Emerging technologies do not drastically reduce this barrier, though experts disagree. For example, genome-synthesis requires significant tacit knowledge that terrorists cannot easily acquire without relevant experience.13 Likewise, 3D printers are unlikely to spit out a completely assembled piece of equipment. Rather, 3D printers may provide parts that need to be assembled into a final result. However, some experts argue that as technologies become more ubiquitous, they will be commercialized and made easier to use.14 While this technology is likely to become more accessible, physical limitations will place an upper bound on how accessible it can become.
The Future Operational Environment
If CBRN terrorism is becoming easier, U.S. forces can be expected to be at greater risk of CBRN attack and to face more frequent attacks. An attack with infectious biological weapons from afar would likely not be discovered until well after the attack took place. Although still quite unlikely, a major biological attack could cause massive harm. Timed correctly, a CBRN terror attack could delay deployment of troops to a combat zone, inhibit launch of close-air support assets, or harm morale by delaying delivery of delicious pizza MREs.15 Off the battlefield, troops may have less access to protective gear and be at greater risk of harm. Even a poorly made agent can harm military operations: quarantines must still be established and operations limited until the risk is neutralized or at least determined to be harmless.
However, counter-intuitively, terrorist demand for CBRN weapons may actually decrease, because emerging technologies also offer easier pathways to mass casualties. These risks will be explored in the next article in this series.
Zachary Kallenborn is a freelance researcher and analyst, specializing in Chemical, Biological, Radiological, and Nuclear (CBRN) weapons, CBRN terrorism, drone swarms, and emerging technologies writ large. His research has appeared in the Nonproliferation Review, Studies in Conflict and Terrorism, Defense One, the Modern War Institute at West Point, and other outlets. His most recent study, Swarming Destruction: Drone Swarms and CBRN Weapons, examines the threats and opportunities of drone swarms for the full scope of CBRN weapons.
Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).
7 Committee on Strategies for Identifying and Addressing Potential Biodefense Vulnerabilities Posed by Synthetic Biology, “Biodefense in the Age of Synthetic Biology,” (Washington DC: The National Academies Press, 2018), 9.
[Editor’s Note: Mad Scientist Laboratory welcomes back returning guest blogger Mr. Matthew Ader, whose cautionary post warns of the potential convergence of Islamic terrorism and climate change activism, possibly resonating with western populations that have not been (to date) predisposed to listening to their messaging. (Note: Some of the embedded links in this post are best accessed using non-DoD networks.)]
Climate change is increasingly being viewed not only as an ecological or economic concern, but as a direct security threat. It both endangers vital coastal infrastructure through sea level rise and multiplies existing issues of food insecurity and migration. However, in these analyses, one issue in particular is missed – the likely emergence of transnational terrorist networks which fuse climate grievance with Islamic terrorism.
Ecologically inspired terrorism is, of course, hardly a new concept. There are dozens of ecoterrorist organisations, and some have gained substantial notoriety. The model example is the Earth Liberation Front, which was highly active in the early 2000s. However, because they tend to operate in developed nations, these groups generally lack the safe areas and large, disenfranchised recruiting bases which empower terrorists elsewhere.
Ecoterrorism, however, is not limited to the developed world – for example, two years ago, an ecoterrorist group detonated a makeshift bomb in Brazil. As the impact of climate change grows ever more severe in the developing world, it is probable that there will be more direct climate-change inspired terrorism. This is especially likely given that the populations of developing nations are increasingly connected to the international information infrastructure – allowing more widespread comprehension of climate change as a global phenomenon with roots in western nations.
These threats pose a new dimension to the terrorist threat. But what is more worrying is the potential for the infection of ecoterrorist groups by radical Islamic terrorist organisations.
Islam contains a strong thread of environmental stewardship. This is not a call for violence in protection of the Earth, but it has already been exploited by radical groups – for example, Al Shabaab banning plastic bags or the Taliban’s endorsement of afforestation. This gives the groups legitimacy in their area of operations. As climate change worsens and grievance intensifies, it is highly likely that this vein of stewardship of the Earth will strengthen in Islamic terrorist propaganda – both as a way of reinforcing legitimacy and to gain recruits or support.
If radical Islamic terrorists can harness climate change grievance, then the threat they offer against western interests increases substantially. This is for three key reasons:
Firstly, Islamic terrorist groups such as Al Qaeda in the Arabian Peninsula or Daesh tend to have relatively developed infrastructure for propaganda and training. While U.S.-led counterterror operations have proven effective in reducing the threat they pose, the carnage at the Bataclan, Manchester Arena, and Nice – to name but a few incidents – clearly indicates that Islamic terrorists can still mount both expeditionary and homegrown terrorist attacks.
Secondly, Islamic terrorist groups have subject matter expertise regarding explosives and strong links with IED supplier networks. The aforementioned Brazilian ecoterrorist group failed to inflict casualties with their crude bomb. If equipped with military-grade high explosive, of the type used by more ‘professional’ terrorist organisations, the attack could have been much more devastating.
Thirdly, the audience for radical, violent Islamic teaching is very small, and much of it is in the Middle East. The audience for climate grievance is far larger – 70% of Americans aged 18-34 worry a great deal or a fair amount about climate change – and global. This is obviously not to suggest that all climate change activists or people concerned about it are putative terrorists.
However, if even 1 in 1,000 of that American number were willing to take more robust action – such as giving support to terrorists, or even carrying out attacks themselves – it would comprise a support base of approximately 47,200 people. That presents a significant threat, only made worse by the ‘moral fairness’ of climate terrorism – attacking the U.S. for vague oppression of Muslims plays differently in media and politics than attacking the U.S. because of its very real role as one of the world’s largest polluters.
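The arithmetic behind that estimate can be checked in a few lines. The 18-34 U.S. population figure below is an assumption (roughly the Census estimate for the late 2010s, chosen so that the cited 47,200 falls out); the 70% and 1-in-1,000 rates come from the post:

```python
# Back-of-envelope check of the support-base estimate in the text.
population_18_34 = 67_400_000  # assumed U.S. residents aged 18-34
worried_share = 0.70           # share worried a great deal or fair amount
willing_rate = 1 / 1000        # hypothetical willing-to-act rate

worried = population_18_34 * worried_share
support_base = worried * willing_rate
print(f"Worried cohort: about {worried / 1e6:.1f} million")
print(f"Hypothetical support base: about {support_base:,.0f} people")
```

Roughly 47 million worried Americans in that age band, of whom one in a thousand yields a support base on the order of 47,000 people, consistent with the figure quoted above.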
This is of course a brief overview. However, the possibility of a hybridisation of climate change grievance and radical Islamic terrorism is too dangerous to ignore. More research is required, and urgently, to ascertain the extent of the risk and find ways to mitigate it. The world community was practically blindsided by the emergence of Al Qaeda. It would be unacceptably irresponsible to let such a failure happen again.
– See Dr. Gary Ackerman‘s presentation and slide deck on “Non-State actors and their uses of emerging technologies” from the Mad Scientist Robotics, Artificial Intelligence & Autonomy Conference, facilitated at Georgia Tech Research Institute (GTRI), on 7-8 March 2017.
Mr. Matthew Ader is a first-year undergraduate taking War Studies at King’s College London.
Disclaimer: Mr. Ader is not affiliated with U.S. Army Training and Doctrine Command, the U.S. Army, or the U.S. Government. This piece is meant to be thought-provoking and does not reflect the current position of the U.S. Army.
[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]
Preface: Based on my experience in cybersecurity, migrating to the broader cyber field, there have always been those exceptional individuals who have an unreplicable ability to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals – because someone must set the rules and the boundaries, and point out the trajectory, for Artificial Intelligence at initiation.
As an industrial society, we tend to see technology and the information that feeds it as the weapons – and ignore the few humans who have a large-scale direct impact. Even if identified as a weapon, how do you make a human mind classified? Can we protect these high-ability individuals who, in the digital world, are weapons in their own right – not tools but compilers of capability – or are we still focused on the tools? Why do we see only weapons of steel and electronics, and not the weaponized mind? I believe firmly that we underestimate the importance of Applicable Intelligence – the ability to play the cyber engagement in the optimal order. Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034, close enough to be realistic and far enough out for things to happen, when our adversaries are betting that we rely more on a few minds than we are willing to accept.
Post: In a not-too-distant future, on the 20th of August 2034, a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives: watching a 3-D printer make a protein sandwich at a breakfast restaurant; stepping out from the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone, when the geopolitical temperature increases but we are not yet at war, our adversary acts quickly and expedites a limited number of targeted killings within the United States of persons who are unknown to mass media and the general public and have only one thing in common – Applicable Intelligence (AI).
The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available – anyone can download them from the Internet and use them – but the weaponization of the tools occurs when they are used by someone who understands how to play them in an optimal order. These minds are different because they see an opportunity to exploit in a digital fog of war where others don’t or can’t see it. They address problems unburdened by traditional thinking, in new and innovative ways, maximizing the dual-purpose nature of digital tools, and can create tangible cyber effects.
It is Applicable Intelligence (AI) that creates the procedures, applies the tools, and combines simple digital software into sets and combinations that converge into digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.
Sadly, we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and, at the dawn of conflict, killed off the weaponized minds, without discretion and with no concern for international law or morality.
These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.
All of our efforts focused instead on what they delivered: the applications and the technology, hidden in secret vaults and discussed only in sensitive compartmented information facilities. We classify to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, we assign no value to the most critical component, the militarized intellect, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is input to the production of the technical machinery. That marveled-at technical machinery is the only thing we care about today, in 2018 – and, as it turned out, in 2034 as well.
We are stuck in how we think, and we are unable to see it coming, but our adversaries see it. At a systemic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed: the armory of the War of 1812, the stockpile of 1943, the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics – we failed in 2034 to see weapons made of corn, steak, and an added combative intellect.
General Nakasone stated in 2017, “Our best ones [coders] are 50 or 100 times better than their peers,” and continued “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” In reality, the success of cyber and cyber operations is highly dependent not on the tools or toolsets but instead upon the super-empowered individual that General Nakasone calls “the 50-x coder.”
There were clear signals that we could have noticed before General Nakasone pointed it out so clearly in 2017. The United States’ Manhattan Project during World War II had at its peak 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in by our path dependency as an engineering society that hails the technology and forgets the importance of the humans behind it.
America’s endless love of technical innovations and advanced machinery is reflected in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies, followed by the intercontinental railroad, the Panama Canal, the manufacturing era, the moon landing, and all the way to autonomous systems, drones, and robots. In the default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem or take action.
The same mindset sees humans merely as an input to technology, so humans are interchangeable and can be replaced. But in 2034, in the era of digital conflicts and the war between algorithms, with engagements occurring at machine speed and no time for leadership or human interaction, it is the intellects who design these systems and understand how to play them that matter. We didn’t see it.
In 2034, the Cyber Pearl Harbor resides in fewer than twenty bodies piled up after targeted killings. It was not imploding critical infrastructure, a tsunami of cyber attacks, nor hackers flooding our financial systems, but instead traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don’t understand, unable to play it in the right order, and with limited intellectual torque to see through the fog of war generated by an exploding kaleidoscope of nodes and digital engagements.
Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was earlier a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and earned a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP, ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in the Strategic Studies Quarterly, Joint Forces Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.
On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC. Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces. The new and innovative learning capabilities addressed at this conference will enable our Soldiers and Leaders to act quickly and decisively in a changing Operational Environment (OE) with fleeting windows of opportunity and more advanced and lethal technologies.
We have identified the following “Top 10” takeaways related to Learning in 2050:
1. Many learning technologies built around commercial products are available today (Amazon Alexa, Smart Phones, Immersion tech, Avatar experts) for introduction into our training and educational institutions. Many of these technologies are part of the Army’s concept for a Synthetic Training Environment (STE), and there are nascent manifestations already. For these technologies to be widely available to the future Army, the Army of today must be prepared to address:
– The cultural challenges associated with changing the dynamic between learners and instructors, teachers, and coaches; and
– Adequate funding to produce capabilities at scale, so that digital tutors and other technologies (Augmented Reality [AR] / Virtual Reality [VR], etc.), along with the skills required in a dynamic future, such as critical thinking and groupthink mitigation, are widely available or perhaps ubiquitous.
2. Personalization and individualization of learning will be paramount in the future; some training that today takes place in physical schools will become the exception, with learning instead occurring at the point of need. This transformation will not be limited to lesson plans or even just learning styles:
– Project-oriented learning: when today’s high school students are building apps, they are asked, “What positive change do you want to have?” One example is an open table for Bully Free Tables. In the future, learners will learn through working on projects;
– Project-oriented learning will lead to a convergence of learning and operations, creating a chicken (learning) or the egg (mission/project) relationship; and
– Learning must be adapted to consciously address the desired, or extant, culture.
3. Some jobs and skill sets have not even been articulated yet. Hobbies and recreational activities engaged in by kids and enthusiasts today could become occupations or Military Occupational Specialties (MOS’s) of the future (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer — think Minecraft and Fortnite with real-world physical implications). Some emerging trends in personalized warfare, big data, and virtual nations could bring about the necessity for more specialists that don’t currently exist (e.g., data protection and/or data erasure specialists).
4. The New Human (who will be born in 2032 and is the recruit of 2050) will be fundamentally different from the Old Human. The Chief of Staff of the Army (CSA) in 2050 is currently a young Captain in our Army today. While we are arguably cyborgs today (with integrated electronics in our pockets and on our wrists), the New Humans will likely be cyborgs in the truest sense of the word, with some having embedded sensors. How will those New Humans learn? What will they need to learn? Why would they want to learn something? These are all critical questions the Army will continue to ask over the next several decades.
5. Learning is continuous and self-initiated, while education is a point in time and is “done to you” by someone else. Learning may result in a certificate or degree – similar to education – or can lead to the foundations of a skill or a deeper understanding of operations and activity. How will organizations quantify learning in the future? Will degrees or even certifications still be the benchmark for talent and capability?
6. Learning isn’t slowing down; it’s speeding up. More and more things are becoming instantaneous, and humans have no concept of extreme speed. Tesla cars can update their software overnight, with owners getting into an effectively different car each day. What happens to our Soldiers when military vehicles change just as iteratively? This may force a paradigm shift wherein learning means tightening local and global connections (tough to do considering government/military network security, firewalls, vulnerabilities, and constraints); viewing technology as extended brains all networked together (similar to Dr. Alexander Kott’s look at the Internet of Battlefield Things [IoBT]); and leveraging these capabilities to enable Soldier learning at extremely high speeds.
7. While there are a number of emerging concepts and technologies to improve and accelerate learning (Targeted Neuroplasticity Training (TNT), extended reality, personalized learning models, and intelligent tutors), the focus, training stimuli, data sets, and desired outcomes all have to be properly tuned and aligned, or the Learner could end up losing correct behavior habits (developing maladaptive plasticity), developing incorrect or skewed behaviors (per the desired capability), or acquiring unintended cognitive biases.
8. Geolocation may become increasingly less important when it comes to learning in the future. If Apple required users to go to Silicon Valley to get trained on an iPhone, it would be exponentially less successful. Yet this is how the Army currently trains. The ubiquity of connectivity, the growth of the Internet of Things (and eventually Internet of Everything), the introduction of universal interfaces (think one XBOX controller capable of controlling 10 different types of vehicles), major advances in modeling and simulations, and social media innovation all converge to minimize the importance of teachers, students, mentors, and learners being collocated in the same physical location.
9. Significant questions must be asked about how narrowly we train children: we may be overemphasizing STEM from an early age and not helping them learn across a wider spectrum. We need Transdisciplinarity in the coming generations.
10. 3-D reconstructions of bases, training areas, cities, and military objectives coupled with mixed reality, haptic sensing, and intuitive controls have the potential to dramatically change how Soldiers train and learn when it comes to not only single performance tasks (e.g., marksmanship, vehicle driving, reconnaissance, etc.) but also in dense urban operations, multi-unit maneuver, and command and control.
During the next two weeks, we will be posting the videos from each of the Learning in 2050 Conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel and the associated slides on our Mad Scientist APAN site — stay connected here at the Mad Scientist Laboratory.
One of the main thrusts in the Mad Scientist lines of effort is harnessing and cultivating the Intellect of the Nation. In this vein, we are asking Learning in 2050 Conference participants (both in person and online) to share their ideas on the presentations and topic. Please consider:
– What topics were most important to you personally and professionally?
– What were your main takeaways from the event?
– What topics did you want the speakers to elaborate on further?
– What were the implications for your given occupation/career field from the findings of the event?
Your input will be of critical importance to our analysis and products that will have significant impact on the future of the force in design, structuring, planning, and training! Please submit your input to Mad Scientist at: email@example.com.
[Editor’s Note: In the movie World War Z (I know… the book was way better!), an Israeli security operative describes how Israel prepared for the coming zombie plague. Their strategy was that if nine men agreed on an analysis or a course of action, the tenth man had to take an alternative view.
This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. The Mad Scientist Laboratory will begin a series of posts entitled “The Tenth Man” to offer a platform for the contrarians in our network (I know you’re out there!) to share their alternative perspectives and analyses regarding the Future Operational Environment.]
Our foundational assumption about the Future Operational Environment is that the Character of Warfare is changing due to an exponential convergence of emerging technologies. Artificial Intelligence, Robotics, Autonomy, Quantum Sciences, Nano Materials, and Neuro advances will mean more lethal warfare at machine speed, integrated seamlessly across all five domains – air, land, sea, cyber, and space.
We have consistently seen four main themes used to counter this idea of a changing character of war, driven by technology:
1. Cost of Robotic Warfare: All armies must plan for the need to reconstitute forces. This is particularly ingrained in the U.S. Army’s culture, where we have often lost the first battles in any given conflict (e.g., Kasserine Pass in World War II and Task Force Smith in Korea). We cannot afford to have a “one loss” Army where our national wealth and industrial base cannot support the reconstitution of a significant part of our Army. A high-cost, roboticized Army might also limit our political leaders’ options for the use of military force due to the risk of loss and associated cost.
2. Technology Hype: Technologists are well aware of the idea of a hype cycle when forecasting emerging technologies. Machine learning was all the rage in the 1970s, but the technology needed to drive these tools did not exist. Improved computing has finally helped us realize this vision, forty years later. The U.S. Army’s experience with the Future Combat System hits a nerve when assumptions of the future require the integration of emerging technologies.
3. Robotic Warfare: A roboticized Army is over-optimized to fight against a peer competitor, which is the least likely mission the Army will face. We build an Army and develop Leaders first and foremost to protect our Nation’s sovereignty. This means having an Army capable of deterring, and failing that, defeating peer competitors. At the same time, this Army must be versatile enough to execute a myriad of additional missions across the full spectrum of conflict. A hyper-connected Army enabled by robots with fewer Soldiers will be challenged in executing missions requiring significant human interactions such as humanitarian relief, building partner capacity, and counter-insurgency operations.
4. Coalition Warfare: A technology-enabled force will exacerbate interoperability challenges with both our traditional and new allies. Our Army will not fight unilaterally on future battlefields. We have had difficulties with the interoperability of communications and have had gaps between capabilities that increased mission risks. These risks were offset by the skills our allies brought to the battlefield. We cannot build an Army that does not account for a coalition battlefield, and our allies may not be able to afford the tech-enabled force envisioned in the Future Operational Environment.
All four of these assumptions are valid and should be further studied as we build the Army of 2028 and the Army of 2050. There are many other contrarian views about the Future Operational Environment, and so we are calling upon our network to put on their red hats and be our “Tenth Man.”
The Mad Scientist team participates in many thought exercises, tabletops, and wargames associated with how we will live, work, and fight in the future. A consistent theme in these events is the idea that a major barrier to the integration of robotic systems into Army formations is a lack of trust between humans and machines. This assumption rings true as we hear the media and opinion polls describe how society doesn’t trust some disruptive technologies, like driverless cars or the robots coming for our jobs.
In his recent book, Army of None, Paul Scharre describes an event that nearly led to a nuclear confrontation between the Soviet Union and the United States. On September 26, 1983, LTC Stanislav Petrov, a Soviet Officer serving in a bunker outside Moscow, was alerted to a U.S. missile launch by a recently deployed space-based early warning system. The Soviet Officer trusted his “gut” – or experientially informed intuition – that this was a false alarm. His gut was right, and the world was saved from an inadvertent nuclear exchange because this officer did not overtrust the system. But is this the rule or an exception to how humans interact with technology?
The subject of trust between Soldiers, Soldiers and Leaders, and the Army and society is central to the idea of the Army as a profession. At the most tactical level, trust is seen as essential to combat readiness as Soldiers must trust each other in dangerous situations. Humans naturally learn to trust their peers and subordinates once they have worked with them for a period of time. You learn what someone’s strengths and weaknesses are, what they can handle, and under what conditions they will struggle. This human dynamic does not translate to human-machine interaction and the tendency to anthropomorphize machines could be a huge barrier.
We recommend that the Army explore the possibility that Soldiers and Leaders could overtrust AI and robotic systems. Overtrust of these systems could blunt the human expertise, judgement, and intuition thought to be critical to winning in complex operational environments. Overtrust might also open additional adversarial vulnerabilities, such as deception and spoofing.
In 2016, a research team at the Georgia Institute of Technology revealed the results of a study entitled “Overtrust of Robots in Emergency Evacuation Scenarios”. The research team put 42 test participants into a fire emergency with a robot responsible for escorting them to an emergency exit. Even as the robot passed obvious exits and got lost, 37 participants continued to follow it, and an additional 2 stood with the robot rather than moving toward either exit. The study’s takeaway was that roboticists must think about programs that will help humans establish an “appropriate level of trust” with robot teammates.
In Future Crimes, Marc Goodman writes of the idea of “In Screen We Trust” and the vulnerabilities this trust builds into our interaction with automation. His example of the cyber-attack against the Iranian uranium enrichment centrifuges highlights the vulnerability of experts believing or trusting their screens against mounting evidence that something else might be contributing to the centrifuges’ failure. These experts overtrusted their technology, or simply did not have an “appropriate level of trust”. What does this have to do with Soldiers on the future battlefield? Increasingly, we depend on our screens – and, in the future, our heads-up displays – to translate the world around us. This translation will only become more demanding on the future battlefield, with war at machine speed.
So what should our assumptions be about trust and our robotic teammates on the future battlefield?
1) Soldiers and Leaders will react differently to technology integration.
2) Capability developers must account for trust building factors in physical design, natural language processing, and voice communication.
3) Intuition and judgement remain a critical component of human-machine teaming and operating on the future battlefield. Speed becomes a major challenge as humans become the weak link.
4) Building an “appropriate level of trust” will need to be part of Leader development and training. Mere expertise in a field does not prevent overtrust when interacting with our robotic teammates.
5) Lastly, lack of trust is not a barrier to AI and robotic integration on the future battlefield. These capabilities will exist in our formations as well as those of our adversaries. The formation that develops the best concepts for effective human-machine teaming, with trust being a major component, will have the advantage.
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following guest blog post by MAJ Chris Telley, U.S. Army, assigned to the Naval Postgraduate School, addressing how Artificial Intelligence (AI) must be understood as an Information Operations (IO) tool if U.S. defense professionals are to develop effective countermeasures and ensure our resilience to its employment by potential adversaries.]
AI-enabled IO present a more pressing strategic threat than the physical hazards of slaughter-bots or even algorithmically-escalated nuclear war. IO are efforts to “influence, disrupt, corrupt, or usurp the decision-making of adversaries and potential adversaries;” here, we’re talking about using AI to do so. AI-guided IO tools can empathize with an audience to say anything, in any way needed, to change the perceptions that drive those physical weapons. Future IO systems will be able to individually monitor and affect tens of thousands of people at once. Defense professionals must understand the fundamental influence potential of these technologies if they are to drive security institutions to counter malign AI use in the information environment.
Programmatic marketing, using consumers’ data habits to drive real-time automated bidding on personalized advertising, has been in use for several years now. Cambridge Analytica’s Facebook targeting made international headlines using similar techniques, but digital electioneering is just the tip of the iceberg. An AI trained with data from users’ social media accounts, economic media interactions (Uber, Apple Pay, etc.), and their devices’ positional data can infer predictive knowledge of its targets. With that knowledge, emerging tools — like Replika — can truly befriend a person, allowing it to train that individual, for good or ill.
Substantive feedback is required to shape an individual’s responses; humans tend to respond best to content and feedback with which they agree. That content can be algorithmically mass-produced. For years, Narrative Science tools have helped writers create sports stories and stock summaries, but it’s just as easy to use them to create disinformation. That’s just text, though; today, AI can create fake video. A recent warning, ostensibly from former President Obama, provides an entertaining yet frightening demonstration of how Deepfakes will challenge our presumptions about truth in the coming years. The Defense Advanced Research Projects Agency (DARPA) is funding a project this summer to determine whether AI-generated Deepfakes will become impossible to distinguish from the real thing, even using other AI systems.
Malign actors can now employ AI to lie “at machine speed,” but they still have to get the story to an audience. Russian bot armies continue to make headlines doing this very thing. The New York Times maintains about a dozen Twitter feeds and produces around 300 tweets a day, but Russia’s Internet Research Agency (IRA) regularly puts out 25,000 tweets in the same twenty-four hours. The IRA’s bots are really just low-tech curators; they collect, interpret, and display desired information to promote the Kremlin’s narratives.
Next-generation bot armies will employ far faster computing techniques and profit from an order of magnitude greater network speed when 5G services are fielded. If “Repetition is a key tenet of IO execution,” then this machine gun-like ability to fire information at an audience will, with empathetic precision and custom content, provide the means to change a decisive audience’s very reality. No breakthrough science is needed, no bureaucratic project office required. These pieces are already there, waiting for an adversary to put them together.
The DoD is looking at AI but remains focused on image classification and swarming quadcopters while ignoring the convergent possibilities of predictive audience understanding, tailored content production, and massive-scale dissemination. What little digital IO we’ve done, sometimes called social media “WebOps,” has been contractor-heavy and prone to naïve missteps. However, groups like USSOCOM’s SOFWERX and the students at the Naval Postgraduate School are advancing the state of our art. At NPS, future senior leaders are working on AI now. A half-dozen of the school’s departments have stood up classes and events specifically aimed at operationalizing advanced computing. The young defense professionals currently working on AI should grapple with emerging influence tools and form the foundation of the DoD’s future institutional capabilities.
MAJ Chris Telley is an Army information operations officer assigned to the Naval Postgraduate School. His assignments have included theater engagement at U.S. Army Japan and advanced technology integration with the U.S. Air Force. Chris commanded in Afghanistan and served in Iraq as a United States Marine. He tweets at @chris_telley.
This blog post represents the opinions of the author and does not reflect the position of the Army or the United States Government.
The Mad Scientist Initiative brings together cutting-edge leaders and thinkers from the technology industry, research laboratories, academia, and across the military and Government to explore the impact of potentially disruptive technologies. Much like Johannes Gutenberg’s moveable type (illustrated above), these transformational game changers have the potential to impact how we live, create, think, and prosper. Understanding their individual and convergent impacts is essential to continued battlefield dominance in the Future Operational Environment. In accordance with The Operational Environment and the Changing Character of Future Warfare, we have divided this continuum into two distinct timeframes:
The Era of Accelerated Human Progress (Now through 2035):
The period where our adversaries can take advantage of new technologies, new doctrine, and revised strategic concepts to effectively challenge U.S. military forces across multiple domains. Game changers during this era include:
• Robotics: Forty-plus countries are developing military robots with some level of autonomy. Impact on society and employment. Vulnerable to: Cyber/Electromagnetic (EM) disruption, battery life, ethics without a man in the loop. Formats: Unmanned/autonomous ground/air/subsurface/sea systems. Nano-weapons. Examples: (Air) Hunter/killer Unmanned Aerial Vehicle (UAV) swarms; (Ground) Russian Uran: Recon, ATGMs, SAMs.
• Artificial Intelligence: Human-Agent Teaming, where humans and intelligent systems work together to achieve either a physical or mental task. The human and the intelligent system will trade-off cognitive and physical loads in a collaborative fashion.
• Swarms/Semi-Autonomous: Massed, coordinated, fast, collaborative, small, stand-off. Overwhelm target systems. Mass or disaggregate.
• Internet of Things (IoT): Trillions of internet-linked items create opportunities and vulnerabilities. Explosive growth in low Size, Weight, and Power (SWaP) connected devices (Internet of Battlefield Things), especially for sensor applications (situational awareness). Greater than 100 devices per human. Significant end-device processing (sensor analytics, sensor to shooter, supply chain management). Vulnerable to: Cyber/EM/power disruption. Privacy concerns regarding location and tracking. Sensor to shooter: Accelerates the kill chain, data processing, and decision-making.
• Space: Over 50 nations operate in space, which is increasingly congested and difficult to monitor, endangering Positioning, Navigation, and Timing (PNT).
GPS Jamming/Spoofing: Increasingly sophisticated, used successfully in Ukraine. Anti-Satellite: China has tested two direct ascent anti-satellite missiles.
The Era of Contested Equality (2035 through 2050):
The period marked by significant breakthroughs in technology and convergences in terms of capabilities, which lead to significant changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes which at the end of this timeframe may even challenge the very nature of warfare itself. Game changers during this era include:
• Hyper Velocity Weapons: Rail Guns (Electrodynamic Kinetic Energy Weapons): Electromagnetic projectile launchers with high velocity/energy (Mach 5 or higher). Not powered by explosives. No Propellant: Easier to store and handle. Lower Cost Projectiles: Potentially. Extreme G-forces require sturdy payloads. Limiting factors: Power, significant IR signature, materials science. Hyper Glide Vehicles: Less susceptible to anti-ballistic missile countermeasures.
• Directed Energy Weapons: Signature not visible without technology; must dwell on target. Power requirements currently problematic. Potential: Tunable, lethal, and non-lethal. Laser: Directed energy damages the intended target. Targets: Counter aircraft, UAS, missiles, projectiles, sensors, swarms. Radio Frequency (RF): Attacks targets across the frequency spectrum. Targets: Not just RF; microwave weapons “cook” targets: people, electronics.
• Synthetic Biology: Engineering/modification of biological entities. Increased Crop Yield: Potential to reduce food scarcity. Weaponization: Potential for micro-targeting; seek-and-destroy microbes that can target DNA. Potentially accessible to super-empowered individuals. Medical Advances: Enhance Soldier survivability. Genetic Modification: Disease resistance; potentially designer babies and super athletes/soldiers. Synthetic DNA stores digital data, which can be used for micro-targeting. CRISPR: Genome editing.
• Information Environment: Use IoT and sensors to harness the flow of information for situational understanding and decision-making advantage.
In envisioning Future Operational Environment possibilities, the Mad Scientist Initiative employs a number of techniques. We have found Crowdsourcing (i.e., the gathering of ideas, thoughts, and concepts from a wide variety of interested individuals) to be a particularly effective technique for diversifying thought and challenging conventional assumptions. To that end, we have published our latest, 2-page compendium of Potential Game Changers here — we would like to hear your feedback regarding them. Please let us know your thoughts / observations by posting them in this blog post’s Comment box (found below, in the Leave a Reply section). Alternatively, you can also submit them to us via email at: firstname.lastname@example.org. Thank you in advance for your contributions!
The Mad Scientist Initiative recently facilitated a workshop with thought leaders from across the Department of Defense, the Intelligence Community, other Government agencies, industry, and academia to address the unknown unknowns (i.e., Black Swans) and the known knowns (i.e., Pink Flamingos) and to synthesize cross-agency thinking about possible disruptions to the Future Operational Environment.
Black Swans: In Nassim Nicholas Taleb’s original context, a black swan (unknown unknown) is an event or situation which is unpredictable but has a major effect. For this conference, we used a looser definition, identifying possibilities that are not likely, but might have significant impacts on how we think about warfighting and security.
Pink Flamingos: Defined by Frank Hoffman, Pink Flamingos are the known knowns that are often discussed but ignored by Leaders trapped by organizational cultures and rigid bureaucratic decision-making structures. Peter Schwartz further describes Pink Flamingos as the “inevitable surprise.” Digital photography was a pink flamingo to Kodak.
At the workshop, attendees identified the following Black Swans:
• Naturally Occurring Disaster: These events (e.g., a Carrington Event solar flare frying solid-state electronics, super volcano eruptions, earthquake swarms) would have an enormous impact on the Army and its ability to continue to operate, defend the nation, and support national recovery operations. While warning times have increased for many of these events, only limited measures can be implemented to mitigate their devastating effects.
• Virtual Nations: While the primacy of Westphalian borders has been challenged and the power of traditional nation-states has been waning over the last decade, some political scientists have assumed that supranational organizations and non-state actors would take their place. One potential black swan is the emergence of virtual nations due to the convergence of blockchain technologies, crypto-currency, and the ability to project power and legitimacy through the virtual world. Virtual nations could be organized based on ideologies, business models, or single interests. Virtual nations could supersede, supplement, or compete with traditional, physical nations. The Army of the future may not be prepared to interact and compete with virtual nations.
• Competition in Venues Other than Warfare (Economic, Technological, Demographic, etc.) Achieving Primacy: In the near future, war in the traditional sense may be less prevalent, while competitions in other areas may be the driving forces behind national oppositions. How does the Army need to prepare for an eventuality where armed conflict is not as important as it once was?
• Alternate Internet — “Alternet”: A distinct entity, separate from the general commercial internet, only accessible with specific corresponding hardware. This technology would allow for unregulated and unmonitored communication and commerce, potentially granting safe haven to criminal and terrorist activities.
At the workshop, attendees identified the following Pink Flamingos:
• Safe at Home: Army installations are no longer the sanctuaries they once were, as adversaries will be able to attack Soldiers and families through social media and other cyberspace means. Additionally, installations no longer merely house, train, and deploy Soldiers; unmanned combat systems are controlled from home installations, a trend in virtual power that will only increase in the future. The Army needs a plan to harden our installations and to train Soldiers and families to be resilient for this eventuality.
• Hypersonics: High speed (Mach 5 or higher) and highly maneuverable missiles or glide vehicles that can defeat our air defense systems. The speed of these weapons is unmatched and their maneuverability allows them to keep their targets unknown until only seconds before impact, negating current countermeasures.
• Generalized, Operationalized Artificial Intelligence (AI): Artificial intelligence is one of the most prominent pink flamingos throughout global media and governments. Narrow artificial intelligence is being addressed as rapidly as possible through ventures such as Project MAVEN. However, generalized and operationalized artificial intelligence – that can think, contextualize, and operate like a human – has the potential to disrupt not only operations, but also the military at its very core and foundation.
• Space/Counterspace: Space is becoming increasingly congested, commercialized, and democratized. Disruption, degradation, and denial in space threatens to cripple multi-domain warfare operations. States and non-state actors alike are exploring options to counter one another, compete, and potentially even fight in space.
• Quantum Sciences: Quantum science – communication, computing, and sensing – has the potential to solve some intractable but very specific problem sets. Quantum technology remains in its infancy. However, as the growth of qubits in quantum computing continues to expand, so does the potentiality of traditional encryption being utterly broken. Quantum sensing can allow for much more precise atomic clocks surpassing the precision timing of GPS, as well as quantum imaging that provides better results than classical imaging in a variety of wavelengths.
• Bioweapons/Biohacking: The democratization of bio technology will mean that super-empowered individuals as well as nation states will have the ability to engineer weapons and hacks that can augment friendly human forces or target and degrade enemy human forces (e.g., targeted disease or genetic modifications).
• Personalized Warfare: Warfare is now waged on a personal level, where adversaries can attack the bank accounts of Soldiers’ families, infiltrate their social media, or even target them specifically by their genetics. The Army needs to understand that the individual Soldier can be exploited in many different ways, often through information publicly provided or stolen.
• Deep Fakes/Information Warfare: Information warfare and “fake news” have played a prominent role in global politics over the last several years and could dominate the relationship between societies, governments, politicians, and militaries in the future operational environment. Information operations, thanks to big data and humanity’s ever-growing digital presence, are targeted at an extremely personal and specific level. One of the more concerning aspects of this is an artificial intelligence-based human image/voice synthesis technique known as deep fakes. Deep fakes can essentially put words in the mouths of prominent or trusted politicians and celebrities.
• Multi-Domain Swarming: Swarming is often thought about in terms of unmanned aerial systems (UAS), but one significant pink flamingo is swarming taking place across multiple domains with self-organizing, autonomous aerial, ground, maritime (sub and surface), and even subterranean unmanned systems. U.S. defense systems on a linear modernization and development model will not be capable of dealing with the saturation and complexity issues arising from these multi-domain swarms.
• Lethal Autonomy: An autonomous system with the ability to track, target, and fire without the supervision or authority of a human in or on the loop. The U.S. Army will have to examine its own policies regarding these issues, as well as those of our adversaries, who may be less deterred by ethical/policy concerns.
• Tactical Nuclear Exchange: While strategic nuclear war and mutually assured destruction have been discussed and addressed ad nauseam, not enough attention has been given to the potential of a tactical nuclear exchange between state actors. One tactical nuclear attack, while not guaranteeing a nuclear holocaust, would bring about a myriad of problems for U.S. forces worldwide (e.g., the potential for escalation, fallout, contamination of water and air, and disaster response). Additionally, a high altitude nuclear burst’s electromagnetic pulse has the potential to fry solid state electronics across a wide-area, with devastating results to the affected nation’s electrical grid, essential government services, and food distribution networks.
Leaders must anticipate these future possibilities in determining the character of future conflicts and in force design and equipping decisions. Using a mental model of black swans and pink flamingos provides a helpful framework for assessing the risks associated with these decisions.