104. Critical Thinking: The Neglected Skill Required to Win Future Conflicts

[Editor’s Note: As addressed in last week’s post, entitled The Human Targeting Solution: An AI Story, the incorporation of Artificial Intelligence (AI) as a warfighting capability has the potential to revolutionize combat, accelerating the future fight to machine speeds.  That said, the advanced algorithms underpinning these AI combat multipliers remain dependent on the accuracy and currency of their data feeds. In the aforementioned post, the protagonist’s challenge in overriding the AI-prescribed optimal (yet flawed) targeting solution illustrates the inherent tension between human critical thinking and the benefits of AI.

Today’s guest blog post, submitted by MAJ Cynthia Dehne, expands upon this theme, addressing human critical thinking as the often neglected, yet essential skill required to successfully integrate and employ emergent technologies while simultaneously understanding their limitations on future battlefields.  Warfare will remain an intrinsically human endeavor, the fusion of deliberate and calculating human intellect with ever more lethal technological advances.]

The future character of war will be influenced by emerging technologies such as AI, robotics, computing, and synthetic biology. Cutting-edge technologies will become increasingly cheap and readily available, introducing a wider range of actors onto the battlefield. Moreover, nation-states are no longer the sole drivers of cutting-edge technology; militaries now leverage a private sector that leads research and development in emergent technologies. Proliferation of these cheap, accessible technologies will allow both peer competitors and non-state actors to pose serious threats in the future operational environment. Given the abundance of new players on the battlefield and the emerging technologies they wield, future conflicts will be won by those who both possess critical thinking skills and can integrate technology seamlessly to inform decision-making in war, rather than relying on technology itself to win the war. Achieving success in the future eras of accelerated human progress and contested equality will require the U.S. Army to develop Soldiers who are adept at employing technology on the battlefield while continuously exercising critical thinking skills.

The Foundation for Critical Thinking defines critical thinking as “the art of analyzing and evaluating thinking with a view to improving it.”1 Furthermore, they assert that a well-cultivated critical thinker can do the following: raise vital questions and problems and formulate them clearly and precisely; gather and assess relevant information, using abstract ideas to interpret it effectively; come to well-reasoned conclusions and solutions, testing them against relevant criteria and standards; think open-mindedly within alternative systems of thought, recognizing and assessing, as needed, their assumptions, implications, and practical consequences; and communicate effectively with others in figuring out solutions to complex problems.2

Many experts in education and psychology argue that critical thinking skills are declining. In 2017, Dr. Stephen Camarata wrote about the emerging crisis in critical thinking and college students’ struggles to tackle real-world problem solving. He emphasized the essential need for critical thinking and asserted that “a young adult whose brain has been ‘wired’ to be innovative, think critically, and problem solve is at a tremendous competitive advantage in today’s increasingly complex and competitive world.”3 Although most government agencies, policy makers, and businesses deem critical thinking important, STEM fields continue to be prioritized. However, unless creative thinking skills are fused with STEM, the pool of those equipped with well-rounded critical thinking abilities will continue to shrink. In 2017, Mark Cuban opined during an interview with Bloomberg TV that the nature of work is changing and that the skill most in demand in the future will be “creative thinking.” Specifically, he stated, “I personally think there’s going to be a greater demand in 10 years for liberal arts majors than there were for programming majors and maybe even engineering.”4 Additionally, Forbes magazine published an article in 2018 declaring that “creativity is the skill of the future.”5

Employing future technologies effectively will be key to winning war, but it is only one aspect. During the Vietnam War, the U.S. relied heavily on technology but was defeated by an enemy who leveraged simple guerilla tactics combined with minimal military technology. Emerging technologies will be vital to informing decision-making, but they will not negate battlefield friction. Carl von Clausewitz observed that although everything in war is simple, the simplest things become difficult, and these difficulties accumulate to create friction.6 Historically, a lack of information caused friction and uncertainty. In current warfare, however, complexity is a driver of friction, and it will heavily influence future warfare. Complex, high-tech weapon systems will dominate the future battlefield and create added friction. Interdependent systems linking communications and warfighting functions will introduce still more friction, which will require highly skilled thinkers to navigate.

The newly published U.S. Army in Multi-Domain Operations 2028 concept “describes how Army forces fight across all domains, the electromagnetic spectrum (EMS), and the information environment and at echelon”7 to “enable the Joint Force to compete with China and Russia below armed conflict, penetrate and dis-integrate their anti-access and area denial systems and ultimately defeat them in armed conflict and consolidate gains, and then return to competition.”8 Even with technological advances and improved intelligence, elements of friction will be present in future wars. Both great armies and asymmetric threats have vulnerabilities: small points of friction can morph into larger issues capable of crippling a fighting force. Therefore, success in future war depends on military commanders who understand these elements and how to overcome friction. Future technologies must be fused with critical thinking to mitigate friction and achieve strategic success. The U.S. Army must simultaneously emphasize integrating critical thinking into doctrine and exercises when training Soldiers on new technologies.

Soldiers should be creative, innovative thinkers; the Army must foster critical thinking as an essential skill. The Insight Assessment emphasizes that “weakness in critical thinking skill results in loss of opportunities, of financial resources, of relationships, and even loss of life. There is probably no other attribute more worthy of measure than critical thinking skills.”9 Gaining and maintaining competitive advantage over adversaries in a complex, fluid future operational environment requires Soldiers who are both skilled in technology and expert critical thinkers.

If you enjoyed this post, please also see:

Mr. Chris Taylor’s presentation on Problem Solving in the Wild, from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018;

and the following Mad Scientist Laboratory blog posts:

TRADOC 2028

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

MAJ Cynthia Dehne is in the U.S. Army Reserve, assigned to the TRADOC G-2, and has operational experience in Afghanistan, Iraq, Kuwait, and Qatar. She is a graduate of the U.S. Army Command and General Staff College and holds master’s degrees in International Relations and in Diplomacy and International Commerce.


1 Paul, Richard, and Elder, Linda. Critical Thinking Concepts and Tools. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

2 Paul, Richard, and Elder, Linda. Critical Thinking Concepts and Tools. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

3 Camarata, Stephen. “The Emerging Crisis in Critical Thinking.” Psychology Today, March 21, 2017. Accessed October 10, 2018. https://www.psychologytoday.com/us/blog/the-intuitive-parent/201703/the-emerging-crisis-in-critical-thinking.

4 Wile, Rob. “Mark Cuban Says This Will Be the No.1 Job Skill in 10 Years.” Time, February 20, 2017. Accessed October 11, 2018. http://time.com/money/4676298/mark-cuban-best-job-skill/.

5 Powers, Anna. “Creativity Is The Skill Of The Future.” Forbes, April 30, 2018. Accessed October 14, 2018. https://www.forbes.com/sites/annapowers/2018/04/30/creativity-is-the-skill-of-the-future/#3dd533f04fd4.

6 Clausewitz, Carl von, Michael Howard, Peter Paret, and Bernard Brodie. On War. Princeton, N.J.: Princeton University Press, 1984, p. 119.

7 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 5.

8 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 15.

9 Insight Assessment. “Risks Associated with Weak Critical Thinkers.” Insight Assessment, 2018. Accessed October 22, 2018. https://www.insightassessment.com/Uses/Risks-Associated-with-Weak-Critical-Thinkers.

99. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our October edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. Table of Disruptive Technologies, by Tech Foresight, Imperial College London, www.imperialtechforesight.com, January 2018.

This innovative Table of Disruptive Technologies, derived from Chemistry’s familiar Periodic Table, lists 100 technological innovations organized into a two-dimensional table, with the x-axis representing Time (Sooner to Later) and the y-axis representing the Potential for Socio-Economic Disruption (Low to High). These technologies are organized into three time horizons, with Current (Horizon 1 – Green) happening now, Near Future (Horizon 2 – Yellow) occurring in 10-20 years, and Distant Future (Horizon 3 – Fuchsia) occurring 20+ years out. The outermost band of Ghost Technologies (Grey) represents fringe science and technologies that, while highly improbable, still remain within the realm of the possible and thus are “worth watching.” In addition to the time horizons, each of these technologies has been assigned a number corresponding to an example listed to the right of the Table; and a two-letter code corresponding to five broad themes: DE – Data Ecosystems, SP – Smart Planet, EA – Extreme Automation, HA – Human Augmentation, and MI – Human Machine Interactions. Regular readers of the Mad Scientist Laboratory will find many of these Potential Game Changers familiar, albeit assigned to far more conservative time horizons (e.g., our community of action believes Swarm Robotics [Sr, number 38], Quantum Safe Cryptography [Qs, number 77], and Battlefield Robots [Br, number 84] will all be upon us well before 2038). That said, we find this Table to be a useful tool in exploring future possibilities and will add it to our “basic load” of disruptive technology references, joining the annual Gartner Hype Cycle for Emerging Technologies.

2. The inventor of the web says the internet is broken — but he has a plan to fix it, by Elizabeth Schulze, CNBC.com, 5 November 2018.

Tim Berners-Lee, who created the World Wide Web in 1989, has said recently that he thinks his original vision is being distorted due to concerns about privacy, access, and fake news. Berners-Lee envisioned the web as a place that is free, open, and constructive, and for most of his invention’s life, he believed that to be true. However, he now feels that the web has undergone a change for the worse. He believes the World Wide Web should be a protected basic human right. In order to accomplish this, he has created the “Contract for the Web” which contains his principles to protect web access and privacy. Berners-Lee’s “World Wide Web Foundation estimates that 1.5 billion… people live in a country with no comprehensive law on personal data protection. The contract requires governments to treat privacy as a fundamental human right, an idea increasingly backed by big tech leaders like Apple CEO Tim Cook and Microsoft CEO Satya Nadella.” This idea for a free and open web stands in contrast to recent news about China and Russia potentially branching off from the main internet and forming their own filtered and censored Alternative Internet, or Alternet, with tightly controlled access. Berners-Lee’s contract aims at unifying all users under one over-arching rule of law, but without China and Russia, we will likely have a splintered and non-uniform Web that sees only an increase in fake news, manipulation, privacy concerns, and lack of access.

3. Chinese ‘gait recognition’ tech IDs people by how they walk, Associated Press News, 6 November 2018.


The Future Operational Environment’s “Era of Contested Equality” (i.e., 2035 through 2050) will be marked by significant breakthroughs in technology and convergences, resulting in revolutionary changes. Under President Xi Jinping‘s leadership, China is becoming a major engine of global innovation, second only to the United States. China’s national strategy of “innovation-driven development” places innovation at the forefront of economic and military development.

Early innovation successes in artificial intelligence, sensors, robotics, and biometrics are being fielded to better control the Chinese population. Many of these capabilities will be inserted into Chinese command and control functions and intelligence, surveillance, and reconnaissance networks, redefining the timeless competition of finders vs. hiders. These breakthroughs represent homegrown Chinese innovation and are taking place now.

A recent example is the employment of ‘gait recognition’ software capable of identifying people by how they walk. Watrix, a Chinese technology startup, is selling the software to police services in Beijing and Shanghai as part of a broader push to develop an artificial intelligence-based, data-driven surveillance network. Watrix reports the capability can identify people up to 165 feet away without a view of their faces. It also fills the sensor gap where the high-resolution imagery required for facial recognition software is unavailable.

4. VR Boosts Workouts by Unexpectedly Reducing Pain During Exercise, by Emma Betuel, Inverse.com, 4 October 2018.

Tricking the brain can be fairly low tech, according to Dr. Alexis Mauger, senior lecturer at the University of Kent’s School of Sport and Exercise Sciences. Research has shown that students who participated in a Virtual Reality-based exercise were able to withstand pain a full minute longer, on average, than their control-group counterparts. Dr. Mauger hypothesized that this may be due to a lack of the visual cues normally associated with strenuous exercise. In the study, participants were asked to hold a dumbbell out in front of them for as long as they could. The VR group didn’t see their forearms shake with exhaustion or their hands flush with color as blood rushed to their aching biceps; that is, they didn’t see the stimuli that could be perceived as signals of pain and exertion. These results could have a significant and direct impact on Army training. While experiencing pain and learning through negative outcomes is essential in certain training scenarios, VR could be used to train Soldiers past the point where they would normally be physically able to continue. This could not only save the Army time and money but also boost training effectiveness by capturing gains normally left at the margins.

5. How Teaching AI to be Curious Helps Machines Learn for Themselves, by James Vincent, The Verge, 1 November 2018, Reviewed by Ms. Marie Murphy.

Presently, there are two predominant techniques for machine learning: machines analyzing large sets of data from which they extrapolate patterns and apply them to analogous scenarios; and giving the machine a dynamic environment in which it is rewarded for positive outcomes and penalized for negative ones, facilitating learning through trial and error.

In programmed curiosity, the machine is innately motivated to “explore for exploration’s sake.” The example used to illustrate learning through curiosity is a project by the research lab OpenAI, in which an agent learns to win a video game where the reward comes not only from staying alive but also from exploring all areas of the level. This method has yielded better results than the data-heavy and time-consuming traditional methods. Applying this methodology to machine learning in military training scenarios would reduce the human labor required to identify and program every possible outcome, because the computer finds new ones on its own, reducing the time between development and implementation of a program. This approach is also more “humanistic,” as it allows the computer leeway to explore its virtual surroundings and discover new avenues as people do. By training AI in this way, the military can more realistically model various scenarios for training and strategic purposes.
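The curiosity idea can be sketched with a toy example. The following is our own minimal illustration, not OpenAI’s actual method (which uses a learned prediction-error bonus): an agent on a one-dimensional grid gets a count-based novelty signal and prefers the neighbor it has visited least, which drives it to sweep the whole space far faster than undirected wandering.

```python
import random

def explore(steps=200, size=20, curious=True, seed=0):
    """Walk a 1-D grid of `size` states and return the fraction visited.

    With curious=True the agent prefers the neighbor it has visited
    least: a count-based novelty bonus standing in for the learned
    prediction-error reward used in curiosity-driven RL. With
    curious=False it wanders at random, with no exploration incentive.
    """
    rng = random.Random(seed)
    visits = {}
    pos = size // 2
    visits[pos] = 1
    for _ in range(steps):
        left, right = max(pos - 1, 0), min(pos + 1, size - 1)
        n_left, n_right = visits.get(left, 0), visits.get(right, 0)
        if curious and n_left != n_right:
            # Novelty-seeking: move toward the less-visited state.
            pos = left if n_left < n_right else right
        else:
            pos = rng.choice([left, right])
        visits[pos] = visits.get(pos, 0) + 1
    return len(visits) / size
```

With the same step budget, the novelty-seeking walker sweeps the grid systematically, while the undirected walker typically leaves states unvisited; that gap is the payoff of rewarding exploration for its own sake.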

6. EU digital tax plan flounders as states ready national moves, by Francesco Guarascio, Reuters.com, 6 November 2018.

A European Union plan to tax internet firms like Google and Facebook on their turnover is on the verge of collapsing. As the plan must be agreed to by all 28 EU countries (a tall order given that it is opposed by a number of them), the EU is announcing national initiatives instead. The proposal calls for EU states to charge a 3 percent levy on the digital revenues of large firms. The plan aims to change tax rules that have let some of the world’s biggest companies pay unusually low rates of corporate tax on their earnings. These firms, mostly from the U.S., are accused of avoiding tax by routing their profits to the bloc’s low-tax states.

This is not just about taxation; it is about the issue of citizenship itself. What does it mean for virtual nations – cyber communities which have gained power, influence, or capital comparable to that of a nation-state – to fall outside the traditional rule of law? The legal framework of virtual citizenship turns upside down and globalizes the logic of the special economic zone: a geographical space of exception, where the usual rules of state and finance do not apply. How will these entities be taxed, or how will they declare revenue?

Currently, for the online world, geography and physical infrastructure remain crucial to control and management. What happens when the web is democratized and virtualized, and control and management change accordingly? Google and Facebook still build data centers in Scandinavia and the Pacific Northwest, close to cheap hydroelectric power and natural cooling. When viewed in terms of who the citizen is, population movement, and stateless populations, what will the “new normal” be?

7. Designer babies aren’t futuristic. They’re already here, by Laura Hercher, MIT Technology Review, 22 October 2018.

In this article, subtitled “Are we designing inequality into our genes?” Ms. Hercher echoes what proclaimed Mad Scientist Hank Greely briefed at the Bio Convergence and Soldier 2050 Conference last March – advances in human genetics will be applied initially to have healthier babies via genetic sequencing and the testing of embryos. Embryo editing will enable us to tailor and modify embryos for desired traits, initially to treat diseases, but this will also provide us with the tools to enhance humans genetically. Ms. Hercher warns us that “If the use of pre-implantation testing grows and we don’t address the disparities in who can access these treatments, we risk creating a society where some groups, because of culture or geography or poverty, bear a greater burden of genetic disease.” A valid concern, to be sure — but who will ensure fair access to these treatments? A new Government agency? And if so, how long after ceding this authority to the Government would we see politically expedient changes enacted, justified as being for the betterment of society yet potentially perverting the policy’s original intent? The possibilities need not be as horrific as Aldous Huxley’s Brave New World, populated with castes of Deltas and Epsilon-minus semi-morons. It is not inconceivable that enhanced combat performance via genetic manipulation could follow, resulting in a permanent caste of warfighters, genetically distinct from their fellow citizens, with the associated societal implications.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

95. Takeaways Learned about the Future of the AI Battlefield

On 10 October 2018, the U.S. Army Training and Doctrine Command (TRADOC) G-2’s Mad Scientist Initiative launched a crowdsourcing exercise to explore the possibilities and impacts of Artificial Intelligence (AI) on the future battlefield. For good reason, much has been made of AI and Machine Learning (ML) and their use in enabling lethal autonomy on the battlefield. While this is an important topic, AI’s potential application is much broader, enabling future warfare at machine speed and disrupting human-centered battlefield rhythms.

Mad Scientist received submissions from approximately 115 participants, affiliated with military units, Government agencies, private tech companies, academia, and a number of non-DoD/Government associated sources. These submissions were diverse and rich in depth, clarity, and quality.  We distilled them into the following eight cross-cutting takeaways impacting every aspect of the future battlefield:

  • Invisible AI:

AI will be so pervasive across the battlefield that most of its functions and processes will take place without warfighters and commanders noticing. There won’t be an On/Off button per se, similar to cellular service, smart device functions, or cyber operations. The wide proliferation of AI entities from devices to platforms to even wearables means it will not be an isolated domain, but rather will permeate ubiquitously and seamlessly across the battlefield.

  • Speed it Up:

AI will not only speed up existing processes and cycles – i.e., the military decision-making process (MDMP), the intelligence cycle, the targeting cycle – but it will also likely transform them. Many of these cycles and processes have evolved and proven their effectiveness in a human-centric environment. Some contain consecutive steps that may no longer be necessary when tasks are assigned to intelligent machines. Critical, time-sensitive, but often tedious work that is carried out by hundreds of military staff members in many hours could be accomplished in minutes by AI, leading to flattened command structures, smaller staffs, and significant demand and signature reduction on the battlefield. All of this will result in battlefield optimization and will induce hyperactivity in combat – rapidly changing battlefield rhythms.

  • Coup d’œil / Freeing up Warfighters and Commanders:

AI intelligence systems and entities conducting machine speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. Commanders can focus on the battle with coup d’œil, or the “stroke of an eye,” maintaining situational awareness without consuming precious time crunching data. Additionally, AI’s ability to quickly sift through and analyze the plethora of input received from across the battlefield, fused with the lessons learned data from thousands of previous engagements, will lessen the commander’s dependence on having had direct personal combat experience with conditions similar to his current fight when making command decisions.

  • Spectrum Management and Common Operational Picture (COP):

The future battlefield will be increasingly complex with the cyber, air, and space domains, as well as electromagnetic spectrum becoming difficult to see, manage, and deconflict. Exacerbating this problem is the enormous growth of the Internet of Things – eventually the Internet of Everything – and even more importantly, the Internet of Battlefield Things. AI will be critical in processing and sustaining a clear COP in this overwhelmingly data-rich environment of sensors, emitters, systems, and networks.

  • Learning Things and Collaborative Entities:

AI will facilitate a host of new learning things on the battlefield – i.e., weapon systems, munitions, vehicles, wearables [exo-skeletons] – and a multitude of collaborative entities – sensors, systems, and platforms. This battlespace of learning things will not supplant our need for the Soldiers who use and operate them, but it will enhance those Soldiers as Warfighters.

  • Resilient and Layered AI:

In order to effectively utilize AI across the battlefield, the Army will need resilient and layered AI, including on-board services, localized collaborative systems, and cloud services that do not rely on persistent connectivity. Some AI entities will need to be proliferated at the tactical level, creating a veritable network that can still effectively operate with degraded/disrupted nodes.
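The layering described above can be illustrated with a small sketch. The service names and handlers here are hypothetical, purely to show the fallback pattern: try the cloud layer first, then a localized collaborative node, then on-board processing, so that a degraded or disconnected layer never halts the system.

```python
def layered_query(query, layers):
    """Try each AI layer in priority order and fall back on failure.

    `layers` is a list of (name, handler) pairs, e.g. cloud, then a
    localized collaborative node, then on-board processing. A handler
    raises ConnectionError when its node is degraded or disconnected.
    """
    for name, handler in layers:
        try:
            return name, handler(query)
        except ConnectionError:
            continue  # node degraded/disrupted: fall back to the next layer
    raise RuntimeError("all AI layers unavailable")


# Hypothetical handlers: the cloud link is down, so the query
# falls through to the on-board model.
def cloud(query):
    raise ConnectionError("no persistent connectivity")

def onboard(query):
    return f"on-board estimate for {query!r}"

print(layered_query("sensor fusion", [("cloud", cloud), ("on-board", onboard)]))
```

The design choice is that degradation is handled locally and silently: the caller always gets an answer from the best layer currently reachable, which is what lets a tactical network keep operating with disrupted nodes.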

  • New Required Capabilities and Skillsets:

The advent of AI across the battlefield will require a multitude of new capabilities and skillsets to implement, maintain, and maximize AI entities. As with the contemporary drive to recruit Cyber talent into the ranks, the Army must plan on competing with the private sector for the most talented and capable recruits in new AI job fields.

  • Adversarial Risk:

A capability/vulnerability paradox is inherent in AI: its machine-speed capabilities depend on the vast array of data input sources it needs to operate, and that underpinning data, along with the algorithms themselves, is vulnerable to spoofing, degradation, or other forms of subversion. This could erode Soldier and Leader trust in AI, and it necessitates greater transparency to strengthen the man-machine relationship. Enemies will seek to exploit this relationship and trust.

Conclusion:

These takeaways illustrate the many ways AI can be implemented across the future battlefield. Machine speed warfare will be enabled by AI; it will not be limited to just lethal autonomy. The functions of so many other parts of combat – C2, ISR, sustainment, medical, etc. – can be accelerated and improved; not just “warheads on foreheads.”

While we explored where AI could enhance battlefield operations, there are also implicit considerations that must be accounted for in the future. These include the ethical dilemmas and concerns associated with employing AI in so many different ways. Lethal autonomy is a hot button issue due to its life or death implications. However, AI assisting other warfighter functions will also have significant impacts on the battlefield.

A second major consideration is what impact AI has on Army learning and training. The Army will not only have to incorporate the subject of AI in its learning but will also utilize AI in its learning. Additionally, AI will be required to support Field Training Exercises and other major training events to work through all of the second and third order effects resulting from a much more compressed battle rhythm.

Mad Scientist is extremely appreciative of all the feedback and submissions received. We intend for this product to be used in future wargaming events, future horizon scanning, and the general framing of future thinking and planning for the development and use of AI systems and entities.

If you enjoyed this blog post, please read the entire Crowdsourcing the Future of the AI Battlefield paper, including the highlights of ideas binned into categories supporting upcoming Army Wargames on Intelligence, Surveillance, and Reconnaissance; Logistics; and Command and Control.

92. Ground Warfare in 2050: How It Might Look

[Editor’s Note: Mad Scientist Laboratory is pleased to review proclaimed Mad Scientist Dr. Alexander Kott’s paper, Ground Warfare in 2050: How It Might Look, published by the US Army Research Laboratory in August 2018. This paper offers readers a technological forecast of autonomous intelligent agents and robots and their potential employment on future battlefields in the year 2050. In this post, Mad Scientist reviews Dr. Kott’s conclusions and provides links to our previously published posts that support his findings.]

In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future. They are:

•  The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight will feature prominently on future battlefields between autonomous sensors and countermeasures – “a robot-on-robot affair.”

See our related discussions regarding Concealment in the Fundamental Questions Affecting Army Modernization post and Finders vs Hiders in our Timeless Competitions post.

•  The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.

See our descriptions of the future recon / strike complex in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post, and Robotics and Swarms / Semi Autonomous capabilities in our Potential Game Changers post.

These two trends will, in turn, drive the following forecasted developments:

•  Increasing reliance on unmanned systems, “with humans becoming a minority within the overall force, being further dispersed across the battlefield.”

See Mr. Jeff Becker’s post on The Multi-Domain “Dragoon” Squad: A Hyper-enabled Combat System, and Mr. Mike Matson’s Demons in the Tall Grass, both of which envision future tactical units employing greater numbers of autonomous combat systems; as well as Mr. Sam Bendett’s post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward, addressing the contemporary hurdles that one of our strategic competitors must address in operationalizing Unmanned Ground Vehicles.

•  Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”

See our discussion of what warfare at machine-speed looks like in our Advanced Engagement Battlespace and the “Hyperactive Battlefield”.


•  Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.

See our Megacities: Future Challenges and Responses and Integrated Sensors: The Critical Element in Future Complex Environment Warfare posts that address future complex operational environments.

Source: www.defenceimages.mod.uk

•  The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”

See our discussion of future Power capabilities on our Potential Game Changers handout.

•  “To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”

See our discussion of Multi-Domain Swarming in our Black Swans and Pink Flamingos post.

•  All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”

See MAJ Chris Telley’s post addressing Artificial Intelligence (AI) as an Information Operations tool in his Influence at Machine Speed: The Coming of AI-Powered Propaganda.

•  The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.

See Mr. Ian Sullivan’s discussion of AI vs. AI and how the decisive edge accrues to the combatant with more autonomous decision-action concurrency in his Lessons Learned in Assessing the Operational Environment post.

If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.

Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role, he provides leadership in the development of ARL’s technical strategy, maintains the technical quality of ARL research, and represents ARL to the external technical community. He has published over 80 technical papers and served as the initiator, co-author, and primary editor of more than ten books, including most recently Cyber Defense and Situational Awareness (2015) and Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).

90. “The Tenth Man” — War’s Changing Nature in an AI World

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish yet another in our series of “The Tenth Man” posts (read our previous posts here and here). This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging.  The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Future Operational Environment. Today’s post is by guest blogger Dr. Peter Layton, challenging the commonly held belief of the persistent and abiding nature of war.]

There’s a debate underway about the nature of war. Some say it’s immutable; others say hogwash; ironically, both sides quote Clausewitz for support.[i] Interestingly, Secretary of Defense Mattis, once an ‘immutable’ defender, has now declared he’s not sure anymore, given recent Artificial Intelligence (AI) developments.[ii]

 

At the core of the immutable case is the belief that war has always been violent, chaotic, destructive, and murderous – and will thus always be so. Buried within this is the view that wars are won by infantry occupying territory; as Admiral Wylie opined, “the ultimate determinant in war is a man on the scene with a gun.”[iii] It is the clash of infantry forces that is decisive, with both sides experiencing the deadly violence of war in a manner that would have been comprehensible to Athenian hoplites 2,500 years ago.

Technology, though, really has changed this. Firstly, the lethality of modern weapons has emptied out the battlefield.[iv] What can be ‘seen’ by sensors of diverse types can be targeted by increasingly precise direct and indirect fires. The Russo-Ukraine war in the Donbas hints that in future wars between state-based military forces, tactical units will need to remain unseen to survive, and that they will now ‘occupy’ territory principally through long-range firepower.[v] Secondly, Phillip Meilinger makes a strong case that drone crews firing missiles at insurgents from 3,000 miles away, or navies blockading countries and starving their people into submission, do not experience war as those hoplite infantry did.[vi] The experience of violence in some wars has become one-sided, while wars are now increasingly waged against civilians well behind any defensive front lines.

Source: Griffith Asia Institute

AI may deepen both trends. AI has the potential to sharply enhance the defense, continuing to empty out the battlefield and turning it into a no-man’s zone where automated systems and semi-autonomous devices wage attrition warfare.[vii] If both sides have intelligent machines, war may become simply a case of machines being violent to other machines. In a re-run of World War One, strategic stalemate would seem the likely outcome, with neither side able to win meaningful battlefield victories.[viii]

If so, the second aspect of war’s changing nature comes into play. If a nation’s borders cannot be penetrated and its critical centers of gravity attacked using kinetic means, perhaps non-kinetic means are the offensive style of the future.  Indeed, World War One’s battlefield stalemate was resolved as the naval blockade caused significant civilian starvation and the collapse of the homefront.

The application of information warfare by strategic competitors against the US political system hints at new cyber techniques that AI may greatly enhance.[ix] Instead of destroying another’s capabilities and national infrastructures, they might be exploited and used as bearers to spread confusion and dissent amongst the populace. In this century, starvation may not be necessary to collapse the homefront; AI may offer more efficacious methods. War may no longer be violent and murderous, but it may still be, as Clausewitz wrote, a “true political instrument.”[x] Secretary Mattis may be right; perhaps war’s nature is not immutable, but rather ripe for our disruption and innovation.

If you enjoyed this guest post, please also read proclaimed Mad Scientist Dr. Lydia Kostopoulos’ paper addressing this topic, entitled War is Having an Identity Crisis, hosted by our colleagues at Small Wars Journal.

Dr. Peter Layton is a Visiting Fellow at the Griffith Asia Institute, Griffith University. A former RAAF Group Captain, he has extensive defense experience, including in the Pentagon and at National Defense University. He holds a doctorate in grand strategy. He is the author of the book ‘Grand Strategy.’

 


[i] For the immutable, see Rob Taber (2018), Character vs. Nature of Warfare: What We Can Learn (Again) from Clausewitz, Mad Scientist Laboratory, 27 August 2018.  For the mutable, see Phillip S. Meilinger (2010), The Mutable Nature of War, Air & Space Power Journal, Winter 2010, pp 25-28. For Clausewitz (both sides), see Dr. A.J. Echevarria II (2012), Clausewitz and Contemporary War: The Debate over War’s Nature, 2nd Annual Terrorism & Global Security Conference 2012.

[ii] Aaron Mehta (2018), AI makes Mattis question ‘fundamental’ beliefs about war, C4ISRNET, 17 February 2018.

[iii] J.C. Wylie (1967), Military Strategy: A General Theory of Power Control, New Brunswick, Rutgers University Press, p. 85.

[iv] James J Schneider (1987), The theory of the empty battlefield, The RUSI Journal, Vol. 132, Issue 3, pp. 37-44.

[v] Brandon Morgan (2018), Artillery in Tomorrow’s Battlefield: Maximizing the Power of the King of Battle, Modern War Institute, 25 September 2018.

[vi] The Mutable Nature of War: The Author Replies, Air & Space Power Journal, Summer 2011, pp 21-22.  And also: Phillip S. Meilinger (2010), The Mutable Nature of War, Air & Space Power Journal, Winter 2010, pp 25-28.

[vii] Peter Layton (2018), Our New Model Robot Armies, Small Wars Journal, 7 August 2018.

[viii] Peter Layton (2018), Algorithm Warfare: Applying Artificial Intelligence to Warfighting, Canberra: Air Power Development Centre, pp. 31-32.

[ix] Renee Diresta (2018), The Information War Is On. Are We Ready For It? , Wired, 3 August.

[x] Carl Von Clausewitz, On War, Edited and Translated by Michael Howard and Peter Paret (1984), Princeton: Princeton University Press, p.87.

87. LikeWar — The Weaponization of Social Media

[Editor’s Note: Regular readers will note that one of our enduring themes is the Internet’s emergence as a central disruptive innovation. With the publication of proclaimed Mad Scientist P.W. Singer and co-author Emerson T. Brooking’s LikeWar – The Weaponization of Social Media, Mad Scientist Laboratory addresses what is arguably the most powerful manifestation of the internet — Social Media — and how it is inextricably linked to the future of warfare. Messrs. Singer and Brooking’s new book is essential reading if today’s Leaders (both in and out of uniform) are to understand, defend against, and ultimately wield the non-kinetic, yet violently manipulative effects of Social Media.]

“The modern internet is not just a network, but an ecosystem of 4 billion souls…. Those who can manipulate this swirling tide, steer its direction and flow, can…. accomplish astonishing evil. They can foment violence, stoke hate, sow falsehoods, incite wars, and even erode the pillars of democracy itself.”

As noted in The Operational Environment and the Changing Character of Future Warfare, Social Media and the Internet of Things have spawned a revolution that has connected “all aspects of human engagement where cognition, ideas, and perceptions, are almost instantaneously available.” While this connectivity has been a powerfully beneficial global change agent, it has also amplified human foibles and biases. Authors Singer and Brooking note that humans are by nature social creatures who tend to gravitate into like-minded groups. We “Like” and share things online that resonate with our own beliefs. We also tend to believe what resonates with us and our community of friends.

“Whether the cause is dangerous (support for a terrorist group), mundane (support for a political party), or inane (belief that the earth is flat), social media guarantees that you can find others who share your views and even be steered to them by the platforms’ own algorithms… As groups of like-minded people clump together, they grow to resemble fanatical tribes, trapped in echo chambers of their own design.”

Weaponization of Information

The advent of Social Media less than 20 years ago has changed how we wage war.

“Attacking an adversary’s most important center of gravity — the spirit of its people — no longer requires massive bombing runs or reams of propaganda. All it takes is a smartphone and a few idle seconds. And anyone can do it.”

Nation states and non-state actors alike are leveraging social media to manipulate like-minded populations’ cognitive biases to influence the dynamics of conflict. This continuous online fight for your mind represents “not a single information war but thousands and potentially millions of them.”

 

LikeWar provides a host of examples describing how contemporary belligerents are weaponizing Social Media to augment their operations in the physical domain. Regarding the battle to defeat ISIS and re-take Mosul, authors Singer and Brooking note that:

Social media had changed not just the message, but the dynamics of conflict. How information was being accessed, manipulated, and spread had taken on new power. Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle — or eliminate the need for battle entirely — what, exactly, could be considered ‘war’ at all?

Even American gang members are entering the fray as super-empowered individuals, leveraging social media to instigate killings via “Facebook drilling” in Chicago or “wallbanging” in Los Angeles.

And it is only “a handful of Silicon Valley engineers,” with their brother and sister technocrats in Beijing, St. Petersburg, and a few other global hubs of Twenty-first Century innovation, who are forging and then unleashing the code that is democratizing this virtual warfare.

Artificial Intelligence (AI)-Enabled Information Operations

Seeing is believing, right? Not anymore! Previously clumsy efforts to photoshop images and fabricate grainy videos with poorly executed CGI have given way to sophisticated Deepfakes, which use AI algorithms to create nearly undetectable fake images, videos, and audio tracks that then go viral online to dupe, deceive, and manipulate. This year, FakeApp was launched as free software, enabling anyone with an artificial neural network and a graphics processor to create and share bogus videos via Social Media. Each Deepfake video that:

“… you watch, like, or share represents a tiny ripple on the information battlefield, privileging one side at the expense of others. Your online attention and actions are thus both targets and ammunition in an unending series of skirmishes.”

Just as AI is facilitating these distortions in reality, the race is on to harness AI to detect and delete these fakes and prevent “the end of truth.”

If you enjoyed this post:

– Listen to the accompanying playlist composed by P.W. Singer while reading LikeWar.

– Watch P.W. Singer’s presentation on Meta Trends – Technology, and a New Kind of Race from Day 2 of the Mad Scientist Strategic Security Environment in 2025 and Beyond Conference at Georgetown University, 9 August 2016.

– Read more about virtual warfare in the following Mad Scientist Laboratory blog posts:

— MAJ Chris Telley’s Influence at Machine Speed: The Coming of AI-Powered Propaganda

— COL(R) Stefan J. Banach’s Virtual War – A Revolution in Human Affairs (Parts I and II)

— Mad Scientist Initiative’s Personalized Warfare

— Ms. Marie Murphy’s Virtual Nations: An Emerging Supranational Cyber Trend

— Lt Col Jennifer Snow’s Alternet: What Happens When the Internet is No Longer Trusted?

83. A Primer on Humanity: Iron Man versus Terminator

[Editor’s Note: Mad Scientist Laboratory is pleased to present a post by guest blogger MAJ(P) Kelly McCoy, U.S. Army Training and Doctrine Command (TRADOC), with a theme familiar to anyone who has ever debated super powers in a schoolyard during recess. Yet despite its familiarity, it remains a serious question as we seek to modernize the U.S. Army in light of our pacing threat adversaries. The question of “human-in-the-loop” versus “human-out-of-the-loop” is an extremely timely and cogent question.]

Iron Man versus Terminator — who would win? It is a debate that challenges morality, firepower, ingenuity, and pop culture prowess. But when it comes down to brass tacks, who would really win and what does that say about us?

Mad Scientist maintains that:

  • Today: Mano a mano, Iron Man’s human ingenuity, grit, and irrationality would carry the day; however…
  • In the Future: Facing the entire Skynet distributed neural net, Iron Man’s human-in-the-loop would be overwhelmed by a coordinated, swarming attack of Terminators.
Soldier in Iron Man-like exoskeleton prototype suit

Iron Man is the super-empowered human utilizing Artificial Intelligence (AI) — Just A Rather Very Intelligent System or JARVIS — to augment the synthesizing of data and robotics to increase strength, speed, and lethality. Iron Man utilizes autonomous systems, but maintains a human-in-the-loop for lethality decisions. Conversely, the Terminator is pure machine – with AI at the helm for all decision-making. Terminators are built for specific purposes – and for this case let’s assume these robotic soldiers are designed specifically for urban warfare. Finally, strength, lethality, cyber vulnerabilities, and modularity of capabilities are assumed to be relatively equal between Iron Man and the Terminator.

Up front, Iron Man is constrained by individual human bias, the retention and application of training, and physical and mental fatigue. Heading into the fight, the human behind a super-powered robotic suit will make decisions based on their own biases. How does one respond to too much information, or not enough? How do they react when needing to respond while wrestling with what must be remembered at the right time and place? Compounding this is the retention and application of the individual’s training leading up to this point. Have they undergone enough repetitions to mitigate their biases and arrive at the best solution and response? Finally, our most human vulnerability is physical and mental fatigue. Without adding psychoactive drugs, how would you fare taking the Graduate Record Examinations (GRE) while simultaneously winning a combatives match? How long would you last before you are mentally and physically exhausted?

Terminator / Source: http://pngimg.com/download/29789

What the human faces is a Terminator that removes bias and optimizes responses through machine learning, access to a network of knowledge, options, and capabilities, and relentless speed in processing information. How much better would a Soldier be with their biases removed and the ability to apply the full library of lessons learned? To process the available information that contextualizes the environment, without cognitive overload? To arrive at the optimal decision, based on the outcomes of thousands of scenarios?

Iron Man arrives at this fight with irrationality and ingenuity; the ability to quickly adapt to complex problems and environments; tenacity; and a morality that is uniquely human. Given this, the Terminator is faced with an adversary who can not only adapt, but also persevere with utter unpredictability. And here the Terminator’s weaknesses come to light. Its algorithms are matched to an environment – but environments can change and render algorithms obsolete. Its energy sources are finite – where humans can run on empty, Terminators power off. Finally, there are always glitches and vulnerabilities. An autonomous system depends on the environment it is coded for – if you know how to corrupt the environment, you can corrupt the system.

Ultimately the question of Iron Man versus Terminator is a question of time and human value and worth. In time, it is likely that the Iron Man will fall in the first fight. However, the victor is never determined in the first fight, but the last. If you believe in human ingenuity, grit, irrationality, and consideration, the last fight is the true test of what it means to be human.

Note:  Nothing in this blog is intended as an implied or explicit endorsement of the “Iron Man” or “Terminator” franchises on the part of the Department of Defense, the U.S. Army, or TRADOC.

Kelly McCoy is a U.S. Army strategist officer and a member of the Military Leadership Circle. A blessed husband and proud father, when he has time he is either brewing beer, roasting coffee, or maintaining his blog (Drink Beer; Kill War at: https://medium.com/@DrnkBrKllWr). The views expressed in this article belong to the author alone and do not represent the Department of Defense.

79. Character vs. Nature of Warfare: What We Can Learn (Again) from Clausewitz

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]

No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.

Source:  Office of the Director of National Intelligence

Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]

Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”

Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.

The nature of something is the basic makeup of that thing. It is, at core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl Von Clausewitz, what his views are on the matter.

Source: Tetsell’s Blog. https://tetsell.wordpress.com/2014/10/13/clausewitz/

He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.

The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past and remains unfulfilled. The chaos of war, its fog, friction, and chance will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to age-old practices of denial, deception, and camouflage.

 

The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.

Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.

Drone Wars are Coming / Source: USNI Proceedings, July 2017, Vol. 143 / 7 /  1,373

Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.

Source: Lockheed Martin

Not to mention the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is what tools they use to execute that mission.

To learn more about the changing character of warfare:

– Read the TRADOC G-2’s The Operational Environment and the Changing Character of Warfare paper.

– Watch The Changing Character of Future Warfare video.

Additionally, please note that the content from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018, is now posted and available for your review:

– Read the “Top Ten” Takeaways from the Learning in 2050 Conference.

– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.

– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.

LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.

Note:  The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916.  Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War.  (Source:  Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)

_______________________________________________________

[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6.
[II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, machine waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/.
[III] Ibid.
[IV] Carl Von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85.
[V] Ibid, 87.
[VI] Ibid.
[VII] Ibid, 86.
[VIII] John Allen, Amir Hussain, “On Hyper-War,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.

78. The Classified Mind – The Cyber Pearl Harbor of 2034

[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]

Preface: Based on my experience in cybersecurity, migrating to the broader cyber field, there have always been those exceptional individuals who have an ability that cannot be replicated: to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals, because someone must set the rules, the boundaries, and the trajectory for Artificial Intelligence at initiation.

Source: https://thebulletin.org/2017/10/neuroscience-and-the-new-weapons-of-the-mind/

As an industrial society, we tend to see technology, and the information that feeds it, as the weapons – and we ignore the few humans that have a large-scale, direct impact. Even if identified as a weapon, how do you make a human mind classified? Can we protect these high-ability individuals who, in the digital world, are weapons – not tools, but compilers of capability – or are we still focused on the tools? Why do we see only weapons of steel and electronics, and not the weaponized mind? I believe firmly that we underestimate the importance of Applicable Intelligence – the ability to play the cyber engagement in the optimal order. Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034, close enough to be realistic and far enough out for things to happen, when our adversaries are betting that we rely more on a few minds than we are willing to accept.

Post: In a not-too-distant future, on the 20th of August 2034, a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives: watching a 3-D printer making a protein sandwich at a breakfast restaurant; stepping out from the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone, when the geopolitical temperature increases but we are not yet at war, our adversary acts quickly and expedites a limited number of targeted killings within the United States of persons who are unknown to the mass media and the general public, and who have only one thing in common – Applicable Intelligence (AI).

The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to play them in an optimal order. These minds are different because they see an opportunity to exploit in the digital fog of war where others don’t or can’t see it. They address problems unburdened by traditional thinking, in new, innovative ways, maximizing the dual purpose of digital tools, and they can create tangible cyber effects.

It is Applicable Intelligence (AI) that creates the procedures, applies the tools, and turns simple digital software, in sets or combinations, into digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.

Sadly we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and at the dawn of conflict killed off the weaponized minds, without discretion, and with no concern for international law or morality.

These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.

All of our efforts focused instead on what they delivered, the application and the technology, which was hidden in secret vaults and discussed only in sensitive compartmented information facilities. We classify to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, to the most critical component, the militarized intellect, we assign no value, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is an input to the production of the technical machinery. The marveled technical machinery is the only thing we care about today, in 2018, and, as it turned out, in 2034 as well.

We are stuck in how we think, unable to see it coming, but our adversaries see it. At a systemic level, we are unable to see humans as the weapon itself, perhaps because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought into action when needed: the armory of the War of 1812, the stockpile of 1943, the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics; in 2034 we failed to see weapons made of corn, steak, and a combative intellect.

General Nakasone stated in 2017, "Our best ones [coders] are 50 or 100 times better than their peers," and continued, "Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers." In reality, the success of cyber operations depends not on the tools or toolsets but on the super-empowered individual whom General Nakasone calls "the 50-x coder."

Manhattan Project K-25 Gaseous Diffusion Process Building, Oak Ridge, TN / Source: atomicarchive.com

There were clear signals we could have noticed before General Nakasone pointed it out in 2017. The United States' Manhattan Project during World War II had 125,000 workers on the payroll at its peak, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in by our path dependency as an engineering society that hails the technology and forgets the importance of the humans behind it.

J. Robert Oppenheimer – the militarized intellect behind the Manhattan Project / Source: Life Magazine

America's endless love of technical innovation and advanced machinery is reflected in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies and ran through the transcontinental railroad, the Panama Canal, the manufacturing era, and the moon landing, all the way to today's autonomous systems, drones, and robots. In this default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem.

The same mindset sees humans merely as inputs to technology, interchangeable and replaceable. But in 2034, the era of digital conflicts, of war between algorithms with engagements occurring at machine speed and no time for leadership or human interaction, it is the intellects who design the algorithms and understand how to play them that matter. We didn't see it.

In 2034, with fewer than twenty bodies piled up after targeted killings, lies the Cyber Pearl Harbor. It was not imploding critical infrastructure, a tsunami of cyber attacks, or hackers flooding our financial systems, but traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don't understand, unable to play it in the right order, with limited intellectual torque to see through the fog of war generated by an exploding kaleidoscope of nodes and digital engagements.

Source: Shutterstock

If you enjoyed this post, read our Personalized Warfare post.

Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was earlier a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and earned a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP, ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in the Strategic Studies Quarterly, Joint Forces Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.

77. “The Tenth Man” — Russia’s Era Military Innovation Technopark

[Editor’s Note: Mad Scientist Laboratory is pleased to publish the second in our series of “The Tenth Man” posts (read the first one here). This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Future Operational Environment.

Today’s post is by guest blogger Mr. Ray Finch addressing Russia’s on-going efforts to develop a military innovation center —  Era Military Innovation Technopark — near the city of Anapa (Krasnodar Region) on the northern coast of the Black Sea.  Per The Operational Environment and the Changing Character of Future Warfare, “Russia can be considered our ‘pacing threat,’ and will be our most capable potential foe for at least the first half of the Era of Accelerated Human Progress [now through 2035]. It will remain a key adversary through the Era of Contested Equality [2035-2050].” So any Russian attempts at innovation to create “A Militarized Silicon Valley in Russia” should be sounding alarms throughout the NATO Alliance, right?  Well, maybe not….]

(Please note that several of Mr. Finch’s embedded links in the post below are best accessed using non-DoD networks.)

Only a Mad Russian Scientist could write the paragraph below:

Russia Resurgent, Source: Bill Butcher, The Economist

If all goes according to plan, in October 2035 the Kremlin will host a gala birthday party to commemorate President Putin's 83d birthday. Ever since the Russian leader began receiving special biosynthetic plasma developed by military scientists at the country's premier Era Technopolis Center in Anapa, the president's health and overall fitness have resembled those of a 45-year-old. This development was just one in a series of innovations which have helped to transform not just the Kremlin leader but the entire country. By focusing its best and brightest on new technologies, Russia has become the global leader in information and telecommunication systems, artificial intelligence, robotic complexes, supercomputers, technical vision and pattern recognition, information security, nanotechnology and nanomaterials, energy technology and the technology life-support cycle, as well as bioengineering, biosynthetic, and biosensor technologies. In many respects, Russia is now the strongest country in the world.

While this certainly echoes the current Kremlin propaganda, a more sober analysis regarding the outcomes of the Era Military Innovation Technopark in Anapa (Krasnodar Region) ought to consider those systemic factors which will likely retard its future development. Below are five reasons why Putin and Russia will likely have less to celebrate in 2035.

President Putin and Defense Minister Shoigu being briefed on Technopark-Era, Kremlin, 23 Feb 2018. Source: http://kremlin.ru/events/president/news/56923, CC BY 4.0.

You can’t have milk without a cow

The primary reason the Kremlin's attempt to create breakthrough innovations at the Era Technopark will end in disappointment is the lack of a robust social structure to support such innovation. And it's not simply the absence of good roads or adequate healthcare. As the renowned MIT scholar Dr. Loren R. Graham recently pointed out, the Kremlin leadership wants to enjoy the "milk" of technology without the bother of maintaining the "cow" that produces it. Graham elaborates: even though Russian scientists have often been at the forefront of technological innovation, the country's poor legal system prevents these discoveries from ever bearing fruit. Stifling bureaucracy and a broken legal system keep Russian scientists and innovators from profiting from their discoveries. This dilemma leads to the second factor.

Brain drain

Despite all of the Kremlin's patriotic hype over the past several years, many young and talented Russians are voting with their feet and pursuing careers abroad. As senior Russia analyst Dr. Gordon M. Hahn noted, "instead of voting for pro-democratic forces and/or fomenting unrest, Russia's discontented, highly educated, highly skilled university graduates tend to move abroad to find suitable work." And even though the US is maligned on a daily basis in the Kremlin-supported Russian media, many of these smart, young Russians are moving to America. Indeed, according to a recent Radio Free Europe/Radio Liberty (RFE/RL) report, "the number of asylum applications by Russian citizens in the United States hit a 24-year high in 2017, jumping nearly 40 percent from the previous year and continuing an upward march that began after Russian President Vladimir Putin returned to the Kremlin in 2012." These smart, young Russians believe their country is headed in the wrong direction and are looking for opportunities elsewhere.

Everything turns out to be a Kalashnikov

There's no doubt that Russian scientists and technicians are capable of creating effective weapon systems. President Putin's recent display of military muscle was not a mere campaign stratagem, but rather a reminder to his Western "partners" that since Russia remains armed to the teeth, his country deserves respect. And there's little question that the new Era Technopark will help to create advanced weapon systems of "which there is no analogous version in the world." But that's just the point. While Russia is famous for its tanks, artillery, and rocket systems, it has struggled to create anything in the civilian sector that might qualify as a technological marvel. As some Russian observers have put it, "no matter what the state tries to develop, it ends up being a Kalashnikov."

Soviet AK-47. Type 2 made from 1951 to 1954/55. Source: http://www.dodmedia.osd.mil Public Domain

The Boss knows what’s best

The current Kremlin leadership now parades itself as being at the forefront of a global conservative and traditional movement. In their favorite narrative, the conniving US is forever trying to weaken Russia (and other autocratic countries) by infecting them with a liberal bacillus, often referred to as a “color revolution.” In their rendition, Russia was contaminated by this democratic disease during the 1990s, only to find itself weakened and taken advantage of by America.

Since then, the Kremlin leadership has retained the form of democracy, but has removed its essence. Elections are held, ballots are cast, but the winner is pre-determined from above. So far, the Russian population has played along with this charade, but at some point, perhaps in an economic crisis, the increasingly plugged-in Russian population might demand a more representative form of government. Regardless, while this top-down, conservative model is ideal for maintaining control and staging major events, it lacks the essential freedom inherent within innovation. Moreover, such a quasi-autocratic system tends to promote Russia’s most serious challenge.

The cancer of corruption

Despite the façade of a uniformed, law-governed state, Russia continues to rank near the bottom on the global corruption index. According to a recent Russian report, "90 percent of entrepreneurs have encountered corruption at least once." Private Russian companies will likely think twice before deciding to invest in the Era Technopark, unless of course, the Kremlin makes them an offer they cannot refuse. Moreover, as suggested earlier, the young Era scientists may not be fully committed, understanding that the "milk" of their technological discoveries will likely be expropriated by their uniformed bosses.

Technopark Era is not scheduled to be fully operational until 2020, and the elevated rhetoric over its innovative mandate will likely prompt concern among some US defense officials. While the center could advance Russian military technology over the next 15-25 years, it is doubtful that Era will usher in a new era for Russia.

If you enjoyed this edition of the “Tenth Man”:

– Learn more about Russia’s Era Military Innovation Technopark in the April 2018 edition of the TRADOC G-2’s Foreign Military Studies Office (FMSO) OE Watch, Volume 8, Issue 4, pages 10-11.

– Read Mad Scientist Sam Bendett‘s guest blog post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward.

Ray Finch works as a Eurasian Analyst at the Foreign Military Studies Office. He’s a former Army officer (Artillery and Russian FAO).