104. Critical Thinking: The Neglected Skill Required to Win Future Conflicts

[Editor’s Note: As addressed in last week’s post, entitled The Human Targeting Solution: An AI Story, the incorporation of Artificial Intelligence (AI) as a warfighting capability has the potential to revolutionize combat, accelerating the future fight to machine speeds.  That said, the advanced algorithms underpinning these AI combat multipliers remain dependent on the accuracy and currency of their data feeds. In the aforementioned post, the protagonist’s challenge in overriding the AI-prescribed optimal (yet flawed) targeting solution illustrates the inherent tension between human critical thinking and the benefits of AI.

Today’s guest blog post, submitted by MAJ Cynthia Dehne, expands upon this theme, addressing human critical thinking as the often neglected, yet essential skill required to successfully integrate and employ emergent technologies while simultaneously understanding their limitations on future battlefields.  Warfare will remain an intrinsically human endeavor, the fusion of deliberate and calculating human intellect with ever more lethal technological advances. ]

The future character of war will be influenced by emerging technologies such as AI, robotics, computing, and synthetic biology. Cutting-edge technologies will become increasingly cheaper and more readily available, introducing a wider range of actors on the battlefield. Moreover, nation-state actors are no longer the sole drivers of cutting-edge technology; militaries are leveraging a private sector that is leading research and development in emergent technologies. Proliferation of these cheap, accessible technologies will allow both peer competitors and non-state actors to pose serious threats in the future operational environment. Due to the abundance of new players on the battlefield combined with emerging technologies, future conflicts will be won by those who both possess “critical thinking” skills and can integrate technology seamlessly to inform decision-making in war, rather than relying on technology to win war. Achieving success in the future eras of accelerated human progress and contested equality will require the U.S. Army to develop Soldiers who are adept at seamlessly employing technology on the battlefield while continuously exercising critical thinking skills.

The Foundation for Critical Thinking defines critical thinking as “the art of analyzing and evaluating thinking with a view to improving it.”1 Furthermore, they assert that a well-cultivated critical thinker can do the following: raise vital questions and problems and formulate them clearly and precisely; gather and assess relevant information, using abstract ideas to interpret it effectively; come to well-reasoned conclusions and solutions, testing them against relevant criteria and standards; think open-mindedly within alternative systems of thought, recognizing and assessing, as needed, their assumptions, implications, and practical consequences; and communicate effectively with others in figuring out solutions to complex problems.2

Many experts in education and psychology argue that critical thinking skills are declining. In 2017, Dr. Stephen Camarata wrote about the emerging crisis in critical thinking and college students’ struggles to tackle real-world problem solving. He emphasized the essential need for critical thinking and asserted that “a young adult whose brain has been ‘wired’ to be innovative, think critically, and problem solve is at a tremendous competitive advantage in today’s increasingly complex and competitive world.”3 Although most government agencies, policy makers, and businesses deem critical thinking important, STEM fields continue to be prioritized. However, if creative thinking skills are not fused with STEM, then there will continue to be a decline in those equipped with well-rounded critical thinking abilities. In 2017, Mark Cuban opined during an interview with Bloomberg TV that the nature of work is changing and that the skill most in demand in the future will be “creative thinking.” Specifically, he stated, “I personally think there’s going to be a greater demand in 10 years for liberal arts majors than there were for programming majors and maybe even engineering.”4 Additionally, Forbes magazine published an article in 2018 declaring that “creativity is the skill of the future.”5

Employing future technologies effectively will be key to winning war, but it is only one aspect. During the Vietnam War, the U.S. relied heavily on technology but was defeated by an enemy who leveraged simple guerrilla tactics combined with minimal military technology. Emerging technologies will be vital to inform decision-making, but will not negate battlefield friction. Carl von Clausewitz observed that although everything in war is simple, the simplest things become difficult, and these difficulties accumulate to create friction.6 Historically, a lack of information caused friction and uncertainty. However, complexity is a driver of friction in current warfare and will heavily influence future warfare. Complex, high-tech weapon systems will dominate the future battlefield and create added friction. Interdependent systems linking communications and warfighting functions will introduce more friction, which will require highly skilled thinkers to navigate.

The newly published U.S. Army in Multi-Domain Operations 2028 concept “describes how Army forces fight across all domains, the electromagnetic spectrum (EMS), and the information environment and at echelon”7 to “enable the Joint Force to compete with China and Russia below armed conflict, penetrate and dis-integrate their anti-access and area denial systems and ultimately defeat them in armed conflict and consolidate gains, and then return to competition.”8 Even with technological advances and improved intelligence, elements of friction will be present in future wars. Both great armies and asymmetric threats have vulnerabilities: small frictions that morph into larger issues capable of crippling a fighting force. Therefore, success in future war is dependent on military commanders who understand these elements and how to overcome friction. Future technologies must be fused with critical thinking to mitigate friction and achieve strategic success. The U.S. Army must emphasize integrating critical thinking into doctrine and exercises even as it trains Soldiers on new technologies.

Soldiers should be creative, innovative thinkers; the Army must foster critical thinking as an essential skill. The Insight Assessment emphasizes that “weakness in critical thinking skill results in loss of opportunities, of financial resources, of relationships, and even loss of life. There is probably no other attribute more worthy of measure than critical thinking skills.”9 Gaining and maintaining competitive advantage over adversaries in a complex, fluid future operational environment requires Soldiers to be both skilled in technology and experts in critical thinking.

If you enjoyed this post, please also see:

Mr. Chris Taylor’s presentation on Problem Solving in the Wild, from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018;

and the following Mad Scientist Laboratory blog posts:

TRADOC 2028

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

MAJ Cynthia Dehne is in the U.S. Army Reserve, assigned to the TRADOC G-2, and has operational experience in Afghanistan, Iraq, Kuwait, and Qatar. She is a graduate of the U.S. Army Command and General Staff College and holds master’s degrees in International Relations and in Diplomacy and International Commerce.


1 Paul, Richard, and Elder, Linda. Critical Thinking Concepts and Tools. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

2 Paul, Richard, and Elder, Linda. Critical Thinking Concepts and Tools. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

3 Camarata, Stephen. “The Emerging Crisis in Critical Thinking.” Psychology Today, March 21, 2017. Accessed October 10, 2018. https://www.psychologytoday.com/us/blog/the-intuitive-parent/201703/the-emerging-crisis-in-critical-thinking.

4 Wile, Rob. “Mark Cuban Says This Will Be the No.1 Job Skill in 10 Years.” Time, February 20, 2017. Accessed October 11, 2018. http://time.com/money/4676298/mark-cuban-best-job-skill/.

5 Powers, Anna. “Creativity Is The Skill Of The Future.” Forbes, April 30, 2018. Accessed October 14, 2018. https://www.forbes.com/sites/annapowers/2018/04/30/creativity-is-the-skill-of-the-future/#3dd533f04fd4.

6 Clausewitz, Carl von, Michael Howard, Peter Paret, and Bernard Brodie. On War. Princeton, N.J.: Princeton University Press, 1984, p. 119.

7 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 5.

8 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 15.

9 Insight Assessment. “Risks Associated with Weak Critical Thinkers.” Insight Assessment, 2018. Accessed October 22, 2018. https://www.insightassessment.com/Uses/Risks-Associated-with-Weak-Critical-Thinkers.

100. Prediction Machines: The Simple Economics of Artificial Intelligence

[Editor’s Note: Mad Scientist Laboratory is pleased to review Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Harvard Business Review Press, 17 April 2018.  While economics is not a perfect analog to warfare, this book will enhance our readers’ understanding of narrow Artificial Intelligence (AI) and its tremendous potential to change the character of future warfare by disrupting human-centered battlefield rhythms and facilitating combat at machine speed.]

This insightful book by economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb penetrates the hype often associated with AI by describing its base functions and roles and providing the economic framework for its future applications. Of particular interest is their perspective on AI entities as prediction machines. In simplifying and demystifying our understanding of AI and Machine Learning (ML) as prediction tools, akin to computers being nothing more than extremely powerful mathematics machines, the authors effectively describe the economic impacts that these prediction machines will have in the future.

The book addresses the three categories of data underpinning AI / ML (a brief illustrative sketch follows the list):

Training: This is the Big Data that trains the underlying AI algorithms in the first place. Generally, the bigger and more robust the data set, the more effective the AI’s predictive capability will be. Activities such as driving (with millions of iterations every day) and online commerce (with similarly large numbers of transactions) in defined environments lend themselves to efficient AI applications.

Input: This is the data that the AI will be taking in, either from purposeful, active injects or passively from the environment around it. Again, defined environments are far easier to cope with in this regard.

Feedback: This data comes either from manual inputs by users and developers or from the AI discerning what effects its previous applications produced. While often overlooked, this data is critical to iteratively enhancing and refining the AI’s performance, as well as identifying biases and skewed decision-making. AI is not a static, one-off product; much like software, it must be continually updated, either through injects or learning.
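To make these categories concrete, the following is a minimal, hypothetical sketch of the train / input / feedback cycle, using a toy scikit-learn classifier. This is our illustration rather than the authors’; the data, model, and retraining cadence are stand-ins only.

```python
# Toy illustration of the three data categories underpinning AI / ML:
# training data builds the model, input data drives predictions, and
# feedback data is folded back in to refine the model over time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
weights = np.array([0.5, -0.2, 0.8, 0.1])   # hidden "ground truth"

# 1. Training data: a large historical set of (features, outcome) pairs.
X_train = rng.normal(size=(10_000, 4))
y_train = (X_train @ weights > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# 2. Input data: new observations arriving from the environment.
X_input = rng.normal(size=(100, 4))
predictions = model.predict(X_input)

# 3. Feedback data: observed outcomes (or user corrections) for those
#    predictions, appended to the training set for periodic retraining.
y_feedback = (X_input @ weights > 0).astype(int)
X_train = np.vstack([X_train, X_input])
y_train = np.concatenate([y_train, y_feedback])
model = LogisticRegression().fit(X_train, y_train)  # never a one-off product
```

The retraining step is the point: without the feedback loop, the model’s predictive power decays as the environment drifts away from its original training data.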

The authors explore narrow AI rather than a general, super, or “strong” AI. Proclaimed Mad Scientists Paul Scharre and Michael Horowitz define narrow AI as follows:

“their expertise is confined to a single domain, as opposed to hypothetical future ‘general’ AI systems that could apply expertise more broadly. Machines – at least for now – lack the general-purpose reasoning that humans use to flexibly perform a range of tasks: making coffee one minute, then taking a phone call from work, then putting on a toddler’s shoes and putting her in the car for school.” – from Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security, 19 June 2018

These narrow AI applications could have significant implications for U.S. Armed Forces personnel, force structure, operations, and processes. While economics is not a direct analogy to warfare, there are a number of aspects that can be distilled into the following ramifications:

Internet of Battle Things (IOBT) / Source: Alexander Kott, ARL

1. The battlefield is dynamic, with innumerable variables, and limited, purposely subverted, or “dirty” input data has great potential to mischaracterize the ground truth. Additionally, the relatively short duration of battles and battlefield activities means that AI would not receive the consistent, plentiful, and defined data it would receive in civilian transportation and economic applications.

2. The U.S. military will not be able to just “throw AI on it” and achieve effective results. The effective application of AI will require a disciplined and comprehensive review of all warfighting functions to determine where AI can best augment and enhance our current Soldier-centric capabilities (i.e., identify those workflows and processes – e.g., the Intelligence and Targeting Cycles – that can be enhanced with the application of AI). Leaders will also have to assess where AI can replace Soldiers in workflows and organizational architecture, and whether AI necessitates the discarding or major restructuring of either. Note that Goldman Sachs is in the process of conducting this type of self-evaluation right now.

3. Due to its incredible “thirst” for Big Data, AI/ML will necessitate tradeoffs between security and privacy (the former likely being more important to the military) and quantity and quality of data.


4. In the near- to mid-term future, AI/ML will not replace Leaders, Soldiers, and Analysts, but will allow them to focus on the big issues (i.e., “the fight”) by freeing them from the resource-intensive (i.e., time and manpower), mundane, and rote tasks of data crunching, possibly facilitating the reallocation of manpower to growing need areas in data management, machine training, and AI translation.

This book is a must-read for those interested in obtaining a down-to-earth assessment on the state of narrow AI and its potential applications to both economics and warfare.

If you enjoyed this review, please also read the following Mad Scientist Laboratory blog posts:

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

… and watch the following presentations from the Mad Scientist Robotics, AI, and Autonomy – Visioning Multi-Domain Battle in 2030-2050 Conference, 7-8 March 2017, co-sponsored by Georgia Tech Research Institute:

“Artificial Intelligence and Machine Learning: Potential Application in Defense Today and Tomorrow,” presented by Mr. Louis Maziotta, Armament Research, Development, and Engineering Center (ARDEC).

Unmanned and Autonomous Systems, presented by Paul Scharre, CNAS.

95. Takeaways Learned about the Future of the AI Battlefield

On 10 October 2018, the U.S. Army Training and Doctrine Command (TRADOC) G-2’s Mad Scientist Initiative launched a crowdsourcing exercise to explore the possibilities and impacts of Artificial Intelligence (AI) on the future battlefield. For good reason, much has been made of AI and Machine Learning (ML) and their use in enabling lethal autonomy on the battlefield. While this is an important topic, AI’s potential application is much broader and farther, enabling future warfare at machine speed and disrupting human-centered battlefield rhythms.

Mad Scientist received submissions from approximately 115 participants, affiliated with military units, Government agencies, private tech companies, academia, and a number of non-DoD/Government associated sources. These submissions were diverse and rich in depth, clarity, and quality.  We distilled them into the following eight cross-cutting takeaways impacting every aspect of the future battlefield:

  • Invisible AI:

AI will be so pervasive across the battlefield that most of its functions and processes will take place without warfighters and commanders noticing. Much like cellular service, smart device functions, or cyber operations, there won’t be an On/Off button per se. The wide proliferation of AI entities, from devices to platforms to even wearables, means AI will not be an isolated domain, but rather will permeate ubiquitously and seamlessly across the battlefield.

  • Speed it Up:

AI will not only speed up existing processes and cycles – i.e., the military decision-making process (MDMP), the intelligence cycle, the targeting cycle – but it will also likely transform them. Many of these cycles and processes have evolved and proven their effectiveness in a human-centric environment. Some contain consecutive steps that may no longer be necessary when tasks are assigned to intelligent machines. Critical, time-sensitive, but often tedious work that is carried out by hundreds of military staff members in many hours could be accomplished in minutes by AI, leading to flattened command structures, smaller staffs, and significant demand and signature reduction on the battlefield. All of this will result in battlefield optimization and will induce hyperactivity in combat – rapidly changing battlefield rhythms.

  • Coup d’œil / Freeing up Warfighters and Commanders:

AI intelligence systems and entities conducting machine speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. Commanders can focus on the battle with coup d’œil, or the “stroke of an eye,” maintaining situational awareness without consuming precious time crunching data. Additionally, AI’s ability to quickly sift through and analyze the plethora of input received from across the battlefield, fused with the lessons learned data from thousands of previous engagements, will lessen the commander’s dependence on having had direct personal combat experience with conditions similar to his current fight when making command decisions.

  • Spectrum Management and Common Operational Picture (COP):

The future battlefield will be increasingly complex, with the cyber, air, and space domains, as well as the electromagnetic spectrum, becoming difficult to see, manage, and deconflict. Exacerbating this problem is the enormous growth of the Internet of Things – eventually the Internet of Everything – and even more importantly, the Internet of Battlefield Things. AI will be critical in processing and sustaining a clear COP in this overwhelmingly data-rich environment of sensors, emitters, systems, and networks.

  • Learning Things and Collaborative Entities:

AI will facilitate a host of new learning things on the battlefield – i.e., weapon systems, munitions, vehicles, wearables [exo-skeletons] – and a multitude of collaborative entities – sensors, systems, and platforms. This battlespace of learning things will not supplant our need for Soldiers who use and operate them, but it will enhance them as Warfighters.

  • Resilient and Layered AI:  

To effectively utilize AI across the battlefield, the Army will need resilient and layered AI, including on-board services, localized collaborative systems, and cloud services that do not rely on persistent connectivity (see the sketch following this list). Some AI entities will need to be proliferated at the tactical level, creating a veritable network that can still effectively operate with degraded/disrupted nodes.

  • New Required Capabilities and Skillsets:

The advent of AI across the battlefield will require a multitude of new capabilities and skillsets to implement, maintain, and maximize AI entities. As with the contemporary drive to recruit Cyber talent into the ranks, the Army must plan on competing with the private sector for the most talented and capable recruits in new AI job fields.

  • Adversarial Risk:

A capability / vulnerability paradox is inherent with AI, with its machine speed capabilities being vulnerable to the vast array of data input sources that it needs to operate. AI’s underpinning data and algorithms are vulnerable to spoofing, degradation, or other forms of subversion. This could lead to the erosion of Soldier and Leader trust in AI, and also necessitates more transparency to strengthen the man-machine relationship. Enemies will seek to exploit this relationship and trust.
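As a concrete illustration of the “Resilient and Layered AI” takeaway above, the following minimal Python sketch shows one way an AI entity might degrade gracefully from cloud to local mesh to on-board inference. This is our illustration, not drawn from the submissions; the tier names, failure modes, and threshold are assumptions only.

```python
# Hypothetical sketch of layered, resilient AI inference: try the richest
# tier first (cloud), then a local collaborative mesh, then on-board
# processing, so a degraded or disrupted node never halts the system.

def layered_predict(observation, tiers):
    """tiers: (name, callable) pairs ordered cloud -> local mesh -> on-board."""
    for name, tier in tiers:
        try:
            return name, tier(observation)        # first healthy tier wins
        except (TimeoutError, ConnectionError):
            continue                              # degraded node: fall back
    raise RuntimeError("all AI tiers unavailable")

def cloud_model(obs):
    raise TimeoutError("no persistent connectivity")   # simulate outage

def mesh_model(obs):
    raise ConnectionError("local node jammed")         # simulate disruption

def onboard_model(obs):
    # Crude stand-in for a small on-board classifier.
    return "threat" if obs.get("signature", 0) > 0.5 else "clear"

tier_used, result = layered_predict(
    {"signature": 0.7},
    [("cloud", cloud_model), ("mesh", mesh_model), ("onboard", onboard_model)])
print(tier_used, result)   # -> onboard threat
```

The design choice is that each tier is independently useful: the on-board model is less capable but always available, which is what lets the overall network keep operating with degraded/disrupted nodes.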

Conclusion:

These takeaways illustrate the many ways AI can be implemented across the future battlefield. Machine speed warfare will be enabled by AI; it will not be limited to just lethal autonomy. The functions of so many other parts of combat – C2, ISR, sustainment, medical, etc. – can be accelerated and improved; not just “warheads on foreheads.”

While we explored where AI could enhance battlefield operations, there are also implicit considerations that must be accounted for in the future. These include the ethical dilemmas and concerns associated with employing AI in so many different ways. Lethal autonomy is a hot button issue due to its life or death implications. However, AI assisting other warfighter functions will also have significant impacts on the battlefield.

A second major consideration is the impact AI will have on Army learning and training. The Army will not only have to incorporate the subject of AI into its learning but will also have to utilize AI in its learning. Additionally, AI will be required to support Field Training Exercises and other major training events to work through all of the second and third order effects resulting from a much more compressed battle rhythm.

Mad Scientist is extremely appreciative of all the feedback and submissions received. We intend for this product to be used in future wargaming events, future horizon scanning, and the general framing of future thinking and planning for the development and use of AI systems and entities.

If you enjoyed this blog post, please read the entire Crowdsourcing the Future of the AI Battlefield paper, including the highlights of ideas binned into categories supporting upcoming Army Wargames on Intelligence, Surveillance, and Reconnaissance; Logistics; and Command and Control.

92. Ground Warfare in 2050: How It Might Look

[Editor’s Note: Mad Scientist Laboratory is pleased to review proclaimed Mad Scientist Dr. Alexander Kott’s paper, Ground Warfare in 2050: How It Might Look, published by the US Army Research Laboratory in August 2018. This paper offers readers a technological forecast of autonomous intelligent agents and robots and their potential for employment on future battlefields in the year 2050. In this post, Mad Scientist reviews Dr. Kott’s conclusions and provides links to our previously published posts that support his findings.]

In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future. They are:

•  The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight will feature prominently on future battlefields between autonomous sensors and countermeasures – “a robot-on-robot affair.”

See our related discussions regarding Concealment in the Fundamental Questions Affecting Army Modernization post and Finders vs Hiders in our Timeless Competitions post.

•  The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.

See our descriptions of the future recon / strike complex in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post, and Robotics and Swarms / Semi Autonomous capabilities in our Potential Game Changers post.

These two trends will, in turn, drive the following forecasted developments:

•  Increasing reliance on unmanned systems, “with humans becoming a minority within the overall force, being further dispersed across the battlefield.”

See Mr. Jeff Becker’s post on The Multi-Domain “Dragoon” Squad: A Hyper-enabled Combat System, and Mr. Mike Matson’s Demons in the Tall Grass, both of which envision future tactical units employing greater numbers of autonomous combat systems; as well as Mr. Sam Bendett’s post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward, addressing the contemporary hurdles that one of our strategic competitors must address in operationalizing Unmanned Ground Vehicles.

•  Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”

See our discussion of what warfare at machine-speed looks like in our Advanced Engagement Battlespace and the “Hyperactive Battlefield”.

Source: Fausto De Martini / Kill Command

•  Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.


See our Megacities: Future Challenges and Responses and Integrated Sensors: The Critical Element in Future Complex Environment Warfare posts that address future complex operational environments.

Source: www.defenceimages.mod.uk

•  The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”

See our discussion of future Power capabilities on our Potential Game Changers handout.

•  “To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”

See our discussion of Multi-Domain Swarming in our Black Swans and Pink Flamingos post.

•  All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”

See MAJ Chris Telley’s post addressing Artificial Intelligence (AI) as an Information Operations tool in his Influence at Machine Speed: The Coming of AI-Powered Propaganda.

•  The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.

See Mr. Ian Sullivan’s discussion of AI vs. AI and how the decisive edge accrues to the combatant with more autonomous decision-action concurrency in his Lessons Learned in Assessing the Operational Environment post.

If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.

Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role, he provides leadership in the development of ARL’s technical strategy, maintains the technical quality of ARL research, and represents ARL to the external technical community. He has published over 80 technical papers and served as the initiator, co-author, and primary editor of over ten books, including most recently Cyber Defense and Situational Awareness (2015) and Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).

87. LikeWar — The Weaponization of Social Media

[Editor’s Note: Regular readers will note that one of our enduring themes is the Internet’s emergence as a central disruptive innovation. With the publication of proclaimed Mad Scientist P.W. Singer and co-author Emerson T. Brooking’s LikeWar – The Weaponization of Social Media, Mad Scientist Laboratory addresses what is arguably the most powerful manifestation of the internet — Social Media — and how it is inextricably linked to the future of warfare. Messrs. Singer and Brooking’s new book is essential reading if today’s Leaders (both in and out of uniform) are to understand, defend against, and ultimately wield the non-kinetic, yet violently manipulative effects of Social Media.]

“The modern internet is not just a network, but an ecosystem of 4 billion souls…. Those who can manipulate this swirling tide, steer its direction and flow, can…. accomplish astonishing evil. They can foment violence, stoke hate, sow falsehoods, incite wars, and even erode the pillars of democracy itself.”

As noted in The Operational Environment and the Changing Character of Future Warfare, Social Media and the Internet of Things have spawned a revolution that has connected “all aspects of human engagement where cognition, ideas, and perceptions, are almost instantaneously available.” While this connectivity has been a powerfully beneficial global change agent, it has also amplified human foibles and biases. Authors Singer and Brooking note that humans by nature are social creatures that tend to gravitate into like-minded groups. We “Like” and share things online that resonate with our own beliefs. We also tend to believe what resonates with us and our community of friends.

“Whether the cause is dangerous (support for a terrorist group), mundane (support for a political party), or inane (belief that the earth is flat), social media guarantees that you can find others who share your views and even be steered to them by the platforms’ own algorithms… As groups of like-minded people clump together, they grow to resemble fanatical tribes, trapped in echo chambers of their own design.”

Weaponization of Information

The advent of Social Media less than 20 years ago has changed how we wage war.

“Attacking an adversary’s most important center of gravity — the spirit of its people — no longer requires massive bombing runs or reams of propaganda. All it takes is a smartphone and a few idle seconds. And anyone can do it.”

Nation states and non-state actors alike are leveraging social media to manipulate like-minded populations’ cognitive biases to influence the dynamics of conflict. This continuous on-line fight for your mind represents “not a single information war but thousands and potentially millions of them.”


LikeWar provides a host of examples describing how contemporary belligerents are weaponizing Social Media to augment their operations in the physical domain. Regarding the battle to defeat ISIS and re-take Mosul, authors Singer and Brooking note that:

“Social media had changed not just the message, but the dynamics of conflict. How information was being accessed, manipulated, and spread had taken on new power. Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle — or eliminate the need for battle entirely — what, exactly, could be considered ‘war’ at all?”

Even American gang members are entering the fray as super-empowered individuals, leveraging social media to instigate killings via “Facebook drilling” in Chicago or “wallbanging” in Los Angeles.

And it is only “a handful of Silicon Valley engineers,” with their brother and sister technocrats in Beijing, St. Petersburg, and a few other global hubs of Twenty-first Century innovation, who are forging and then unleashing the code that is democratizing this virtual warfare.

Artificial Intelligence (AI)-Enabled Information Operations

Seeing is believing, right? Not anymore! Previously clumsy efforts to photoshop images and fabricate grainy videos with poorly executed CGI have given way to sophisticated Deepfakes, which use AI algorithms to create nearly undetectable fake images, videos, and audio tracks that then go viral online to dupe, deceive, and manipulate. This year, FakeApp was launched as free software, enabling anyone with a graphics processor to use an artificial neural network to create and share bogus videos via Social Media. Each Deepfake video that:

“… you watch, like, or share represents a tiny ripple on the information battlefield, privileging one side at the expense of others. Your online attention and actions are thus both targets and ammunition in an unending series of skirmishes.”

Just as AI is facilitating these distortions in reality, the race is on to harness AI to detect and delete these fakes and prevent “the end of truth.”

If you enjoyed this post:

– Listen to the accompanying playlist composed by P.W. Singer while reading LikeWar.

– Watch P.W. Singer’s presentation on Meta Trends – Technology, and a New Kind of Race from Day 2 of the Mad Scientist Strategic Security Environment in 2025 and Beyond Conference at Georgetown University, 9 August 2016.

– Read more about virtual warfare in the following Mad Scientist Laboratory blog posts:

— MAJ Chris Telley’s Influence at Machine Speed: The Coming of AI-Powered Propaganda

— COL(R) Stefan J. Banach’s Virtual War – A Revolution in Human Affairs (Parts I and II)

— Mad Scientist Initiative’s Personalized Warfare

— Ms. Marie Murphy’s Virtual Nations: An Emerging Supranational Cyber Trend

— Lt Col Jennifer Snow’s Alternet: What Happens When the Internet is No Longer Trusted?

83. A Primer on Humanity: Iron Man versus Terminator

[Editor’s Note: Mad Scientist Laboratory is pleased to present a post by guest blogger MAJ(P) Kelly McCoy, U.S. Army Training and Doctrine Command (TRADOC), with a theme familiar to anyone who has ever debated super powers in a schoolyard during recess. Yet despite its familiarity, it remains a serious question as we seek to modernize the U.S. Army in light of our pacing threat adversaries. The question of “human-in-the-loop” versus “human-out-of-the-loop” is an extremely timely and cogent question.]

Iron Man versus Terminator — who would win? It is a debate that challenges morality, firepower, ingenuity, and pop culture prowess. But when it comes down to brass tacks, who would really win and what does that say about us?

Mad Scientist maintains that:

  • Today: Mano a mano, Iron Man’s human ingenuity, grit, and irrationality would carry the day; however…
  • In the Future: Facing the entire Skynet distributed neural net, Iron Man’s human-in-the-loop would be overwhelmed by a coordinated, swarming attack of Terminators.
Soldier in Iron Man-like exoskeleton prototype suit

Iron Man is the super-empowered human utilizing Artificial Intelligence (AI) — Just A Rather Very Intelligent System or JARVIS — to augment the synthesizing of data and robotics to increase strength, speed, and lethality. Iron Man utilizes autonomous systems, but maintains a human-in-the-loop for lethality decisions. Conversely, the Terminator is pure machine – with AI at the helm for all decision-making. Terminators are built for specific purposes – and for this case let’s assume these robotic soldiers are designed specifically for urban warfare. Finally, strength, lethality, cyber vulnerabilities, and modularity of capabilities between Iron Man and Terminator are assumed to be relatively equal to each other.

Up front, Iron Man is constrained by individual human bias, retention and application of training, and physical and mental fatigue. Heading into the fight, the human behind a super-powered robotic suit will make decisions based on their own biases. How does one respond to too much information or not enough? How do they react when they must respond while wrestling with recalling the right details at the right time and place? Compounding this is the retention and application of the individual human’s training leading up to this point. Have they successfully undergone enough repetitions to mitigate their biases and arrive at the best solution and response? Finally, our most human vulnerability is physical and mental fatigue. Without adding in psychoactive drugs, how would you respond to taking the Graduate Record Examinations (GRE) while simultaneously winning a combatives match? How long would you last before you are mentally and physically exhausted?

Terminator / Source: http://pngimg.com/download/29789

What the human faces is a Terminator that removes bias and optimizes responses through machine learning, access to a network of knowledge, options, and capabilities, and relentless speed in processing information. How much better would a Soldier be with their biases removed and the ability to apply the full library of lessons learned? To process the available information that contextualizes the environment, without cognitive overload, and arrive at the optimum decision based on the outcomes of thousands of scenarios?

Iron Man arrives at this fight with irrationality and ingenuity; the ability to quickly adapt to complex problems and environments; tenacity; and a morality that is uniquely human. Given this, the Terminator is faced with an adversary who can not only adapt, but also persevere with utter unpredictability. And here the Terminator’s weaknesses come to light. Its algorithms are matched to an environment – but environments can change and render algorithms obsolete. Its energy sources are finite – where humans can run on empty, Terminators power off. Finally, there are always glitches and vulnerabilities. Autonomous systems depend on the environment they are coded for – if you know how to corrupt the environment, you can corrupt the system.

Ultimately, the question of Iron Man versus Terminator is a question of time and of human value and worth. In time, it is likely that Iron Man will fall in the first fight. However, the victor is never determined in the first fight, but in the last. If you believe in human ingenuity, grit, irrationality, and consideration, the last fight is the true test of what it means to be human.

Note:  Nothing in this blog is intended as an implied or explicit endorsement of the “Iron Man” or “Terminator” franchises on the part of the Department of Defense, the U.S. Army, or TRADOC.

Kelly McCoy is a U.S. Army strategist officer and a member of the Military Leadership Circle. A blessed husband and proud father, when he has time he is either brewing beer, roasting coffee, or maintaining his blog (Drink Beer; Kill War at: https://medium.com/@DrnkBrKllWr). The views expressed in this article belong to the author alone and do not represent the Department of Defense.

80. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Gartner Hype Cycle / Source:  Nicole Saraco Loddo, Gartner

1.“5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies,” by Kasey Panetta, Gartner, 16 August 2018.

Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Eco-Systems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.

[Gartner disclaimer:  Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]

Artificial Intelligence by GLAS-8 / Source: Flickr

2.Should Evil AI Research Be Published? Five Experts Weigh In,” by Dan Robitzski, Futurism, 27 August 2018.

The following rhetorical (for now) question was posed to the “AI Race and Societal Impacts” panel during last month’s The Joint Multi-Conference on Human-Level Artificial Intelligence in Prague, The Czech Republic:

“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”

The panel’s responses ranged from the controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial – “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”

While no consensus of opinion was arrived at, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game changing technologies. Unlike Nuclear, Biological, and Chemical weapons, no internationally agreed upon and implemented control protocols can be applied to AI, as there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operation (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC, with its Military-Civil Fusion Centers, and integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!

Scales Sword Of Justice / Source: https://www.maxpixel.net/

3.Can Justice be blind when it comes to machine learning? Researchers present findings at ICML 2018,” The Alan Turing Institute, 11 July 2018.

Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. This conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and to help practitioners develop fair and privacy-preserving data analyses. Like ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commanders’ decisions) – especially lethal decisions. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, write the very programming that enables machines to learn and make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine-based learning outputs are verified and understood to preclude the inadvertent introduction of human biases. Read the full report here.
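As a minimal, hypothetical illustration of what verifying a model’s outputs for bias might look like in practice (our sketch, not a method from the conference), the code below compares a classifier’s error rates across population subgroups; the data, group labels, and threshold are stand-ins only.

```python
# Illustrative first-pass bias audit: does a trained model's error rate
# differ sharply across subgroups of the population it scores?
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Return {group: error rate} for a model's predictions."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical labels, predictions, and subgroup tags for six cases.
rates = subgroup_error_rates(
    y_true=[1, 0, 1, 1, 0, 0],
    y_pred=[1, 0, 0, 1, 1, 0],
    groups=["a", "a", "a", "b", "b", "b"])

# Flag disparate performance; the 0.1 gap threshold is arbitrary.
if max(rates.values()) - min(rates.values()) > 0.1:
    print("WARNING: disparate error rates across subgroups:", rates)
```

A check like this is only a starting point; fairness research at venues like ICML goes well beyond error-rate parity, but even this simple audit makes hidden disparities visible before a model informs a decision.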

Robot PNG / Source: pngimg.com

4.Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans,” by Katyanna Quach, The Register, 3 August 2018.

In a study published by PLOS ONE, researchers found that a robot’s personality affected a human’s decision-making. In the study, participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant to resist shutting it down. The participants’ actions were then recorded. Unexpectedly, a large number of participants resisted shutting down the functional robots after they made their plea, as opposed to the sociable ones. This is significant. It shows, beyond the unexpected result, that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities, despite knowing they are robotic, if those entities mimic and emulate human behavior. If the Army believes its Soldiers will be accompanied and augmented heavily by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.

IoT / Source: Pixabay

5.Spending on Internet of Things May More Than Double to Over Half a Trillion Dollars,” by Aaron Pressman, Fortune, 8 August 2018.

While the advent of the Internet brought computing and communication deeper into global households, the revolution of smart phones brought about the concept of constant personal interconnectivity. Today and into the future, not only are humans being connected to the global commons via their smart devices, but a multitude of devices, vehicles, and various accessories are being integrated into the Internet of Things (IoT). Previously, the IoT was addressed as a game changing technology. The IoT is composed of trillions of internet-linked items, creating opportunities and vulnerabilities. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).

Large companies are expected to quickly grow their spending on Internet-connected devices (i.e., appliances, home devices [such as Google Home, Alexa, etc.], various sensors) to approximately $520 billion. This is a massive investment into what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it is likely that it will expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):

– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data, allowing service members to use personal devices, and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of the IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising some critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.

– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.

6.Battlefield Internet: A Plan for Securing Cyberspace,” by Michèle Flournoy and Michael Sulmeyer, Foreign Affairs, September/October 2018. Review submitted by Ms. Marie Murphy.

With the possibility of a “cyber Pearl Harbor” becoming increasingly likely, intelligence officials warn of the rising danger of cyber attacks. Effects of these attacks have already been felt around the world. They have the power to break the trust people have in institutions, companies, and governments as they act in the undefined gray zone between peace and all-out war. The military implications are quite clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence communications and materiel and personnel networks. Besides the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have felt the effects of cyber attacks, while others are reluctant to invest in cyber protection measures. In this way, civilians become affected by acts of cyber warfare, and attacks on a country may not be directed at the opposing military, but at the civilian population of a state, as in the case of power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to most effectively combat cyber threats.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

79. Character vs. Nature of Warfare: What We Can Learn (Again) from Clausewitz

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]

No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.

Source:  Office of the Director of National Intelligence

Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]

Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”

Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.

The nature of something is the basic make up of that thing. It is, at its core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl von Clausewitz, what his views are on the matter.

Source: Tetsell’s Blog. https://tetsell.wordpress.com/2014/10/13/clausewitz/

He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.

The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past, and remains unfulfilled. The chaos of war, its fog, friction, and chance will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to age-old practices of denial, deception, and camouflage.


The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.

Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.

Drone Wars are Coming / Source: USNI Proceedings, July 2017, Vol. 143 / 7 /  1,373

Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.

Source: Lockheed Martin

Not to mention that the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is the tools it uses to execute that mission.

To learn more about the changing character of warfare:

– Read the TRADOC G-2’s The Operational Environment and the Changing Character of Warfare paper.

– Watch The Changing Character of Future Warfare video.

Additionally, please note that the content from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018, is now posted and available for your review:

– Read the “Top Ten” Takeaways from the Learning in 2050 Conference.

– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.

– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.

LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.

Note:  The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916.  Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War.  (Source:  Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)

_______________________________________________________

[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6.
[II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, machine waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/.
[III] Ibid.
[IV] Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85.
[V] Ibid., 87.
[VI] Ibid.
[VII] Ibid., 86.
[VIII] John Allen and Amir Husain, “On Hyperwar,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.

78. The Classified Mind – The Cyber Pearl Harbor of 2034

[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]

Preface: In my experience in cybersecurity, and in migrating to the broader cyber field, there have always been those exceptional individuals who possess an ability that cannot be replicated: they see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals, because someone must set the rules and the boundaries, and point out the trajectory, for Artificial Intelligence at its initiation.

Source: https://thebulletin.org/2017/10/neuroscience-and-the-new-weapons-of-the-mind/

As an industrialized society, we tend to see technology, and the information that feeds it, as the weapons, and we ignore the few humans who have a large-scale direct impact. Even if a human mind is identified as a weapon, how do you make it classified? Can we protect these high-ability individuals who, in the digital world, are themselves weapons, not tools but compilers of capability, or are we still focused on the tools? Why do we see only weapons of steel and electronics, and not the weaponized mind? I firmly believe that we underestimate the importance of Applicable Intelligence: the ability to play the cyber engagement in the optimal order. Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034, close enough to be realistic and far enough out for things to happen, at a time when our adversaries are betting that we rely on a few minds more than we are willing to accept.

Post: In a not too distant future, on the 20th of August 2034, a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives: watching a 3-D printer make a protein sandwich at a breakfast restaurant; stepping off the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone, when the geopolitical temperature increases but we are not yet at war, our adversary acts quickly and expedites a limited number of targeted killings within the United States of persons who are unknown to the mass media and the general public, and who have only one thing in common – Applicable Intelligence (AI).

The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available; anyone can download them from the Internet and use them, but the weaponization of the tools occurs when they are used by someone who understands how to play them in the optimal order. These minds are different because they see an opportunity to exploit in a digital fog of war where others don’t, or can’t, see it. They address problems unburdened by traditional thinking, in new and innovative ways, maximizing the dual-purpose nature of digital tools, and they can create tangible cyber effects.

It is Applicable Intelligence (AI) that creates the procedures and the application of tools, turning simple digital software, in sets or combinations, into digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.

Sadly, we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and, at the dawn of conflict, killed off the weaponized minds, without discretion and with no concern for international law or morality.

These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.

All of our efforts focused instead on what they delivered: the applications and the technology, which were hidden in secret vaults and discussed only in sensitive compartmented information facilities. We classify to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, the most critical component, the militarized intellect, we assign no value to, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is an input to the production of the technical machinery. The marveled-at technical machinery is the only thing we care about today, in 2018, and, as it turned out, in 2034 as well.

We are stuck in how we think, and we are unable to see it coming, but our adversaries see it. At a systemic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought into action when needed: the armory of the War of 1812, the stockpile of 1943, the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics. We failed in 2034 to see weapons made of corn, steak, and an added combative intellect.

General Nakasone stated in 2017, “Our best ones [coders] are 50 or 100 times better than their peers,” and continued, “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” In reality, the success of cyber and cyber operations depends not on the tools or toolsets but on the super-empowered individual whom General Nakasone calls “the 50-x coder.”

Manhattan Project K-25 Gaseous Diffusion Process Building, Oak Ridge, TN / Source: atomicarchive.com

There were signals that we could have noticed before General Nakasone pointed them out so clearly in 2017. The United States’ Manhattan Project during World War II had, at its peak, 125,000 workers on the payroll, but the intellects who drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in as we are by our path dependency as an engineering society that hails the technology and forgets the importance of the humans behind it.

J. Robert Oppenheimer – the militarized intellect behind the Manhattan Project / Source: Life Magazine

America’s endless love of technical innovations and advanced machinery is reflected in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies and ran through the transcontinental railroad, the Panama Canal, the manufacturing era, and the moon landing, all the way to today’s autonomous systems, drones, and robots. In this default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem or act.

The same mindset sees humans merely as an input to technology, interchangeable and replaceable. But in 2034, in an era of digital conflict and war between algorithms, with engagements occurring at machine speed and no time for leadership or human interaction, it is the intellects who design these systems and understand how to play them that matter. We didn’t see it.

In 2034, the Cyber Pearl Harbor resides in the fewer than twenty bodies piled up after the targeted killings. It was not imploding critical infrastructure, a tsunami of cyber attacks, or hackers flooding our financial systems, but traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don’t understand, unable to play it in the right order, and with limited intellectual torque to see through the fog of war produced by an exploding kaleidoscope of nodes and digital engagements.

Source: Shutterstock

If you enjoyed this post, read our Personalized Warfare post.

Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was previously a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP and ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in Strategic Studies Quarterly, Joint Force Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.

76. “Top Ten” Takeaways from the Learning in 2050 Conference

On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC.  Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces.  The new and innovative learning capabilities addressed at this conference will enable our Soldiers and Leaders to act quickly and decisively in a changing Operational Environment (OE) with fleeting windows of opportunity and more advanced and lethal technologies.

We have identified the following “Top Ten” takeaways related to Learning in 2050:

1. Many learning technologies built around commercial products (Amazon Alexa, smartphones, immersive technology, avatar experts) are available today for introduction into our training and educational institutions. Many of these technologies are part of the Army’s concept for a Synthetic Training Environment (STE), and nascent manifestations already exist. For these technologies to be widely available to the future Army, the Army of today must be prepared to address:

– The collection and exploitation of as much data as possible;

– The policy concerns with security and privacy;

– The cultural challenges associated with changing the dynamic between learners and instructors, teachers, and coaches; and

– The adequate funding to produce capabilities at scale, so that digital tutors and other technologies (Augmented Reality [AR] / Virtual Reality [VR], etc.), along with the skills required in a dynamic future, like critical thinking and groupthink mitigation, are widely available or perhaps ubiquitous.

2. Personalization and individualization of learning will be paramount in the future; some training that today takes place in physical schools will become the exception, with learning instead occurring at the point of need. This transformation will not be limited to lesson plans or even just learning styles:

– Intelligent tutors, Artificial Intelligence (AI)-driven instruction, and targeted mentoring/tutoring;

– Tailored timing and pacing of learning (when, where, and for what duration best suits the individual learner or group of learners?);

– Collaborative learning, with teams partnering to learn;

Targeted Neuroplasticity Training / Source: DARPA

– Various media and technologies that enable enhanced or accelerated learning (Targeted Neuroplasticity Training (TNT), haptic sensors, AR/VR, lifelong personal digital learning partners, pharmaceuticals, etc.) at scale;

– Project-oriented learning: when today’s high school students are building apps, they are asked, “What positive change do you want to have?” One example is an open table for Bully Free Tables. In the future, learners will learn through working on projects;

– Project-oriented learning will lead to a convergence of learning and operations, creating a chicken-or-egg relationship between learning (the chicken) and the mission or project (the egg); and

– Learning must be adapted to consciously address the desired, or extant, culture.

Drones Hanger / Source: Oshanin

3. Some jobs and skill sets have not even been articulated yet. Hobbies and recreational activities engaged in by kids and enthusiasts today could become the occupations or Military Occupational Specialties (MOSs) of the future (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer — think Minecraft and Fortnite with real-world physical implications). Emerging trends in personalized warfare, big data, and virtual nations could create the need for specialists that do not currently exist (e.g., data protection and/or data erasure specialists).

Mechanical Animal / Source: Pinterest

4. The New Human (who will be born in 2032 and is the recruit of 2050) will be fundamentally different from the Old Human. The Chief of Staff of the Army (CSA) of 2050 is a young Captain in our Army today. While we are arguably cyborgs already (with integrated electronics in our pockets and on our wrists), the New Humans will likely be cyborgs in the truest sense of the word, with some having embedded sensors. How will those New Humans learn? What will they need to learn? Why would they want to learn something? These are all critical questions the Army will continue to ask over the next several decades.

Source: iLearn

5. Learning is continuous and self-initiated, while education is a point in time and is “done to you” by someone else. Learning may result in a certificate or degree – similar to education – or can lead to the foundations of a skill or a deeper understanding of operations and activity. How will organizations quantify learning in the future? Will degrees or even certifications still be the benchmark for talent and capability?

Source: The Data Feed Toolbox

6. Learning isn’t slowing down; it’s speeding up. More and more things are becoming instantaneous, and humans have no concept of extreme speed. Tesla cars update their software over the air, with owners getting into what is effectively a different car each day. What happens to our Soldiers when military vehicles change much more iteratively? This may force a paradigm shift wherein learning means tightening local and global connections (tough to do considering government/military network security, firewalls, vulnerabilities, and constraints); viewing technology as extended brains all networked together (similar to Dr. Alexander Kott’s look at the Internet of Battlefield Things [IoBT]); and leveraging these capabilities to enable Soldier learning at extremely high speeds.

Source: Connecting Universes

7. While there are a number of emerging concepts and technologies to improve and accelerate learning (TNT, extended reality, personalized learning models, and intelligent tutors), the focus, training stimuli, data sets, and desired outcomes all have to be properly tuned and aligned, or the Learner could end up losing correct behavioral habits (developing maladaptive plasticity), developing incorrect or skewed behaviors (relative to the desired capability), or acquiring unintended cognitive biases.

Source: TechCrunch

8. Geolocation may become increasingly less important for learning in the future. If Apple required users to go to Silicon Valley to get trained on an iPhone, it would be exponentially less successful; yet this is how the Army currently trains. The ubiquity of connectivity, the growth of the Internet of Things (and eventually the Internet of Everything), the introduction of universal interfaces (think one XBOX controller capable of controlling 10 different types of vehicles), major advances in modeling and simulation, and social media innovation all converge to minimize the need for teachers, students, mentors, and learners to be collocated in the same physical place.

Transdisciplinarity at Work / Source: https://www.cetl.hku.hk

9. Significant questions have to be asked about how narrowly we train children at a young age: we may be overemphasizing STEM early on and failing to help them learn across a wider spectrum. We need Transdisciplinarity in the coming generations.

10. 3-D reconstructions of bases, training areas, cities, and military objectives coupled with mixed reality, haptic sensing, and intuitive controls have the potential to dramatically change how Soldiers train and learn when it comes to not only single performance tasks (e.g., marksmanship, vehicle driving, reconnaissance, etc.) but also in dense urban operations, multi-unit maneuver, and command and control.

Heavy Duty by rOEN911 / Source: DeviantArt

During the next two weeks, we will be posting the videos from each of the Learning in 2050 Conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel and the associated slides on our Mad Scientist APAN site — stay connected here at the Mad Scientist Laboratory.

One of the main thrusts in the Mad Scientist lines of effort is harnessing and cultivating the Intellect of the Nation. In this vein, we are asking Learning in 2050 Conference participants (both in person and online) to share their ideas on the presentations and topic. Please consider:

– What topics were most important to you personally and professionally?

– What were your main takeaways from the event?

– What topics did you want the speakers to elaborate on further?

– What were the implications for your given occupation/career field from the findings of the event?

Your input will be of critical importance to our analysis and products, which will have a significant impact on the design, structuring, planning, and training of the future force! Please submit your input to Mad Scientist at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.