136. Future Threats: Climate Change and Islamic Terror

[Editor’s Note:  Mad Scientist Laboratory welcomes back returning guest blogger Mr. Matthew Ader, whose cautionary post warns of the potential convergence of Islamic terrorism and climate change activism, possibly resonating with western populations that have not been (to date) predisposed to listening to their messaging. (Note:  Some of the embedded links in this post are best accessed using non-DoD networks.)]

Source:  NASA

Climate change is increasingly being viewed not only as an ecological or economic concern, but as a direct security threat. It both endangers vital coastal infrastructure through sea level rise and multiplies existing issues of food insecurity and migration. However, in these analyses, one issue in particular is missed – the likely emergence of transnational terrorist networks which fuse climate grievance with Islamic terrorism.

Earth Liberation Front (ELF) logo / Source: Wikimedia Commons

Ecologically inspired terrorism is, of course, hardly a new concept. There are dozens of ecoterrorist organisations, and some have gained substantial notoriety. The archetypal example is the Earth Liberation Front, which was highly active in the early 2000s. However, because they tend to operate in developed nations, these groups generally lack the safe areas and large, disenfranchised recruiting bases which empower terrorists elsewhere.

Ecoterrorism, however, is not limited to the developed world – for example, two years ago, an ecoterrorist group detonated a makeshift bomb in Brazil. As the impact of climate change grows ever more severe in the developing world, it is probable that there will be more direct climate-change inspired terrorism. This is especially likely given that the populations of developing nations are increasingly connected to the international information infrastructure – allowing more widespread comprehension of climate change as a global phenomenon with roots in western nations.

Map of the Earth with a six-meter sea level rise represented in red / Source:  NASA

This adds a new dimension to the terrorist threat. More worrying still is the potential for radical Islamic terrorist organisations to co-opt ecoterrorist groups.

Islam contains a strong thread of environmental stewardship. This is not a call for violence in protection of the Earth, but it has already been exploited by radical groups – for example, Al Shabaab banning plastic bags or the Taliban’s endorsement of afforestation. This gives the groups legitimacy in their area of operations. As climate change worsens and grievance intensifies, it is highly likely that this vein of stewardship of the Earth will strengthen in Islamic terrorist propaganda – both as a way of reinforcing legitimacy and to gain recruits or support.

If radical Islamic terrorists can harness climate change grievance, then the threat they offer against western interests increases substantially. This is for three key reasons:

Image from Islamic State propaganda video / Source:  Wikipedia

Firstly, Islamic terrorist groups such as Al Qaeda in the Arabian Peninsula or Daesh tend to have relatively developed infrastructure for propaganda and training. While U.S.-led counterterror operations have proven effective in reducing the threat they pose, the carnage in the Bataclan, Manchester Arena, and Nice – to name but a few incidents – clearly indicates that Islamic terrorists can still mount both expeditionary and homegrown terrorist attacks.

Improvised Explosive Device (IED) / Source:  IDF – Wikimedia Commons

Secondly, Islamic terrorist groups have subject matter expertise regarding explosives and strong links with IED supplier networks. The aforementioned Brazilian ecoterrorist group failed to inflict casualties with their crude bomb. If equipped with military-grade high explosive, of the type used by more ‘professional’ terrorist organisations, then the attack could have been much more devastating.


Thirdly, the audience for radical, violent Islamic teaching is very small, and much of it is in the Middle East. The audience for climate grievance is far larger – 70% of Americans aged 18-34 worry a great deal or a fair amount about climate change – and global. This is obviously not to suggest that all climate change activists or people concerned about it are putative terrorists.

People’s Climate March 2017 in Washington DC / Source: Wikimedia Commons

However, if even 1 in 1,000 of that American cohort were willing to take more robust action – such as giving support to terrorists, or even carrying out attacks themselves – it would comprise a support base of approximately 47,200 people. That presents a significant threat, made only worse by the ‘moral fairness’ of climate terrorism – attacking the U.S. over vague oppression of Muslims plays differently in media and politics than attacking the U.S. because of its very real role as one of the world’s largest polluters.
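The support-base figure can be sanity-checked with back-of-envelope arithmetic. In the sketch below, the cohort size is an assumption introduced for illustration (roughly 67.4 million Americans aged 18-34); only the 70% share and the 1-in-1,000 fraction come from the text.

```python
# Back-of-envelope check of the ~47,200-person support-base estimate.
# ASSUMPTION: ~67.4 million Americans aged 18-34 (approximate cohort
# size, not a figure from the post).
population_18_34 = 67_400_000
share_concerned = 0.70        # "worry a great deal or a fair amount"
willing_fraction = 1 / 1000   # hypothetical 1-in-1,000 willingness

concerned = population_18_34 * share_concerned
support_base = concerned * willing_fraction
print(round(support_base))    # on the order of 47,000 people
```

The point of the exercise is that even a tiny willingness fraction, applied to a very large concerned population, yields a sizable absolute number.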

This is of course a brief overview. However, the possibility of a hybridisation of climate change grievance and radical Islamic terrorism is too dangerous to ignore. More research is required, and urgently, to ascertain the extent of the risk and find ways to mitigate it. The world community was practically blindsided by the emergence of Al Qaeda. It would be unacceptably irresponsible to let such a failure happen again.

If you enjoyed this post, please also:

Read Mr. Ader‘s previously published blog posts:

War Laid Bare

Decision in the 21st Century

– See Dr. Gary Ackerman‘s presentation and slide deck on “Non-State actors and their uses of emerging technologies” from the Mad Scientist Robotics, Artificial Intelligence & Autonomy Conference, facilitated at Georgia Tech Research Institute (GTRI), on 7-8 March 2017.

– Review the following additional blog posts:

Trouble in Paradise: The Technological Upheaval of Modern Political and Economic Systems, by Ms. Marie Murphy, and

Emergent Threat Posed by Super-Empowered Individuals.

Crank up Neil Young‘s Mother Earth!

Mr. Matthew Ader is a first-year undergraduate taking War Studies at King’s College London.

Disclaimer: Mr. Ader is not affiliated with U.S. Army Training and Doctrine Command, the U.S. Army, or the U.S. Government. This piece is meant to be thought-provoking and does not reflect the current position of the U.S. Army.

78. The Classified Mind – The Cyber Pearl Harbor of 2034

[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]

Preface: Based on my experience in cybersecurity, migrating to a broader cyber field, there have always been those exceptional individuals that have an unreplicable ability to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals – because someone must set the rules, the boundaries, and point out the trajectory for Artificial Intelligence at initiation.

Source: https://thebulletin.org/2017/10/neuroscience-and-the-new-weapons-of-the-mind/

As an industrialized society, we tend to see technology, and the information that feeds it, as the weapons – and ignore the few humans who have a large-scale direct impact. Even if a mind is identified as a weapon, how do you make it classified? Can we protect these high-ability individuals who, in the digital world, are weapons – not tools but compilers of capability – or are we still focused on the tools? Why do we see only weapons of steel and electronics, and not the weaponized mind?  I firmly believe that we underestimate the importance of Applicable Intelligence – the ability to play the cyber engagement in the optimal order.  Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034 – close enough to be realistic, and far enough out for things to happen – when our adversaries are betting that we rely more on a few minds than we are willing to accept.

Post:  In a not too distant future – the 20th of August 2034 – a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives:  watching a 3-D printer make a protein sandwich at a breakfast restaurant; stepping out from the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone – when the geopolitical temperature rises but we are not yet at war – our adversary acts quickly, expediting a limited number of targeted killings within the United States of persons who are unknown to mass media and the general public, and who have only one thing in common – Applicable Intelligence (AI).

The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available – anyone can download them from the Internet and use them – but weaponization occurs when they are used by someone who understands how to play them in the optimal order. These minds are different because they see an opportunity to exploit in a digital fog of war where others don’t or can’t see it. They address problems unburdened by traditional thinking, in new and innovative ways, maximizing the dual-purpose nature of digital tools to create tangible cyber effects.

It is Applicable Intelligence (AI) that creates the procedures, applies the tools, and combines simple digital software, in sets and combinations, into digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.

Sadly we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and at the dawn of conflict killed off the weaponized minds, without discretion, and with no concern for international law or morality.

These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.

All of our efforts instead focused on what they delivered – the application and the technology – which was hidden in secret vaults and only discussed in sensitive compartmented information facilities. We classify to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, we assign no value to the most critical component, the militarized intellect, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity input to the production of the technical machinery. The marveled-at technical machinery is the only thing we care about today, in 2018 – and, as it turned out, in 2034 as well.

We are stuck in how we think, and we are unable to see it coming – but our adversaries see it. At a systemic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible – painted black, tan, or green – that can be stored and brought to action when needed. As the armory of the War of 1812, the stockpile of 1943, and the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics – and so in 2034 we failed to see weapons made of corn, steak, and an added combative intellect.

General Nakasone stated in 2017, “Our best ones [coders] are 50 or 100 times better than their peers,” and continued “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” In reality, the success of cyber and cyber operations is highly dependent not on the tools or toolsets but instead upon the super-empowered individual that General Nakasone calls “the 50-x coder.”

Manhattan Project K-25 Gaseous Diffusion Process Building, Oak Ridge, TN / Source: atomicarchive.com

There were clear signals that we could have noticed before General Nakasone pointed it out clearly in 2017. The United States’ Manhattan Project during World War II had at its peak 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in by our path dependency as an engineering society that hails the technology and forgets the importance of the humans behind it.

J. Robert Oppenheimer – the militarized intellect behind the  Manhattan Project / Source: Life Magazine

America’s endless love of technical innovations and advanced machinery reflects in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies, followed by the intercontinental railroad, the Panama Canal, the manufacturing era, the moon landing, and all the way to the autonomous systems, drones, and robots. In a default mindset, there is always a tool, an automated process, a software, or a set of technical steps that can solve a problem or act.

The same mindset sees humans merely as an input to technology, so humans are interchangeable and replaceable. In 2034 – the era of digital conflicts and wars between algorithms, with engagements occurring at machine speed and no time for leadership or human interaction – it is the intellects that design these systems and understand how to play them. We didn’t see it.

In 2034, the Cyber Pearl Harbor resides in fewer than twenty bodies piled up after targeted killings. It was not imploding critical infrastructure, a tsunami of cyber attacks, nor hackers flooding our financial systems, but traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don’t understand, unable to play it in the right order, and with limited intellectual torque to see through the fog of war produced by an exploding kaleidoscope of nodes and digital engagements.

Source: Shutterstock

If you enjoyed this post, read our Personalized Warfare post.

Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was earlier a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and earned a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP, ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in the Strategic Studies Quarterly, Joint Forces Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.

76. “Top Ten” Takeaways from the Learning in 2050 Conference

On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC.  Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces.  The new and innovative learning capabilities addressed at this conference will enable our Soldiers and Leaders to act quickly and decisively in a changing Operational Environment (OE) with fleeting windows of opportunity and more advanced and lethal technologies.

We have identified the following “Top 10” takeaways related to Learning in 2050:

1. Many learning technologies built around commercial products are available today (Amazon Alexa, Smart Phones, Immersion tech, Avatar experts) for introduction into our training and educational institutions. Many of these technologies are part of the Army’s concept for a Synthetic Training Environment (STE) and there are nascent manifestations already.  For these technologies to be widely available to the future Army, the Army of today must be prepared to address:

– The collection and exploitation of as much data as possible;

– The policy concerns with security and privacy;

– The cultural challenges associated with changing the dynamic between learners and instructors, teachers, and coaches; and

– The adequate funding to produce capabilities at scale so that digital tutors or other technologies (Augmented Reality [AR] / Virtual Reality [VR], etc.) and skills required in a dynamic future, like critical thinking/group think mitigation, are widely available or perhaps ubiquitous.

2. Personalization and individualization of learning in the future will be paramount, and some training that today takes place in physical schools will be more the exception, with learning occurring at the point of need. This transformation will not be limited to lesson plans or even just learning styles:

– Intelligent tutors, Artificial Intelligence (AI)-driven instruction, and targeted mentoring/tutoring;

– Tailored timing and pacing of learning (when, where, and for what duration best suits the individual learner or group of learners?);

– Collaborative learners will be teams partnering to learn;

Targeted Neuroplasticity Training / Source: DARPA

– Various media and technologies that enable enhanced or accelerated learning (Targeted Neuroplasticity Training (TNT), haptic sensors, AR/VR, lifelong personal digital learning partners, pharmaceuticals, etc.) at scale;

– Project-oriented learning; when today’s high school students are building apps, they are asked “What positive change do you want to have?” One example is an open table for Bully Free Tables. In the future, learners will learn through working on projects;

– Project-oriented learning will lead to a convergence of learning and operations, creating a chicken (learning) or the egg (mission/project) relationship; and

– Learning must be adapted to consciously address the desired, or extant, culture.

Drones Hanger / Source: Oshanin

3. Some jobs and skill sets have not even been articulated yet. Hobbies and recreational activities engaged in by kids and enthusiasts today could become occupations or Military Occupational Specialties (MOSs) of the future (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer — think Minecraft and Fortnite with real-world physical implications). Some emerging trends in personalized warfare, big data, and virtual nations could bring about the necessity for more specialists that don’t currently exist (e.g., data protection and/or data erasure specialists).

Mechanical Animal / Source: Pinterest

4. The New Human (who will be born in 2032 and is the recruit of 2050) will be fundamentally different from the Old Human. The Chief of Staff of the Army (CSA) in 2050 is currently a young Captain in our Army today. While we are arguably cyborgs today (with integrated electronics in our pockets and on our wrists), the New Humans will likely be cyborgs in the truest sense of the word, with some having embedded sensors. How will those New Humans learn? What will they need to learn? Why would they want to learn something? These are all critical questions the Army will continue to ask over the next several decades.

Source: iLearn

5. Learning is continuous and self-initiated, while education is a point in time and is “done to you” by someone else. Learning may result in a certificate or degree – similar to education – or can lead to the foundations of a skill or a deeper understanding of operations and activity. How will organizations quantify learning in the future? Will degrees or even certifications still be the benchmark for talent and capability?

Source: The Data Feed Toolbox

6. Learning isn’t slowing down; it’s speeding up. More and more things are becoming instantaneous, and humans have little concept of extreme speed. Tesla cars receive over-the-air software updates, with owners getting into an effectively different car each day. What happens to our Soldiers when military vehicles change just as iteratively? This may force a paradigm shift wherein learning means tightening local and global connections (tough to do considering government/military network security, firewalls, vulnerabilities, and constraints); viewing technologies as extended brains all networked together (similar to Dr. Alexander Kott’s look at the Internet of Battlefield Things [IoBT]); and leveraging these capabilities to enable Soldier learning at extremely high speeds.

Source: Connecting Universes

7. While there are a number of emerging concepts and technologies to improve and accelerate learning (TNT, extended reality, personalized learning models, and intelligent tutors), the focus, training stimuli, data sets, and desired outcomes all have to be properly tuned and aligned or the Learner could end up losing correct behavior habits (developing maladaptive plasticity), developing incorrect or skewed behaviors (per the desired capability), or assuming inert cognitive biases.

Source: TechCrunch

8. Geolocation may become increasingly less important when it comes to learning in the future. If Apple required users to go to Silicon Valley to get trained on an iPhone, they would be exponentially less successful. But this is how the Army currently trains. The ubiquity of connectivity, the growth of the Internet of Things (and eventually Internet of Everything), the introduction of universal interfaces (think one XBOX controller capable of controlling 10 different types of vehicles), major advances in modeling and simulations, and social media innovation all converge to minimize the importance of teachers, students, mentors, and learners being collocated at the same physical location.

Transdisciplinarity at Work / Source: https://www.cetl.hku.hk

9. Significant questions must be asked about how narrowly we train children from a young age – we may be overemphasizing STEM early on rather than helping them learn across a wider spectrum. We need Transdisciplinarity in the coming generations.

10. 3-D reconstructions of bases, training areas, cities, and military objectives coupled with mixed reality, haptic sensing, and intuitive controls have the potential to dramatically change how Soldiers train and learn when it comes to not only single performance tasks (e.g., marksmanship, vehicle driving, reconnaissance, etc.) but also in dense urban operations, multi-unit maneuver, and command and control.

Heavy Duty by rOEN911 / Source: DeviantArt

During the next two weeks, we will be posting the videos from each of the Learning in 2050 Conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel and the associated slides on our Mad Scientist APAN site — stay connected here at the Mad Scientist Laboratory.

One of the main thrusts in the Mad Scientist lines of effort is harnessing and cultivating the Intellect of the Nation. In this vein, we are asking Learning in 2050 Conference participants (both in person and online) to share their ideas on the presentations and topic. Please consider:

– What topics were most important to you personally and professionally?

– What were your main takeaways from the event?

– What topics did you want the speakers to extrapolate more on?

– What were the implications for your given occupation/career field from the findings of the event?

Your input will be of critical importance to our analysis and products that will have significant impact on the future of the force in design, structuring, planning, and training!  Please submit your input to Mad Scientist at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.

67. “The Tenth Man”

Source: Yahoo

[Editor’s Note: In the movie World War Z (I know… the book was way better!), an Israeli security operative describes how Israel prepared for the coming zombie plague. Their strategy was if nine men agreed on an analysis or a course of action, the tenth man had to take an alternative view.

This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging. The Mad Scientist Laboratory will begin a series of posts entitled “The Tenth Man” to offer a platform for the contrarians in our network (I know you’re out there!) to share their alternative perspectives and analyses regarding the Future Operational Environment.]

Our foundational assumption about the Future Operational Environment is that the Character of Warfare is changing due to an exponential convergence of emerging technologies. Artificial Intelligence, Robotics, Autonomy, Quantum Sciences, Nano Materials, and Neuro advances will mean more lethal warfare at machine speed, integrated seamlessly across all five domains – air, land, sea, cyber, and space.

We have consistently seen four main themes used to counter this idea of a changing character of war, driven by technology:

Source: danovski11 / DeviantArt

1. Cost of Robotic Warfare: All armies must plan for the need to reconstitute forces. This is particularly ingrained in the U.S. Army’s culture where we have often lost the first battles in any given conflict (e.g., Kasserine Pass in World War II and Task Force Smith in Korea). We cannot afford to have a “one loss” Army where our national wealth and industrial base can not support the reconstitution of a significant part of our Army. A high-cost, roboticized Army might also limit our political leaders’ options for the use of military force due to the risk of loss and associated cost.

Gartner Hype Cycle

2. Technology Hype: Technologists are well aware of the idea of a hype cycle when forecasting emerging technologies. Machine learning was all the rage in the 1970s, but the technology needed to drive these tools did not exist. Improved computing has finally helped us realize this vision, forty years later. The U.S. Army’s experience with the Future Combat System hits a nerve when assumptions of the future require the integration of emerging technologies.

Source: Fallout 4

3. Robotic Warfare: A roboticized Army is over-optimized to fight against a peer competitor, which is the least likely mission the Army will face. We build an Army and develop Leaders first and foremost to protect our Nation’s sovereignty. This means having an Army capable of deterring, and failing that, defeating peer competitors. At the same time, this Army must be versatile enough to execute a myriad of additional missions across the full spectrum of conflict. A hyper-connected Army enabled by robots with fewer Soldiers will be challenged in executing missions requiring significant human interactions such as humanitarian relief, building partner capacity, and counter-insurgency operations.

4. Coalition Warfare: A technology-enabled force will exacerbate interoperability challenges with both our traditional and new allies. Our Army will not fight unilaterally on future battlefields. We have had difficulties with the interoperability of communications, and gaps between capabilities have increased mission risks. These risks were offset by the skills our allies brought to the battlefield. We cannot build an Army that does not account for a coalition battlefield, and our allies may not be able to afford the tech-enabled force envisioned in the Future Operational Environment.

All four of these assumptions are valid and should be further studied as we build the Army of 2028 and the Army of 2050. There are many other contrarian views about the Future Operational Environment, and so we are calling upon our network to put on their red hats and be our “Tenth Man.”

If you have an idea or concept that challenges or runs contrary to our understanding of the Future Operational Environment as described here in the Mad Scientist Laboratory, The Operational Environment and the Changing Character of Future Warfare paper, and The Changing Character of Future Warfare video, please draft it up as a blog post and forward it to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for our next edition of “The Tenth Man”!

56. An Appropriate Level of Trust…

The Mad Scientist team participates in many thought exercises, tabletops, and wargames associated with how we will live, work, and fight in the future. A consistent theme in these events is the idea that a major barrier to the integration of robotic systems into Army formations is a lack of trust between humans and machines. This assumption rings true as we hear the media and opinion polls describe how society doesn’t trust some disruptive technologies, like driverless cars or the robots coming for our jobs.

In his recent book, Army of None, Paul Scharre describes an event that nearly led to a nuclear confrontation between the Soviet Union and the United States. On September 26, 1983, LTC Stanislav Petrov, a Soviet officer serving in a bunker outside Moscow, was alerted to a U.S. missile launch by a recently deployed space-based early warning system. Petrov trusted his “gut” – his experientially informed intuition – that this was a false alarm. His gut was right, and the world was saved from an inadvertent nuclear exchange because this officer did not over trust the system. But is this the rule or an exception to how humans interact with technology?

The subject of trust between Soldiers, Soldiers and Leaders, and the Army and society is central to the idea of the Army as a profession. At the most tactical level, trust is seen as essential to combat readiness as Soldiers must trust each other in dangerous situations. Humans naturally learn to trust their peers and subordinates once they have worked with them for a period of time. You learn what someone’s strengths and weaknesses are, what they can handle, and under what conditions they will struggle. This human dynamic does not translate to human-machine interaction and the tendency to anthropomorphize machines could be a huge barrier.

We recommend that the Army explore the possibility that Soldiers and Leaders could over trust AI and robotic systems. Over trust of these systems could blunt human expertise, judgement, and intuition thought to be critical to winning in complex operational environments. Also, over trust might lead to additional adversarial vulnerabilities such as deception and spoofing.

In 2016, a research team at the Georgia Institute of Technology revealed the results of a study entitled “Overtrust of Robots in Emergency Evacuation Scenarios”. The research team put 42 test participants into a fire emergency with a robot responsible for escorting them to an emergency exit. As the robot passed obvious exits and got lost, 37 participants continued to follow the robot and an additional 2 stood with the robot and didn’t move towards either exit. The study’s takeaway was that roboticists must think about programs that will help humans establish an “appropriate level of trust” with robot teammates.
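Using only the figures reported from the study above, a quick tally shows how lopsided the behaviour was (a minimal sketch; the variable names are mine, and the count of self-evacuating participants is simply the remainder implied by the reported numbers):

```python
# Tallying the overtrust result from the Georgia Tech study cited above.
participants = 42
followed_lost_robot = 37      # kept following past obvious exits
froze_with_robot = 2          # stood with the robot, moved to no exit
self_evacuated = participants - followed_lost_robot - froze_with_robot

followed_pct = 100 * followed_lost_robot / participants
print(f"{followed_pct:.0f}% followed the lost robot; "
      f"only {self_evacuated} of {participants} evacuated on their own")
```

Roughly nine in ten participants deferred to a visibly malfunctioning machine, which is the overtrust pattern the post warns about.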

In Future Crimes, Marc Goodman describes the idea of “In Screen We Trust” and the vulnerabilities this trust builds into our interaction with automation. His example of the cyber-attack against the Iranian uranium enrichment centrifuges highlights the vulnerability of experts believing, or trusting, their screens against mounting evidence that something else might be contributing to the centrifuges’ failure. These experts overtrusted their technology, or simply did not have an “appropriate level of trust.” What does this have to do with Soldiers on the future battlefield? Increasingly, we depend on our screens and, in the future, our heads-up displays to translate the world around us. This translation will only become more demanding on a future battlefield where war unfolds at machine speed.

So what should our assumptions be about trust and our robotic teammates on the future battlefield?

1) Soldiers and Leaders will react differently to technology integration.

2) Capability developers must account for trust building factors in physical design, natural language processing, and voice communication.

3) Intuition and judgement remain a critical component of human-machine teaming and operating on the future battlefield. Speed becomes a major challenge as humans become the weak link.

4) Building an “appropriate level of trust” will need to be part of Leader Development and training. Mere expertise in a field does not prevent overtrust when interacting with our robotic teammates.

5) Lastly, lack of trust is not a barrier to AI and robotic integration on the future battlefield. These capabilities will exist in our formations as well as those of our adversaries. The formation that develops the best concepts for effective human-machine teaming, with trust being a major component, will have the advantage.

Interested in learning more on this topic? Watch Dr. Kimberly Jackson Ryan (Draper Labs).

[Editor’s Note:  A special word of thanks goes out to fellow Mad Scientist Mr. Paul Scharre for sharing his ideas with the Mad Scientist team regarding this topic.]

55. Influence at Machine Speed: The Coming of AI-Powered Propaganda

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following guest blog post by MAJ Chris Telley, U.S. Army, assigned to the Naval Postgraduate School, addressing how Artificial Intelligence (AI) must be understood as an Information Operations (IO) tool if U.S. defense professionals are to develop effective countermeasures and ensure our resilience to its employment by potential adversaries.]

AI-enabled IO present a more pressing strategic threat than the physical hazards of slaughter-bots or even algorithmically-escalated nuclear war. IO are efforts to “influence, disrupt, corrupt, or usurp the decision-making of adversaries and potential adversaries;” here, we’re talking about using AI to do so. AI-guided IO tools can empathize with an audience to say anything, in any way needed, to change the perceptions that drive those physical weapons. Future IO systems will be able to individually monitor and affect tens of thousands of people at once. Defense professionals must understand the fundamental influence potential of these technologies if they are to drive security institutions to counter malign AI use in the information environment.

Source: Peter Adamis / Abalinx.com

Programmatic marketing, which uses consumers’ data habits to drive real-time automated bidding on personalized advertising, has been in use for several years now. Cambridge Analytica’s Facebook targeting made international headlines using similar techniques, but digital electioneering is just the tip of the iceberg. An AI trained with data from users’ social media accounts, economic media interactions (Uber, Apple Pay, etc.), and their devices’ positional data can infer predictive knowledge of its targets. With that knowledge, emerging tools like Replika can truly befriend a person, and then use that friendship to train the individual, for good or ill.

Source: Getty Creative

Substantive feedback is required to train an individual’s response; humans tend to respond best to content and feedback with which they agree. That content can be algorithmically mass produced. For years, Narrative Science tools have helped writers create sports stories and stock summaries, but it’s just as easy to use them to create disinformation. That’s just text, though; today, the AI can create fake video. A recent warning, ostensibly from former President Obama, provides an entertaining yet frightening demonstration of how Deepfakes will challenge our presumptions about truth in the coming years. The Defense Advanced Research Projects Agency (DARPA) is funding a project this summer to determine whether AI-generated Deepfakes will become impossible to distinguish from the real thing, even using other AI systems.

Although malign actors can now employ AI to lie “at machine speed,” they still have to get the story to an audience. Russian bot armies continue to make headlines doing this very thing. The New York Times maintains about a dozen Twitter feeds and produces around 300 tweets a day, but Russia’s Internet Research Agency (IRA) regularly puts out 25,000 tweets in the same twenty-four hours. The IRA’s bots are really just low-tech curators; they collect, interpret, and display desired information to promote the Kremlin’s narratives.
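The throughput gap between a flagship newsroom and a single troll farm is worth putting in plain numbers, using the daily figures cited above:

```python
# Rough output comparison from the figures cited above.
nyt_tweets_per_day = 300      # ~a dozen NYT feeds, ~300 tweets/day
ira_tweets_per_day = 25_000   # IRA output in the same 24 hours

ratio = ira_tweets_per_day / nyt_tweets_per_day
print(f"The IRA's bot output is roughly {ratio:.0f}x the NYT's daily volume")
# → The IRA's bot output is roughly 83x the NYT's daily volume
```

And that is the low-tech, human-curated baseline; AI-generated content removes even that bottleneck.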

Source: Josep Lago/AFP/Getty Images

Next-generation bot armies will employ far faster computing techniques and profit from an order-of-magnitude increase in network speed once 5G services are fielded. If “Repetition is a key tenet of IO execution,” then this machine-gun-like ability to fire information at an audience will, with empathetic precision and custom content, provide the means to change a decisive audience’s very reality. No breakthrough science is needed, no bureaucratic project office required. The pieces are already there, waiting for an adversary to put them together.

The DoD is looking at AI but remains focused on image classification and swarming quadcopters, while ignoring the convergent possibilities of predictive audience understanding, tailored content production, and massive-scale dissemination. What little digital IO we’ve done, sometimes called social media “WebOps,” has been contractor-heavy and prone to naïve missteps. However, groups like USSOCOM’s SOFWERX and the students at the Naval Postgraduate School are advancing the state of our art. At NPS, future senior leaders are working on AI, now. A half-dozen of the school’s departments have stood up classes and events specifically aimed at operationalizing advanced computing. The young defense professionals currently working on AI should grapple with emerging influence tools and form the foundation of the DoD’s future institutional capabilities.

MAJ Chris Telley is an Army information operations officer assigned to the Naval Postgraduate School. His assignments have included theater engagement at U.S. Army Japan and advanced technology integration with the U.S. Air Force. Chris commanded in Afghanistan and served in Iraq as a United States Marine. He tweets at @chris_telley.

This blog post represents the opinions of the author and does not reflect the position of the Army or the United States Government.

52. Potential Game Changers

The Mad Scientist Initiative brings together cutting-edge leaders and thinkers from the technology industry, research laboratories, academia, and across the military and Government to explore the impact of potentially disruptive technologies. Much like Johannes Gutenberg’s moveable type (illustrated above), these transformational game changers have the potential to impact how we live, create, think, and prosper. Understanding their individual and convergent impacts is essential to continued battlefield dominance in the Future Operational Environment. In accordance with The Operational Environment and the Changing Character of Future Warfare, we have divided this continuum into two distinct timeframes:

The Era of Accelerated Human Progress (Now through 2035):
The period where our adversaries can take advantage of new technologies, new doctrine, and revised strategic concepts to effectively challenge U.S. military forces across multiple domains. Game changers during this era include:

• Robotics: Forty plus countries develop military robots with some level of autonomy. Impact on society, employment.
Vulnerable: To Cyber/Electromagnetic (EM) disruption, battery life, ethics without man in the loop.
Formats: Unmanned/Autonomous; ground/air vehicles/subsurface/sea systems. Nano-weapons.
Examples: (Air) Hunter/killer Unmanned Aerial Vehicle (UAV) swarms; (Ground) Russian Uran: Recon, ATGMs, SAMs.

• Artificial Intelligence: Human-Agent Teaming, where humans and intelligent systems work together to achieve either a physical or mental task. The human and the intelligent system will trade-off cognitive and physical loads in a collaborative fashion.

• Swarms/Semi Autonomous: Massed, coordinated, fast, collaborative, small, stand-off. Overwhelm target systems. Mass or disaggregate.

• Internet of Things (IoT): Trillions of internet linked items create opportunities and vulnerabilities. Explosive growth in low Size Weight and Power (SWaP) connected devices (Internet of Battlefield Things), especially for sensor applications (situational awareness). Greater than 100 devices per human. Significant end device processing (sensor analytics, sensor to shooter, supply chain management).
Vulnerable: To Cyber/EM/Power disruption. Privacy concerns regarding location and tracking.
Sensor to shooter: Accelerate kill chain, data processing, and decision-making.

• Space: Over 50 nations operate in space, which is increasingly congested and difficult to monitor, endangering Positioning, Navigation, and Timing (PNT).

GPS Jamming/Spoofing: Increasingly sophisticated, used successfully in Ukraine.
Anti Satellite: China has tested two direct ascent anti-satellite missiles.

The Era of Contested Equality (2035 through 2050):
The period marked by significant breakthroughs in technology and convergences in terms of capabilities, which lead to significant changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes which at the end of this timeframe may even challenge the very nature of warfare itself. Game changers during this era include:

• Hyper Velocity Weapons:
Rail Guns (Electrodynamic Kinetic Energy Weapons): Electromagnetic projectile launchers. High velocity/high energy (Mach 5 or higher). Not powered by explosives.
No Propellant: Easier to store and handle.
Lower Cost Projectiles: Potentially. Extreme G-force requires sturdy payloads.
Limiting factors: Power. Significant IR signature. Materials science.
Hyper Glide Vehicles: Less susceptible to anti-ballistic missile countermeasures.
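The appeal of a non-explosive round at Mach 5 is easier to see with a back-of-the-envelope kinetic energy figure. The 10 kg projectile mass and sea-level speed of sound below are illustrative assumptions, not program specifications:

```python
# Back-of-the-envelope kinetic energy for a rail gun round at Mach 5.
# The projectile mass (10 kg) is a hypothetical figure for illustration.
SPEED_OF_SOUND = 343.0  # m/s at sea level, standard conditions
mach = 5
mass_kg = 10.0          # assumed projectile mass

velocity = mach * SPEED_OF_SOUND              # 1715 m/s
energy_joules = 0.5 * mass_kg * velocity**2   # KE = 1/2 * m * v^2
print(f"~{energy_joules / 1e6:.1f} MJ of kinetic energy on target")
# → ~14.7 MJ of kinetic energy on target
```

That is several kilograms of TNT-equivalent delivered purely by velocity, which is why no explosive fill (and hence easier storage and handling) is a credible trade.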

• Directed Energy Weapons: Signature not visible without technology, must dwell on target. Power requirements currently problematic.
Potential: Tunable, lethal, and non-lethal.
Laser: Directed energy damages intended target. Targets: Counter Aircraft, UAS, Missiles, Projectiles, Sensors, Swarms.
Radio Frequency (RF): Attack targets across the frequency spectrum. Targets: Not just RF; microwave weapons can “cook” targets, including people and electronics.

• Synthetic Biology: Engineering / modification of biological entities
Increased Crop Yield: Potential to reduce food scarcity.
Weaponization: Potential for micro-targeting, Seek & destroy microbes that can target DNA. Potentially accessible to super-empowered individuals.
Medical Advances: Enhance soldier survivability.
Genetic Modification: Disease resistant, potentially designer babies and super athletes/soldiers. Synthetic DNA stores digital data. Data can be used for micro-targeting.
CRISPR: Genome editing.

• Information Environment: Use IoT and sensors to harness the flow of information for situational understanding and decision-making advantage.

In envisioning Future Operational Environment possibilities, the Mad Scientist Initiative employs a number of techniques. We have found Crowdsourcing (i.e., gathering ideas, thoughts, and concepts from a wide variety of interested individuals) to be a particularly effective technique, as it diversifies our thinking and challenges conventional assumptions. To that end, we have published our latest, 2-page compendium of Potential Game Changers here, and we would like to hear your feedback regarding it. Please let us know your thoughts / observations by posting them in this blog post’s Comment box (found below, in the Leave a Reply section). Alternatively, you can submit them to us via email at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil. Thank you in advance for your contributions!

51. Black Swans and Pink Flamingos

The Mad Scientist Initiative recently facilitated a workshop with thought leaders from across the Department of Defense, the Intelligence Community, other Government agencies, industry, and academia to address the unknown unknowns (i.e., Black Swans) and the known knowns (i.e., Pink Flamingos), and to synthesize cross-agency thinking about possible disruptions to the Future Operational Environment.

Black Swans: In Nassim Nicholas Taleb’s original context, a black swan (an unknown unknown) is an event or situation which is unpredictable but has a major effect. For this conference, we used a looser definition, identifying possibilities that are not likely, but might have significant impacts on how we think about warfighting and security.

Pink Flamingos: Defined by Frank Hoffman, Pink Flamingos are the known knowns that are often discussed but ignored by Leaders trapped by organizational cultures and rigid bureaucratic decision-making structures. Peter Schwartz further describes Pink Flamingos as the “inevitable surprise.” Digital photography was a pink flamingo to Kodak.

At the workshop, attendees identified the following Black Swans:

Naturally Occurring Disaster: These events (e.g., a Carrington Event-class solar flare frying solid-state electronics, super volcano eruptions, earthquake swarms) would have an enormous impact on the Army and its ability to continue to operate, defend the nation, and support national recovery operations. While warning times have increased for many of these events, few measures exist to mitigate their devastating effects.

Virtual Nations: While the primacy of Westphalian borders has been challenged and the power of traditional nation-states has been waning over the last decade, some political scientists have assumed that supranational organizations and non-state actors would take their place. One potential black swan is the emergence of virtual nations due to the convergence of blockchain technologies, crypto-currency, and the ability to project power and legitimacy through the virtual world. Virtual nations could be organized based on ideologies, business models, or single interests. Virtual nations could supersede, supplement, or compete with traditional, physical nations. The Army of the future may not be prepared to interact and compete with virtual nations.

Competition in Venues Other than Warfare (Economic, Technological, Demographic, etc.) Achieving Primacy: In the near future, war in the traditional sense may be less prevalent, while competitions in other areas may be the driving forces behind national oppositions. How does the Army need to prepare for an eventuality where armed conflict is not as important as it once was?

Alternate Internet — “Alternet”: A distinct entity, separate from the general commercial internet, only accessible with specific corresponding hardware. This technology would allow for unregulated and unmonitored communication and commerce, potentially granting safe haven to criminal and terrorist activities.

At the workshop, attendees identified the following Pink Flamingos:

Safe at Home: Army installations are no longer the sanctuaries they once were, as adversaries will be able to attack Soldiers and families through social media and other cyberspace means. Additionally, installations no longer merely house, train, and deploy Soldiers; unmanned combat systems are controlled from home installations, a trend in virtual power that will increase in the future. The Army needs a plan to harden our installations and train Soldiers and families to be resilient to this eventuality.

Hypersonics: High speed (Mach 5 or higher) and highly maneuverable missiles or glide vehicles that can defeat our air defense systems. The speed of these weapons is unmatched and their maneuverability allows them to keep their targets unknown until only seconds before impact, negating current countermeasures.
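The “seconds before impact” problem follows directly from the speeds involved. As a rough sketch, using an assumed 100 km detection range (an illustrative figure, not a statement about any real sensor):

```python
# Illustrative warning time for a Mach 5 weapon. The 100 km detection
# range is an assumed figure for the sketch; real values depend on
# sensor geometry, flight profile, and altitude.
SPEED_OF_SOUND = 343.0      # m/s (sea level; high-altitude values differ)
mach = 5
detection_range_m = 100_000 # assumed detection range

velocity = mach * SPEED_OF_SOUND
warning_seconds = detection_range_m / velocity
print(f"~{warning_seconds:.0f} seconds from detection to impact")
# → ~58 seconds from detection to impact
```

Under a minute of warning leaves little room for human decision-making, and the weapon’s terminal maneuvering compresses the usable portion of that window even further.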

Generalized, Operationalized Artificial Intelligence (AI): Artificial intelligence is one of the most prominent pink flamingos throughout global media and governments. Narrow artificial intelligence is being addressed as rapidly as possible through ventures such as Project MAVEN. However, generalized and operationalized artificial intelligence – that can think, contextualize, and operate like a human – has the potential to disrupt not only operations, but also the military at its very core and foundation.

Space/Counterspace: Space is becoming increasingly congested, commercialized, and democratized. Disruption, degradation, and denial in space threatens to cripple multi-domain warfare operations. States and non-state actors alike are exploring options to counter one another, compete, and potentially even fight in space.

Quantum Sciences: Quantum science – communication, computing, and sensing – has the potential to solve some intractable but very specific problem sets. Quantum technology remains in its infancy. However, as the growth of qubits in quantum computing continues to expand, so does the potentiality of traditional encryption being utterly broken. Quantum sensing can allow for much more precise atomic clocks surpassing the precision timing of GPS, as well as quantum imaging that provides better results than classical imaging in a variety of wavelengths.

Bioweapons/Biohacking: The democratization of bio technology will mean that super-empowered individuals as well as nation states will have the ability to engineer weapons and hacks that can augment friendly human forces or target and degrade enemy human forces (e.g., targeted disease or genetic modifications).

Personalized Warfare: Warfare is now waged on a personal level, where adversaries can attack the bank accounts of Soldiers’ families, infiltrate their social media, or even target them specifically by their genetics. The Army needs to understand that the individual Soldier can be exploited in many different ways, often through information publicly provided or stolen.

Source: ommbeu / Fotolia
Deep Fakes/Information Warfare: Information warfare and “fake news” have played a prominent role in global politics over the last several years and could dominate the relationship between societies, governments, politicians, and militaries in the future operational environment. Information operations, thanks to big data and humanity’s ever-growing digital presence, are targeted at an extremely personal and specific level. One of the more concerning aspects of this is an artificial intelligence-based human image/voice synthesis technique known as deep fakes. Deep fakes can essentially put words in the mouths of prominent or trusted politicians and celebrities.

Multi-Domain Swarming: Swarming is often thought about in terms of unmanned aerial systems (UAS), but one significant pink flamingo is swarming taking place across multiple domains with self-organizing, autonomous aerial, ground, maritime (sub and surface), and even subterranean unmanned systems. U.S. defense systems on a linear modernization and development model will not be capable of dealing with the saturation and complexity issues arising from these multi-domain swarms.

Lethal Autonomy: An autonomous system with the ability to track, target, and fire without the supervision or authority of a human in/on the loop. The U.S. Army will have to examine its own policy regarding these issues, as well as the policies of our adversaries, who may be less deterred by ethical/policy concerns.

Tactical Nuclear Exchange: While strategic nuclear war and mutually assured destruction have been discussed and addressed ad nauseam, not enough attention has been given to the potential of a tactical nuclear exchange between state actors. One tactical nuclear attack, while not guaranteeing a nuclear holocaust, would bring about a myriad of problems for U.S. forces worldwide (e.g., the potential for escalation, fallout, contamination of water and air, and disaster response). Additionally, a high altitude nuclear burst’s electromagnetic pulse has the potential to fry solid state electronics across a wide area, with devastating results to the affected nation’s electrical grid, essential government services, and food distribution networks.

Leaders must anticipate these future possibilities in determining the character of future conflicts and in force design and equipping decisions. Using a mental model of black swans and pink flamingos provides a helpful framework for assessing the risks associated with these decisions.

For additional information on projected black swans for the next 20+ years, see the RAND Corporation’s Discontinuities and Distractions — Rethinking Security for the Year 2040.

50. Four Elements for Future Innovation

(Editor’s Note: Mad Scientist Laboratory is pleased to present a new post by returning guest blogger Dr. Richard Nabors addressing the four key practices of innovation. Dr. Nabors’ previous guest posts discussed how integrated sensor systems will provide Future Soldiers with the requisite situational awareness to fight and win in increasingly complex and advanced battlespaces, and how Augmented and Mixed Reality are the critical elements required for these integrated sensor systems to become truly operational and support Soldiers’ needs in complex environments.)

For the U.S. military to maintain its overmatch capabilities, innovation is an absolute necessity. As noted in The Operational Environment and the Changing Character of Future Warfare, our adversaries will continue to aggressively pursue rapid innovation in key technologies in order to challenge U.S. forces across multiple domains. Because of its vital necessity, U.S. innovation cannot be left solely to the development of serendipitous discoveries.

The Army has successfully generated innovative programs and transitioned them from the research community into military use. In the process, it has identified four key practices that can be used in the future development of innovative programs. These practices – identifying the need, the vision, the expertise, and the resources – are essential in preparing for warfare in the Future Operational Environment. The recently completed Third Generation Forward Looking Infrared (3rd Gen FLIR) program provides us with a contemporary use case regarding how each of these practices is key to the success of future innovations.

1. Identifying the NEED:
To increase the speed, precision, and accuracy of platform lethality, while at the same time increasing mission effectiveness and warfighter safety and survivability.

As the U.S. Army Training and Doctrine Command (TRADOC) noted in its Advanced Engagement Battlespace assessment, future Advanced Engagements will be…
• compressed in time, as the speed of weapon delivery and their associated effects accelerate enormously;
• extended in space, in many cases to a global extent, via precision long-range strike and interconnectedness, particularly in the information environment;
• far more lethal, by virtue of ubiquitous sensors, proliferated precision, high kinetic energy weapons, and advanced area munitions;
• routinely interconnected, and contested, across the multiple domains of air, land, sea, space, and cyber; and
• interactive across the multiple dimensions of conflict, not only across every domain in the physical dimension, but also the cognitive dimension of information operations, and even the moral dimension of belief and values.

Identifying the NEED within the context of these future Advanced Engagement characteristics is critical to the success of future innovations.

The first-generation FLIR systems gave a limited ability to detect objects on the battlefield at night. They were large, slow, and provided low-resolution, short-range images. The need was for greater speed, precision, and range in the targeting process to unlock the full potential of infrared imaging. Third generation FLIR uses multiband infrared imaging sensors combined with multiple fields of view which are integrated with computer software to automatically enhance images in real-time. Sensors can be used across multiple platforms and missions, allowing optimization of equipment for battlefield conditions, greatly enhancing mission effectiveness and survivability, and providing significant cost savings.

Source: John-Stone-Art
2. Identifying the VISION:
To look beyond the need and what is possible to what could be possible.

As we look forward into the Future Operational Environment, we must address those revolutionary technologies that, when developed and fielded, will provide a decisive edge over adversaries not similarly equipped. These potential Game Changers include:
• Laser and Radio Frequency Weapons – Scalable lethal and non-lethal directed energy weapons can counter aircraft, UAS, missiles, projectiles, sensors, and swarms.
• Swarms – Leverage autonomy, robotics, and artificial intelligence to generate “global behavior with local rules” for multiple entities – either homogeneous or heterogeneous teams.
• Rail Guns and Enhanced Directed Kinetic Energy Weapons (EDKEW) – Non explosive electromagnetic projectile launchers provide high velocity/high energy weapons.
• Energetics – Provides increased accuracy and muzzle energy.
• Synthetic Biology – Engineering and modification of biological entities has potential weaponization.
• Internet of Things – Linked internet “things” create opportunity and vulnerability. Great potential benefits already found in developing U.S. systems also create a vulnerability.
• Power – Future effectiveness depends on renewable sources and reduced consumption. Small nuclear reactors are potentially a cost-effective source of stable power.

Understanding these Future Operational Environment Game Changers is central to identifying the VISION and looking beyond the need to what could be possible.

The 3rd Gen FLIR program struggled early in its development to identify requirements necessary to sustain a successful program. Without the user community’s understanding of a vision of what could be possible, requirements were based around the perceived limitations of what technology could provide. To overcome this, the research community developed a comprehensive strategy for educational outreach to the Army’s requirement developers, military officers, and industry on the full potential of what 3rd Gen FLIR could achieve. This campaign highlighted not only the recognized need, but also a vision for what was possible, and served as the catalyst to bring the entire community together.

3. Identifying the EXPERTISE:
To gather expertise from all possible sources into a comprehensive solution.

Human creativity is the most transformative force in the world; people compound the rate of innovation and technology development. This expertise is fueling the convergence of technologies that is already leading to revolutionary achievements with respect to sensing, data acquisition and retrieval, and computer processing hardware.

Identifying the EXPERTISE leads to the exponential convergence and innovation that will afford strategic advantage to those who recognize and leverage them.

The expertise required for 3rd Gen FLIR’s success came from the integration of more than 16 significant research and development projects across multiple organizations: Small Business Innovation Research programs; applied research funding, partnering in-house expertise with external communities; Manufacturing Technology (ManTech) initiatives, working with manufacturers to develop the technology and long-term manufacturing capabilities; and advanced technology development funding with traditional large defense contractors. The talented workforce of the Army research community strategically aligned these individual activities and worked with them to provide a comprehensive, interconnected final solution.

4. Identifying the RESOURCES:
To consistently invest in innovative technology by partnering with others to create multiple funding sources.

The 2017 National Security Strategy introduced the National Security Innovation Base as a critical component of its vision of American security. In order to meet the challenges of the Future Operational Environment, the Department of Defense and other agencies must establish strategic partnerships with U.S. companies to help align private sector Research and Development (R&D) resources to priority national security applications in order to nurture innovation.

The development of 3rd Gen FLIR took many years of appropriate, consistent investments into innovations and technology breakthroughs. Obtaining the support of industry and leveraging their internal R&D investments required the Army to build trust in the overall program. By creating partnerships with others, such as the U.S. Army Communications-Electronics Research, Development and Engineering Center (CERDEC) and ManTech, 3rd Gen FLIR was able to integrate multiple funding sources to ensure a secure resource foundation.

The successful 3rd Gen FLIR program is a prototype of the implementation of an innovative program, which transitions good ideas into actual capabilities. It exemplifies how identifying the need, the vision, the expertise and the resources can create an environment where innovation thrives, equipping warriors with the best technology in the world. As the Army looks to increase its exploration of innovative technology development for the future, these examples of past successes can serve as models to build on moving forward.

See our Prototype Warfare post to learn more about other contemporary innovation successes that are helping the U.S. maintain its competitive advantage and win in an increasingly contested Operational Environment.

Dr. Richard Nabors is Associate Director for Strategic Planning and Deputy Director, Operations Division, U.S. Army Research, Development and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC), Night Vision and Electronic Sensors Directorate.

49. “The Queue”

(Editor’s Note: Beginning today, the Mad Scientist Laboratory will publish a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we will address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!)

1. Army of None: Autonomous Weapons and the Future of War, by Paul Scharre, Senior Fellow and Director of the Technology and National Security Program, Center for a New American Security.

One of our favorite Mad Scientists, Paul Scharre, has authored a must-read for all military Leaders. This book will help Leaders understand the definitions of robotic and autonomous weapons; how they are proliferating across states, non-state actors, and super-empowered individuals (his chapter on Garage Bots makes it clear this proliferation is not limited to states); and the ethical considerations that come up at every Mad Scientist Conference. During these Conferences, we have discussed the idea of algorithm-versus-algorithm warfare and what role human judgement plays in this version of future combat. Paul’s chapters on flash war really challenge our ideas of how a human operates in the loop, and his analogies to the financial markets are helpful for developing the questions needed to explore future possibilities and develop policies for dealing with warfare at machine speed.

Source: Rosoboronexport via YouTube
2. “Convergence on retaining human control of weapons systems,” in Campaign to Stop Killer Robots, 13 April 2018.

April 2018 marked the fifth anniversary of the Campaign to Stop Killer Robots. Earlier this month, 82 countries and numerous NGOs convened at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland, where many stressed the need to retain human control over weapons systems and the use of force. While the majority in attendance proposed starting negotiations this November towards a legally binding protocol addressing fully autonomous weapons, five key states rejected moving forward on new international law – France, Israel, Russia, the United Kingdom, and the United States. Mad Scientist notes that the convergence of a number of emerging technologies (synthetic prototyping, additive manufacturing, advanced modeling and simulations, software-defined everything, advanced materials) is advancing both the feasibility and democratization of prototype warfare, enabling and improving the engineering of autonomous weapons by non-state actors and super-empowered individuals alike. The genie is out of the bottle – with the advent of the Hyperactive Battlefield, advanced engagements will collapse the decision-action cycle to mere milliseconds, granting a decisive edge to the side with the greater degree of autonomous decision-action.

Source: The Stack
3. “China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems,” by Elsa Kania, Adjunct Fellow with the Technology and National Security Program, Center for a New American Security, in Lawfare, 17 Apr 18.

Mad Scientist Elsa Kania addresses the apparent contradiction between the People’s Republic of China’s diplomatic commitment to limit the use of fully autonomous lethal weapons systems and the PLA’s active pursuit of AI dominance on the battlefield. The PRC’s decision on lethal autonomy, and how it defines the role of human judgement in lethal operations, will have tactical, operational, and strategic implications. In TRADOC’s Changing Character of Warfare assessment, we addressed the idea of an asymmetry in ethics, where the differing ethical choices non-state and state adversaries make on the integration of emerging technologies could have real battlefield overmatch implications. This is a clear pink flamingo, where we know the risks but struggle to address the threat. It is also an area where technological surprise is likely, as systems could have the ability to move from human-in-the-loop mode to fully autonomous with the flip of a switch.

Source: HBO.com
4. “Maeve’s Dilemma in Westworld: What Does It Mean to be Free?,” by Marco Antonio Azevedo and Ana Azevedo, in Institute of Art and Ideas, 12 Apr 18. [Note: Best viewed on your personal device as access to this site may be limited by Government networks]

While this article focuses primarily on a higher-level philosophical interpretation of human vs. machine (or artificial intelligence, being, etc.), the core arguments and discussion remain relevant to an Army that is looking to increase its reliance on artificial intelligence and robotics. Technological advancements in these areas continue to trend toward modeling humans (both in form and in mind). However, the closer we get to making this a reality, the closer we get to confronting questions about consciousness and artificial humanity. Are we prepared to face these questions earnestly? Do we want an artificial entity that is, essentially, human? What do we do when that breakthrough occurs? Does the distinction between biological and synthetic matter if the being “achieves” personhood? For additional insights on this topic, watch Linda MacDonald Glenn‘s Ethics and Law around the Co-Evolution of Humans and AI presentation from the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University, 25-26 Jul 17.

5. Do You Trust This Computer?, directed by Chris Paine, Papercut Films, 2018.

The Army, and society as a whole, continues to offload certain tasks to, and receive information from, artificial intelligence sources. Future Army Leaders will be heavily influenced by AI that processes and distributes the information used for decision making. But how much trust should we put in the information we get? Is it safe to be so reliant? What is the correct ratio of human to machine contribution in decision-making? Army Leaders need to be prepared to make AI one tool of many; understand its value; and know how to interpret its information, when to question its output, and how to apply appropriate context. Elon Musk has shown his support for this documentary and tweeted about its importance.

6. Ready Player One, directed by Steven Spielberg, Amblin Entertainment, 2018.

Adapted from the novel of the same name, this film visualizes a future world where most of society is consumed by a massive online virtual reality “game” known as the OASIS. As society transitions from the physical to the virtual (texting, email, Skype, MMORPGs, Amazon, etc.), large groups of people will become less reliant on the physical world’s governmental and economic systems that have been established for centuries. As virtual money begins to have real value, physical money will begin to lose value. If people can get many of their goods and services through a virtual world, they will become less reliant on the physical one. Correspondingly, physical world social constructs will have less control over people who still inhabit it but spend increasing amounts of time interacting in the virtual world. This has huge implications for the future geo-political landscape, as many varied and geographically diverse groups of people will begin congregating and forming virtual allegiances across pre-established, but increasingly irrelevant, physical world geographic borders. This will dilute the effectiveness, necessity, and control of the nation-state and transfer that power to the company (or companies) facilitating the virtual environment.

Source: XO, “SoftEcologies,” suckerPUNCH
7. “US Army could enlist robots inspired by invertebrates,” by Bonnie Burton, in c/net, 22 Apr 18.

As if Boston Dynamics’ SpotMini isn’t creepy enough, the U.S. Army Research Laboratory (ARL) and the University of Minnesota are developing a flexible, soft robot inspired by squid and other invertebrates that Soldiers could create on demand using 3-D printers on the battlefield. Too often, media visualizations have conditioned us to think of robots in anthropomorphic terms (with corresponding limitations). This and other breakthroughs in “soft,” polymorphic, printable robotics may provide Soldiers in the Future Operational Environment with hitherto unimagined on-demand, tailorable autonomous systems that will assist operations in the tight confines of complex, congested, and non-permissive environments (e.g., dense urban and subterranean). Soft robotics may also prove more resilient in arduous conditions. This development changes the paradigm for how robotics are imagined, in both design and application.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

For additional insights into the Mad Scientist Initiative and how we continually explore the future through collaborative partnerships and continuous dialogue with academia, industry, and government, check out the Spy Museum’s SPYCAST podcast.