122. The Guy Behind the Guy: AI as the Indispensable Marshal

[Editor’s Note: Mad Scientist Laboratory is pleased to present today’s guest blog post by Mr. Brady Moore and Mr. Chris Sauceda, addressing how Artificial Intelligence (AI) systems and entities conducting machine speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. This Augmented Intelligence will enable commanders to focus on the battle with coup d’œil, or the “stroke of an eye,” maintaining situational awareness on future fights at machine speed, without losing precious time crunching data.]

Jon Favreau’s Mike (left) is the “guy behind the guy” to Vince Vaughn’s Trent (right) in Swingers, directed by Doug Liman, Miramax, 1996 / Source: Pinterest

In the 1996 film Swingers, the characters Trent (played by Vince Vaughn) and Mike (played by Jon Favreau) star as a couple of young guys trying to make it in Hollywood. On a trip to Las Vegas, Trent introduces Mike as “the guy behind the guy” – implying that Mike’s value is that he has the know-how to get things done, acts quickly, and therefore is indispensable to a leading figure. Yes, I’m talking about Artificial Intelligence for Decision-Making on the future battlefield – and “the guy behind the guy” sums up how AI will provide a decisive advantage in Multi-Domain Operations (MDO).

Some of the problems commanders will have on future battlefields will be the same ones they have today and the same ones they had 200 years ago: the friction and fog of war. The rise of information availability and connectivity brings today’s challenges – of which most of us are aware. Advanced adversary technologies will bring future challenges for intelligence gathering, command, communication, mobility, and dispersion. Future commanders and their staffs must be able to deal with both perennial and novel challenges faster than their adversaries, in disadvantageous circumstances we can’t control. “The guy behind the guy” will need to be conversant in vast amounts of information and quick to act.

Louis-Alexandre Berthier was a French Marshal and Vice-Constable of the Empire, and Chief of Staff under Napoleon / oil portrait by Jacques Augustin Catherine Pajou (1766–1828), Source: Wikimedia Commons

In western warfare, the original “guy behind the guy” wasn’t Mike – it was this stunning figure. Marshal Louis-Alexandre Berthier was Napoleon Bonaparte’s Chief of Staff from the start of his first Italian campaign in 1796 until his first abdication in 1814. Berthier was famous for rarely sleeping while on campaign, and Paul Thiebault said of him in 1796:

“Quite apart from his specialist training as a topographical engineer, he had knowledge and experience of staff work and furthermore a remarkable grasp of everything to do with war. He had also, above all else, the gift of writing a complete order and transmitting it with the utmost speed and clarity…No one could have better suited General Bonaparte, who wanted a man capable of relieving him of all detailed work, to understand him instantly and to foresee what he would need.”

Bonaparte’s military record, his genius for war, and his skill as a leader are undisputed, but Berthier so enhanced his capabilities that even Napoleon himself said of Berthier’s absence at Waterloo, “If Berthier had been there, I would not have met this misfortune.”

Augmented Intelligence, where intelligent systems enhance human capabilities (rather than systems that aspire to replicate the full scope of human intelligence), has the potential to act as a digital Chief of Staff to a battlefield commander. Just like Berthier, AI for decision-making would free up leaders to clearly consider more factors and make better decisions – allowing them to command more, and research and analyze less. AI should allow humans to do what they do best in combat – be imaginative, compel others, and act with an inherent intuition, while the AI tool finds, processes, and presents the needed information in time.

So Augmented Intelligence would filter and prioritize only the most relevant and timely information to help manage today’s information overload, and it would help communicate intent quickly – but what about yesterday’s friction and fog, and tomorrow’s adversary technology? The future battlefield seems like one where U.S. commanders will be starved of the kind of Intelligence, Surveillance, and Reconnaissance (ISR) and communication we are so used to today – a battlefield with a contested Electromagnetic Spectrum (EMS) and active cyber effects, whether known or unknown. How can commanders and their staffs begin to overcome challenges they have not yet faced in war?

Average is Over: Powering America Beyond the Age of the Great Stagnation, by Tyler Cowen / Dutton, The Penguin Group, published in 2013

In his 2013 book Average is Over, economist Tyler Cowen examines the way freestyle chess players (who are free to use computers when playing the game) use AI tools to compete and win, and makes observations that are directly applicable to the future of warfare at every level. He finds that competitors must play against foes who have AI tools of their own, and that AI tools make chess-move decisions that people can recognize and counter. The most successful freestyle players combine their own knowledge of the game with a deliberate choice of when, and which kind of, AI to use at different points in a game. Their opponents must then consider not only which AI is being used against them, but also the human operator’s overall strategy. This pairing of human inclination and intuition with AI tools will likely produce a powerful equilibrium of human and machine perception, analysis, and, ultimately, enhanced complex decision-making.
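To make the human-plus-machine pattern Cowen describes a little more concrete, here is a minimal, purely illustrative Python sketch of that decision loop: a human operator decides, position by position, which of several advisory engines (if any) to consult. The engine names, scoring fields, and phase heuristic are all invented for illustration; this is not an interface to any real chess engine.

```python
# Illustrative sketch only: a toy "freestyle" loop in which a human operator
# chooses which hypothetical advisory engine to trust at a given moment.
def aggressive_engine(position):
    """Hypothetical engine tuned for sharp, tactical play."""
    return max(position["candidate_moves"], key=lambda m: m["tactical_score"])

def positional_engine(position):
    """Hypothetical engine tuned for quiet, structural play."""
    return max(position["candidate_moves"], key=lambda m: m["positional_score"])

def human_selects_engine(position, own_judgment):
    """The human operator decides which tool (if any) to consult right now."""
    if own_judgment == "i_know_this_line":
        return None                  # play from human knowledge alone
    if position["move_number"] < 15:
        return positional_engine     # calm opening: take structural advice
    return aggressive_engine         # complications: take tactical advice

def choose_move(position, own_judgment, human_choice):
    engine = human_selects_engine(position, own_judgment)
    return human_choice if engine is None else engine(position)

# Example: in a sharp middlegame the operator defers to the tactical engine.
position = {
    "move_number": 27,
    "candidate_moves": [
        {"move": "Nxf7", "tactical_score": 0.9, "positional_score": 0.2},
        {"move": "Rad1", "tactical_score": 0.3, "positional_score": 0.7},
    ],
}
print(choose_move(position, own_judgment="unclear", human_choice={"move": "h3"}))
```

The point of the sketch is only that the human supplies the strategy for when and how to use each tool; the tools supply speed and breadth within the slice of the problem they are handed.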

With a well-trained and versatile “guy behind the guy,” a commander and staff could employ different aspects of Augmented Intelligence at different times, based on need or appropriateness. Consider a company commander in a dense urban fight, equipped with an appropriate AI tool – a “guy behind the guy” that helps him make sense of the battlefield. What could that commander accomplish with his company? He could employ the tool to notice things humans don’t – or at least to notice them faster and alert him. Changes in historic traffic patterns or electronic signals in an area could indicate an upcoming attack or a fleeing enemy, or the system could tell the commander that just a little more specific data would help establish a pattern where enemy data is scarce. And if the commander were presented with the very complex and large problems that characterize modern dense urban combat, the system could help shrink and sequence those problems to make them more solvable – for instance, by finding a good subset of information to experiment with and test a hypothesis before trying a solution in the real world – risking bandwidth instead of blood.
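A minimal sketch of how such a baseline-deviation alert might work, assuming nothing about any actual fielded system: the tool compares a current observation against the historic pattern for that area and flags large departures. Everything here (the data, the simple z-score test, the threshold) is hypothetical and for illustration only.

```python
# Purely illustrative: flag a reading that departs sharply from the historic
# pattern for an area. A real system would fuse many richer sensor inputs.
from statistics import mean, stdev

def flag_anomaly(history, current, threshold=3.0):
    """Return True if `current` deviates from the historic baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return current != baseline
    return abs(current - baseline) / spread > threshold

# Hypothetical hourly counts of vehicle movements past a checkpoint.
historic_traffic = [42, 38, 45, 40, 44, 39, 41, 43]
print(flag_anomaly(historic_traffic, current=12))   # True: sudden drop -> alert the commander
print(flag_anomaly(historic_traffic, current=41))   # False: within the normal pattern
```

The value to the commander is not the arithmetic, which is trivial, but that the tool runs it continuously across every feed while the humans fight.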

The U.S. strategy for MDO has already identified the critical need to observe, orient, decide, and act faster than our adversaries. AI tools that hold the necessary information, present it clearly, and act quickly will be indispensable to leaders on the battlefield. An AI “guy behind the guy” – continuously sizing up the situation, finding the right information, and enabling better, faster decisions in difficult situations – is how Augmented Intelligence will best serve leaders in combat and provide battlefield advantage.

If you enjoyed this post, please also:

… watch Juliane Gallina‘s Arsenal of the Mind presentation at the Mad Scientist Robotics, AI, & Autonomy Visioning Multi Domain Battle in 2030-2050 Conference at Georgia Tech Research Institute, Atlanta, Georgia, on 7-8 March 2017

… and learn more about potential AI battlefield applications in our Crowdsourcing the Future of the AI Battlefield information paper.

Brady Moore is a Senior Enterprise Client Executive at Neudesic in New York City. A graduate of The Citadel, he is a former U.S. Army Infantry and Special Forces officer with service as a leader, planner, and advisor across Iraq, Afghanistan, Africa, and South Asia. After leaving the Army in 2011, he obtained an MBA at Penn State and worked as an IBM Cognitive Solutions Leader covering analytics, AI, and Machine Learning in National Security. He is the Junior Vice Commander of VFW Post 2906 in Pompton Lakes, NJ, and Cofounder of the Special Forces Association Chapter 58 in New York City. He also works with Elite Meet as often as he can.

Chris Sauceda is an account manager within the U.S. Army Defense and Intel IBM account, covering Command and Control, Cyber, and Advanced Analytics/Artificial Intelligence. Chris served on active duty and deployed in support of Operation Iraqi Freedom, and has been in the Defense contracting business for over 13 years. Focused on driving cutting-edge technologies to the warfighter, he also currently serves as a Signal Officer in the Texas Military Department.

120. Autonomous Robotic Systems in the Russian Ground Forces

[Editor’s Note: Mad Scientist Laboratory welcomes back returning guest blogger and proclaimed Mad Scientist Mr. Samuel Bendett with today’s post, addressing Russia’s commitment to mass produce independent ground combat robotic systems. Simon Briggs, professor of interdisciplinary arts at the University of Edinburgh, predicts that “in 2030 AI will be in routine use to fight wars and kill people, far more effectively than humans can currently kill.”  Mr. Bendett’s post below addresses the status of current operationally tested and fielded Russian Unmanned Ground Vehicle (UGV) capabilities, and their pivot to acquire systems able to “independently recognize targets, use weapons, and interact in groups and swarms.” (Note:  Some of the embedded links in this post are best accessed using non-DoD networks.)]

Russian Minister of Defense Sergei Shoigu / Source: Wikimedia Commons

Over the past several years, the Russian military has invested heavily in the design, production, and testing of unmanned combat systems. In March 2018, Russian Defense Minister Sergei Shoigu said that mass production of combat robots for the Russian army could begin as early as that year. Now, the Ministry of Defense (MOD) is moving ahead with creating plans for such systems to act independently on the battlefield.

According to the Russian state media (TASS), Russian military robotic complexes (RBCs) will be able to independently recognize targets, use weapons, and interact in groups and swarms. These plans were laid out in an article by the staff of the 3rd Central Scientific Research Institute of the Russian Federation’s MOD.

Uran-6 Airborne Countermine System with flail / Source: Russian Federation MOD

Russia has already tested several Unmanned Ground Vehicles (UGVs) in combat. Its Uran-6, Scarab, and Sphera demining UGVs were rated well by Russian engineering forces, and there are plans to start acquiring such vehicles. However, these systems were designed to have their operators close by. When it came to a UGV originally built to fight at a distance from its operator, things got more complicated.

Uran-9 engaging targets with its 30mm 2A72 autocannon on a test range.  Operational tests in Syria proved less successful.  / Source:  YouTube

Russia’s Uran-9 combat UGV experienced a large number of failures when tested in Syria, among them failures in transportation, communications, firing, and situational awareness. The lessons from the Uran-9 tests supposedly prompted the Russian military to consider placing more emphasis on using such UGVs as one-off attack vehicles against adversary hard points and stationary targets.

Russian ground combat forces conducting urban operations in Syria / Source: Wikimedia

Nonetheless, the aforementioned TASS article analyzes the general requirements for unmanned military systems employed by the Russian ground forces. Among them is the ability to perform tasks in different combat conditions, day and night, under enemy fire, amid electronic and information counteraction, and in environments contaminated by radiation or chemical agents or subject to electromagnetic attack – as well as requirements such as modularity and multifunctionality. The article also points out “the [systems’] ability to independently perform tasks in conditions of ambiguity” – implying the use of Artificial Intelligence.

To achieve these requirements, the creation of an “intelligent decision-making system” is proposed, which will also supervise the use of weapons. “The way out of this situation is the intensification of research on increasing the autonomy of the RBCs and the introduction of intelligent decision-making systems at the control stages, including group, autonomous movement and use of equipment for its intended purpose, including weapons, into military robotics,” the article says.

An example of the complex, ambiguous environments that will challenge future Russian RBCs: Russian troops in Aleppo, Syria / Source: Wikimedia Commons via article in the University of Melbourne’s Pursuit, “Why is Russia Still Supporting Syria?”

The TASS article states that in the near future, the MOD is planning to initiate work aimed at providing technical support for solving this problem set. This research will include domestic laser scanning devices for geographical positioning, the development of methods and equipment for determining the permeability of the soil on which the UGV operates, the development of methods for controlling the military robot in “unstable communications,” and the development of methods for analyzing combat environments such as recognizing scenes, images, and targets.

Successfully employing UGVs in combat requires complicated systems, something the aforementioned initiatives will seek to address. This work will probably draw on Russia’s Syrian experience, as well as on current projects and upgrades to Moscow’s growing fleet of combat UGVs. On 24 January 2018, the Kalashnikov Design Bureau, which oversees the completion of Uran-9 work, confirmed that the UGV had been accepted into military service. Although few details were given, the statement did note that the vehicle will be further “refined” based on lessons learned during its Syria deployment, and that the Uran-9 presents “good scientific and technical groundwork for further products.” The extent of the upgrades was not given; however, the numerous failures in the Syrian trials imply that there is a lot of work ahead for this project. The statement also indicates that the Uran-9 may serve as a test-bed for further UGV development – an interesting point considering the country’s already diverse collection of combat UGVs.

As reported in DefenseOne, Russian Col. Oleg Pomazuev stated that the Nerekhta UGV “outperformed” manned systems in recent exercises / Source: DefenseOne and Sergey Ptichkin / RG

Today, the Russian military is testing and evaluating several systems, such as the Nerekhta and Soratnik. The latter was also supposedly tested in “near-combat” conditions, presumably in Syria or elsewhere. The MOD has been testing the smaller Platforma-M and the large Vikhr combat UGVs, along with other unmanned vehicles. Yet the defining characteristic of these machines so far has been that they were all remotely operated by soldiers, often in close proximity to the machine itself. Endowing these UGVs with more independent decision-making in the “fog of war” via an intelligent command and control system may exponentially increase their combat effectiveness – assuming such systems can function as planned.

If you enjoyed this post, please also:

Read Mr. Bendett’s previous post, Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward

… and watch Zvezda Broadcasting‘s video, showing a Vikhr unmanned, tele-operated BMP-3 maneuvering and shooting its 7.62mm MG, 30mm cannon, and automatic grenade launcher on a test range.

Automated lethality is but one of the many Future Operational Environment trends that the U.S. Army’s Mad Scientist Initiative is tracking. Mad Scientist seeks to crowdsource your visions of future combat with our Science Fiction Writing Contest 2019. Our deadline for submission is 1 APRIL 2019, so please review the contest details and associated release form here, get those creative writing juices flowing, and send us your visions of combat in 2030!  Selected submissions may be chosen for publication or a possible future speaking opportunity.

Samuel Bendett is a Researcher at CNA and a Fellow in Russia Studies at the American Foreign Policy Council. He is also a proud Mad Scientist.

117. Old Human vs. New Human

[Editor’s Note: On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC. Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces. One finding from this conference is that tomorrow’s Soldiers will learn differently from earlier generations, given the technological innovations that will have surrounded them from birth through their high school graduation.  To effectively engage these “New Humans” and prepare them for combat on future battlefields, the Army must discard old paradigms of learning that no longer resonate (e.g., those desiccated lectures delivered via interminable PowerPoint presentations) and embrace more effective means of instruction.]

The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before. Marc Prensky, the educational writer and speaker who coined the term digital native, asserts that this “New Human” will stand in stark contrast to the “Old Human” in the ways they assimilate information and approach learning.1 Humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, but each of these elements is generally external to the individual. By 2032, these technologies likely will have converged and will be embedded in or integrated into the individual, with connectivity literally at the tips of their fingers. The challenge for the Army will be to recognize the implications of this momentous shift and to alter its learning methodologies, approach to training, and educational paradigm to account for these digital natives.

These New Humans will be accustomed to using artificial intelligence (AI) to augment and supplement decision-making in their everyday lives. AI will be responsible for keeping them on schedule, suggesting options for what and when to eat, delivering relevant news and information, and serving as an on-demand embedded expert. The Old Human learned to use these technologies and adapted their learning style to accommodate them; the New Human will be born into them, and their learning style will be a result of them. In 2018, 94% of Americans aged 18-29 owned some kind of smartphone.2 Compare that to 73% ownership for ages 50-64 and 46% for ages 65 and above, and it becomes clear that there is a strong disconnect between the age groups in how they employ technology. Both of the leading smartphone software platforms include a built-in artificially intelligent digital assistant, and at the end of 2017, nearly half of all U.S. adults used a digital voice assistant in some way.3 These trends point to an even greater technological wedge between New Humans and Old Humans in the future.


New Humans will be information assimilators, where Old Humans were information gatherers. The techniques to acquire and gather information have evolved swiftly since the advent of the printing press, from user-intensive methods such as manual research, to a reduction in user involvement through Internet search engines. Now, narrow AI using natural language processing is transitioning to AI-enabled predictive learning. Through these AI-enabled virtual entities, New Humans will carry targeted, predictive, and continuous learning assistants with them. These assistants will observe, listen, and process everything of relevance to the learner and then deliver them information as necessary.

There is an abundance of research on the stark contrast between the three generations currently in the workforce: Baby Boomers, Generation X, and Millennials.4, 5 There will be similarly fundamental differences between Old Humans and New Humans and their learning styles. The New Human likely will value experiential learning over traditional classroom learning.6 The convergence of mixed reality and advanced, high-fidelity modeling and simulation will provide New Humans with immersive, experiential learning. For example, Soldiers learning military history and battlefield tactics will be able to experience them immersively, observing how each facet of the battlefield affects the whole in real time, as opposed to reading about it sequentially. Soldiers in training could stand next to an avatar of General Patton and experience him explaining his command decisions firsthand.

There is an opportunity for the Army to adapt its education and training to these growing differences. The Army could – and eventually will need to – recruit, train, and develop New Humans by altering its current structure and recruitment programs. It will become imperative to conduct training with new tools, materials, and technologies that allow Soldiers to become information assimilators. Additionally, incorporating experiential learning techniques will better engage Soldiers in their learning. There is an opportunity for the Army to pave the way and train its Soldiers with cutting-edge technology rather than belatedly trying to catch up to what is publicly available.

Evolution in Learning Technologies

If you enjoyed this post, please also watch Elliott Masie‘s video presentation on Dynamic Readiness and Marc Prensky‘s presentation on The Future of Learning from the Mad Scientist Learning in 2050 Conference

… see the following related blog posts:

… and read The Mad Scientist Learning in 2050 Final Report.


1 Prensky, Marc, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018

2 http://www.pewinternet.org/fact-sheet/mobile/

3 http://www.pewresearch.org/fact-tank/2017/12/12/nearly-half-of-americans-use-digital-voice-assistants-mostly-on-their-smartphones/

4 https://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Generational-issues-in-the-workplace.aspx

5 https://blogs.uco.edu/customizededucation/2018/01/16/generational-differences-in-the-workplace/

6 https://www.apa.org/monitor/2010/03/undergraduates.aspx

116. Three Futurist Urban Scenarios

[Editor’s Note: Mad Scientist welcomes back returning guest blogger Dr. Nir Buras with today’s post.  We’ve found crowdsourcing (i.e., the gathering of ideas, thoughts, and concepts from a widespread variety of interested individuals) to be a very effective tool in enabling us to diversify our thoughts and challenge our assumptions.  Dr. Buras’ post takes the results from one such crowdsourcing exercise and extrapolates three future urban scenarios.  Given The Army Vision‘s clarion call to “Focus training on high-intensity conflict, with emphasis on operating in dense urban terrain,” our readers would do well to consider how the Army would operate in each of Dr. Buras’ posited future scenarios…]

The challenges of the 21st century have been forecast and are well known. In many ways we are already experiencing the future now. But predictions are hard to validate. A way around that is to turn to slightly older predictions to illuminate the magnitude of the issues and the reality of their propositions.1 Futurists William E. Halal and Michael Marien’s predictions of 2011 have aged enough to be useful. Using an improved version of the Delphi method, they iteratively built consensus among participants: Halal and Marien balanced the individual judgments of over sixty well-qualified experts and thinkers, representing a range of technologies, against facilitated feedback from the rest of the group. They translated this implicit or tacit know-how into qualified, quantitative, empirical predictions.2
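For readers unfamiliar with the mechanism, here is a toy sketch of the kind of Delphi-style iteration described above (and in note 2), under the simplifying assumption that each expert nudges their estimate toward the group median between rounds. The numbers, the update rule, and the number of rounds are invented for illustration; real Delphi studies use structured questionnaires and qualitative feedback rather than a fixed formula.

```python
# Toy illustration of Delphi-style consensus building: each round, every expert
# sees the previous round's group median and moves their estimate toward it.
from statistics import median

def delphi_rounds(estimates, pull=0.5, rounds=4):
    """Return the history of estimates across facilitated feedback rounds."""
    history = [list(estimates)]
    for _ in range(rounds):
        anchor = median(history[-1])
        history.append([e + pull * (anchor - e) for e in history[-1]])
    return history

# Hypothetical initial expert forecasts (e.g., percent probability of a scenario).
initial = [10, 25, 40, 60, 85]
for i, round_estimates in enumerate(delphi_rounds(initial)):
    print(f"Round {i}: {[round(e, 1) for e in round_estimates]}")
```

Running the sketch shows the spread of opinions narrowing round by round, which is the behavior Halal and Marien relied on to turn individual expert judgment into collective forecasts.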

From their research we can transpose three future urban scenarios:  The High-Tech City, The Feral City, and Muddling Through.

The High-Tech City

The High-Tech City scenario is based primarily on futurist Jim Dator’s high-tech predictions. It envisions the continued growth of a technologically progressive, upwardly mobile, internationally dominant, science-guided, rich, leisure-filled, abundant, and liberal society. Widespread understanding of what works largely avoids energy shortages, climate change, and global conflict.3

The high-tech, digital megacity is envisaged as a Dubai on steroids. It is hyper-connected and energy-efficient, powered by self-sustaining, renewable resources and nuclear energy.4


Connected by subways and skyways, with skyscraping vertical gardens, the cities are ringed by elaborately managed green spaces and ecosystems. The city’s 50- to 150-story megastructures, “cities-in-buildings,” incorporate apartments, offices, schools and grocery stores, hospitals and shopping centers, sports facilities and cultural centers, gardens, and running tracks. Alongside them rise vertical farms housing animals and crops. The rooftop garden of the 2015 film High-Rise depicts how aerial terraces provide a sense of suburban living in the high-tech city.5

On land, zero-emission driverless traffic zips about on intelligent highways. High-speed trains glide silently by. After dark, spider bots and snake drones automatically inspect and repair buildings and infrastructure.6

In the air, helicopters, drones, and flying cars zoom around. Small drones mimicking insects and birds, and programmable nano-chips, some as small as “smart” dust, swarm over the city and assemble into any object or shape on command. To avoid surface traffic, inconvenience, and crime, wealthier residents fly everywhere.7

Dominated by centralized government and private sector bureaucracies wielding AI, these self-constructing robotic “cyburgs” have massive technology, robotics, and nanotechnology embedded in every aspect of their life, powered by mammoth fusion energy plants.8

Every unit of every component is embedded with at least one flea-size chip. Connected into a single worldwide digital network, trillions of sensors monitor countless parameters for the city and everything in it. The ruling AI, commanded directly by individual minds, autonomously creates, edits, and implements software, simultaneously processing feedback from a global network of sensors.9


Metropolis by Fritz Lang was the first film to show a city of the future as a modernist dystopia. / Produced by Ufa.

The High-Tech City is not a new concept. It goes back to Jules Verne, H. G. Wells, and Fritz Lang, who most inspired its urban look in the 1927 film Metropolis. The extrapolated growth of technology has long been the basis for predictions. But professional futurists surprisingly agree that a High-Tech Jetsons scenario has only a 0%-5% probability of being realized.10

Poignantly, the early predictors warned that the stressful lifestyle of the High-Tech City contradicts its promise of freedom from drudgery. Moreover, the High-Tech megacities’ appetite for minerals may lay waste to whole ecosystems. Much of the earth may become a feral wilderness. Massive, centralized AI internet clouds and distribution systems give a false sense of cultural robustness. People become redundant and democracy meaningless. The world may fail to react to accelerating global crises, with disastrous consequences. The paradoxical obsolescence of high tech could slide humanity into a new Dark Age.11

The Feral City

Futurists disturbingly describe a Decline to Disaster scenario as five times more likely to happen than the high-tech one. From Tainter’s theory of collapse and Jane Jacobs’s Dark Age Ahead we learn that the cycles of urban problem-solving lead to more problems and ultimately failures. If Murphy’s Law kicks in, futurists predict a 60% chance that large parts of the world may be plunged into an Armageddon-type techno-dystopian scenario, typified by the films Mad Max (1979) and Blade Runner (1982).12

Apocalyptic feral cities, once vital components of national economies, are routinely imagined as vast, sprawling urban environments defined by blighted buildings. They are an immense petri dish of both ancient and new diseases; the rule of law has long since been replaced by gang anarchy, and the only security available is attained through brute power.13

Neat suburban areas were long ago stripped for their raw materials. Daily life in feral cities is characterized by the ubiquitous specter of murder, bloodshed, and war, the militarization of young men, and the constant threat of rape against women. Urban enclaves are separated by wild zones – fragmented habitats consisting of wild nature and subsistence agriculture. With minimal or no sanitation facilities, a complete absence of environmental controls, and massive populations, feral cities suffer from extreme air pollution from vehicles and from the use of open fires and coal for cooking and heating. In effect toxic-waste dumps, these cities pollute vast stretches of land, poisoning coastal waters, watersheds, and river systems throughout their hinterlands.14

Pollution is exported outside the enclaves, where the practices of the desperately poor, and the extraction of resources for the wealthy, induce extreme environmental deterioration. Rivers flow with human waste and leached chemicals from mining, contaminating much of the soil on their banks.15

Globally connected, a feral city might possess a modicum of commercial linkages, and some of its inhabitants might have access to advanced communication and computing. In some areas, high-yield, genetically modified, and biomass crops might be cultivated by force. But secure long-distance travel nearly disappears, undertaken mostly by the super-rich and otherwise powerful.16

Dystopian reality and dystopian art: (a) Bangladeshis hanging on a train during Ijtema in 2017 already live the dystopian future. (b) A dystopian city

Futurists backcasting from 2050 say that the current urbanization of violence and war is a harbinger of the feral city scenario. But feral cities have long been present. The Warsaw Ghetto in World War Two was among them, as were Los Angeles’ Watts neighborhood in the 1960s and 1990s, Mogadishu in 2003, and Gaza repeatedly.17

Walled City of Kowloon

Conflict and crime changed once charming, peaceful Aleppo, Bamako, Caracas, Erbil, Mosul, Tripoli, and Salvador into feral cities. Medieval San Gimignano was one. Spectacularly, from 1889 to 1994 the ghastly spaces of Hong Kong’s singular urban phenomenon, the Walled City of Kowloon, provided a living example.18

Muddling Through

The good news is that futurists tend to believe in a 65%-85% probability of a Muddling Through scenario. Despite interlinked, cascading catastrophes, they suggest that technologies may gain some ground on the problems. They suggest the world will somehow secure a sustainable existence for 9 billion people by 2050 – massively changed, yet somehow livable.19

Lending credibility to the Muddling Through scenario is that it blends numerous hypotheses. It predicts that people living in rural communities will tend the land scientifically. Its technological salvation hypothesis posits that science will come to the rescue. Its free market hypothesis assumes that commerce will drive technological advancements.20

It pictures a “conserver” society tinged by Marxism, a neo-puritan “ecotopia,” colored by both the high-tech and feral scenarios. Tropical diseases, corruption, capitalism, socialism, inequality, and war are not eradicated. But nationalism, tribalism, and xenophobia are reduced after global traumas. Though measurably poorer, most people will still have a reasonable level of wellbeing.21 According to the Muddling Through scenario, large cities retract and densify around their old centers and waterfronts. Largely self-sufficient, small towns and cities survive amid the ruins of suburban sprawl, separated by resurgent forests and fields. Shopping malls, office towers and office parks, town dumps, tract homes, and abandoned steel and glass buildings are stripped for their recyclables. Unsalvageable downtowns in some cases go feral.22

A mix of high and low tech fosters digital communication with those at a distance. There would be drip irrigation, hydroponic farming, aquaculture, and grey water recycling, overlaid with artificial intelligence, biotechnology and biomimicry, nuclear power, geoengineering, and oil from algae.23

In some places, rail links are maintained, but cars are a rarity, and transportation is greatly reduced. Collapsed or dismantled freeways and bridges return to the forest or desert. While flying still exists, it is rarer. But expanded virtual mobility offering “holodeck” experiences subsumes tourism. Cosmopolitanism happens on the porch with an iPad.24

Surprisingly, the Muddling Through scenario ends up with an urban fabric similar in its properties to what homeostatic planning would have produced had it been done intentionally. Work is a short walk from home. Corner stores pop up, as do rudimentary cafés, bistros, and other gathering places. Forty percent of the food is produced in or around cities on small farms. Wildlife returns to course freely. Groups of travelers move on surviving “high roads.” Communities meet at large sports venues situated in the countryside between them.25

Sea level rise is met with river and sea walls. At their base, vast new coral beds and kelp forests grow over the skeletons of submerged districts and towns. In a matter of years, rivers and seas build new beaches. Their flood plains are populated with new plants. Smaller scale trade waterfronts are reactivated for shipping, and some ships are even powered by sail. Cities occupying harbors, rivers, and railroad junctions reconnect to distant supply chains, mostly for non-quotidian (i.e., luxury) goods.26

Learning from Rome to Understand Detroit

Rome’s deterioration from a third-century city of more than 1,000,000 people started long before it was acknowledged. An unnoticed population drop to 800,000 was accompanied by ever larger buildings of decreasing beauty and craft, including the huge Baths of Diocletian (298-306 CE). Rome’s walls were built (271-275 CE) in anticipation of barbarian invasion, and the city was sacked twice (410 and 455 CE).27

But as if in a dream, 5th century life of the diminishing but still substantial population continued as normal. Invading Goths maintained Rome’s Senate, taxes, and cops. But administrative and military infrastructure vaporized. An unraveling education system led to the rise of illiteracy. Noble families began using mob politics, economic and social linkages broke down, travel and transportation became unsafe, and manufacturing collapsed.28

Rome when it was empty: Campo Vaccino (Empty Field), Claude Lorrain (1604/1605–1682), 1636, Louvre, Paris

By 500 CE, Rome had less than 100,000 people. Systematic agriculture disappeared, and much land returned to forest. The Pope and nobility pillaged abandoned public buildings for their materials. The expansive city was reduced to small groups of inhabited buildings, interspersed among large areas of abandoned ruins and overgrown vegetation. In the 12th and 13th centuries the population of Rome was possibly as few as 20,000 people.29

The long journey from the first cities, to Ancient Greece, Rome, and the Middle Ages, through Paris, Washington, and Shanghai, helps us understand how our cities might end up. Holding Rome up to the mirrors of history reads like backcasting Rome’s decline and survival in a Muddling Through scenario from today’s vantage point. Halal predicted that muddling would start around 2023 to 2027, and that if we weren’t muddling by then, collapse would set in by 2029.30

Detroit started muddling in 1968. New York proved to be a fragile city during blackouts, as did Dubai in its 2009 financial crisis. Since the 1970s, most of America’s ten “dead cities,” many formerly among its largest and most vibrant, have come disturbingly close to being feral. The overlapping invisibilities of heavily armed warlords and brutal police make the favelas of Medellín and Rio de Janeiro virtually feral.31

Today we are at a tipping point. We can wait for the collapse of systems to reach homeostasis or attain it intentionally by applying Classic Planning principles.32

If you enjoyed this post, please also see Dr. Buras’ other posts:

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design, and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort Worth; the Washington, DC Dulles Metro line; and work on the U.S. Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacted architecture. After the last decade of applying in practice the classical method he learned in post-doctoral studies, his book, The Art of Classic Planning (Harvard University Press, 2019), presents the urban design and planning method of Classic Planning as a path forward for homeostatic, durable urbanism.


1 Population growth, clean water, compromised resilience of infrastructures, drug-resistant microbes, pandemics, possible famine, authoritarian regimes, social breakdowns, terrestrial cataclysms, terrorist mischief, nuclear mishaps, perhaps major war, inequity, education and healthcare collapse, climate change, ecological devastation, biodiversity loss, ocean acidification, world confusion, institutional gridlock, failures of leadership, failure to cooperate. Sources include: Glenn, Jerome C., Theodore J. Gordon, Elizabeth Florescu, 2013-14 State of the Future Millennium Project: Global Futures Studies and Research, Millennium-project.org (website), Washington, DC, 2014; Cutter, S. L. et al., Urban Systems, Infrastructure, and Vulnerability, in Climate Change Impacts in the United States: The Third National Climate Assessment, in Melillo, J. M. et al., (eds.), U.S. Global Change Research Program, 2014, Ch. 11, pp. 282-296; Kaminski, Frank, A review of James Kunstler’s The Long Emergency 10 years later, Mud City Press (website), Eugene, OR, 9 March 2015; Urban, Mark C., Accelerating extinction risk from climate change, Science Magazine, Vol. 348, Issue 6234, 1 May 2015, pp. 571-573; Kunstler, J.H., Clusterfuck Nation: A Glimpse into the Future, Kunstler.com (website), 2001b; US Geological Survey, Materials Flow and Sustainability, Fact Sheet FS-068-98, June 1998; Klare, M. T., The Race for What’s Left, Metropolitan Books, New York, 2012; Drielsma, Johannes A. et al., Mineral resources in life cycle impact assessment – defining the path forward, International Journal of Life Cycle Assessment, 21 (1), 2016, pp. 85-105; Meinert, Lawrence D. et al., Mineral Resources: Reserves, Peak Production and the Future, Resources 5(14), 2016; OECD World Nuclear Agency and International Atomic Energy Agency, 2004; Tahil, William, The Trouble with Lithium Implications of Future PHEV Production for Lithium Demand, Meridian International Research, 2007; Turner, Graham, Cathy Alexander, Limits to Growth was right. New research shows we’re nearing collapse, Guardian, Manchester, 1 September 2014; Kelemen, Peter, quoted in Cho, Renee, Rare Earth Metals: Will We Have Enough?, in State of the Planet, News from the Earth Institute, Earth Institute, Columbia University, September 19, 2012; Griffiths, Sarah, The end of the world as we know it? CO2 levels to reach a ‘tipping point’ on 6 June – and Earth may never recover, expert warns, Daily Mail, London, 12 May 2016; van der Werf, G.R. et al., CO2 emissions from forest loss, Nature Geoscience, Volume 2, November 2009, pp. 737–738; Global Deforestation, Global Change Program, University of Michigan, January 4, 2006; Arnell, Nigel, Future worlds: a narrative description of a plausible world following climate change, Met Office, London, 2012; The End, Scientific American, Special Issue, Sept 2010; Dator, Jim, Memo on mini-scenarios for the pacific island region, 3, November, 1981b, quoted in Bezold, Clement, Jim Dator’s Alternative Futures and the Path to IAF’s Aspirational Futures, Journal of Futures Studies, 14(2), November 2009, pp. 123 – 134.

2 Halal, William, Through the megacrisis: the passage to global maturity, Foresight Journal, VOL. 15 NO. 5, 2013a, pp. 392-404; Halal, William E., and Michael Marien, Global MegaCrisis Four Scenarios, Two Perspectives, The Futurist, Vol. 45, No. 3, May-June 2011; Halal, William E., Forecasting the technology revolution: Results and learnings from the TechCast project, Technological Forecasting and Social Change, 80.8, 2013b, pp. 1635-1643; TechCast Project, George Washington University, TechCast.org (website), Washington, DC, N.D.; National Research Council, Persistent Forecasting of Disruptive Technologies—Report 2, The National Academies Press, Washington, DC,2010.  Halal, William E., Technology’s Promise: Expert Knowledge on the Transformation of Business and Society, Palgrave Macmillan, London, 2008; Halal et al., The GW Forecast of Emerging Technologies, Technology Forecasting & Social Change, Vol. 59, 1998, pp. 89-110. The name was inspired by the oracle at Delphi (8th century BCE to 390 CE). The modern Delphi Method helps uncover data, and collect and distill the judgments of experts using rounds of questionnaires, interspersed with feedback. Each round is developed based on the results of the previous, until the research question is answered, a consensus is reached, a theoretical saturation is achieved, or sufficient information was exchanged. Linstone, Harold A., & Murray Turoff (eds.), The Delphi method: Techniques and applications, Addinson-Wesley, London, 1975; Halal, William E., Business Strategy for the Technology Revolution: Competing at the Edge of Creative Destruction, Journal of Knowledge Economics, Springer Science+Business Media, New York, September 2012. The author consolidated both of Halal and Marien muddling scenarios into one. The uncertainty of each particular forecast element was about 20% – 30 %.

3 Dator, James, Advancing Futures, Westport: Ct, Praeger, 2002; Bezold, 2009.

4 Chan, Tony, in Reubold, Todd, Envision 2050: The Future of Cities, Ensia.com (website), 16 June, 2014; Kunstler, James Howard, Back to the Future, Orion Magazine, June 23, 2011. Urry, John et al., Living in the City, Foresight, Government Office for Science, London, 2014; Hoff, Mary, Envision 2050: The Future of Transportation, Ensia.com (website), 31 March, 2014.

5 Kaku, Michio, The World in 2100, New York Post, New York, 20 March 2011. Tonn, Bruce E., LeCorbusier Meets the Jetsons in Anytown U.S.A. in the Year 2050: Glimpses of the Future, Planning Forum, Community and, Regional Planning, Volume 8, School of Architecture, The University of Texas, Austin, 2002; Urry et al., 2014.

6 Kaku, 2011; Hon, 2016. Rubbish bins will send alarms when they are about full. Talking garbage bins will reward people with poems, aphorisms, and songs for placing street rubbish in the bin. Heinonen, 2013.

7 Urry et al., 2014.

8 Heinonen, 2013. The prefix “cy-,” an abbreviation of cybernetics, relates to computers and virtual reality. The suffix “-burg” means city or fortified town. Urrutia, Orlando, Eco-Cybernetic City of the Future, Pacebutler.com (website), 12 February 2010; Tonn, 2002.

9 Shepard, M., Sentient City: Ubiquitous Computing, Architecture, And The Future of Urban Space. MIT Press, Cambridge, 2011; Kurzweil, Ray, The Singularity is Near, Penguin Group, New York, 2005. Some futurists predict that the energy required to keep a “global brain” operating may so deplete energy that it will bankrupt society and cause total collapse. Heinonen, 2013. The terms smart city, intelligent city, and digital city are sometimes synonymous, but the digital or intelligent city is considered heavily technological. Heinonen, 2013; Giffinger, Rudolf et al., Smart cities – Ranking of European medium-sized cities. Centre of Regional Science, Vienna UT, October 2007; Kaku, 2011; Vermesan, Ovidiu and Friess, Peter, Internet of Things: Converging Technologies for Smart Environments and Integrated Ecosystems, River Publishers, Aalborg DK, 2013; Cooper, G., Using Technology to Improve Society, The Guardian, Manchester, 2010; Heinonen, 2013. Typical smart city programs utilize traffic data visualization, smart grids, smart water and e-government solutions, The Internet, smartphones, inexpensive sensors, and mobile devices. Amsterdam, Dubai, Cairo, Edinburg, Malaga, and Yokohama have smart city schemes. Webb, Molly et al., Information Marketplaces: The New Economics of Cities, The Climate Group, ARUP, Accenture and The University of Nottingham, 2011.

10 Dator, 2002; Bezold, 2009. The Jetsons originally ran a single season in 1962-63. It was revived but not resuscitated in 1985. The term Jetsons today stands for “unlikely, faraway futurism.” Novak, Matt, 50 Years of the Jetsons: Why The Show Still Matters, Smithsonian.Com, 19 September 2012.

11 Perrow, Charles, Normal Accidents: Living with High-Risk Technologies, Basic Books, New York, 1984. By adding complexity, including conventional engineering warnings, precautions, and safeguards, systems failure not only becomes inevitable, but it may help create new categories of accidents, such as those of Bhopal, the Challenger disaster, Chernobyl, and Fukushima. Deconcentrating high-risk populations, corporate power, and critical infrastructures is suggested. Perrow, Charles, The Next Catastrophe: Reducing Our Vulnerabilities to Natural, Industrial, and Terrorist Disasters, Princeton University Press, Princeton, 2011; Turner, 2014; Jacobs, Jane, Dark Age Ahead, Random House, New York, 2004, p.24.

12 Jacobs, 2004; Dirda, Michael, A living urban legend on the sorry way we live now, Washington Post, Washington DC, 6 June, 2004; Dator, 2002; Bezold, 2009; Dator, James, Alternative futures & the futures of law, in Dator, James & Clement Bezold (eds.), Judging the future, University of Hawaii Press, Honolulu, 1981. pp.1-17; Halal, 2013b.

13 The term feral city was coined in Norton, Richard J., Feral Cities, Naval War College Review, Vol. LVI, No. 4, Autumn 2003. See also Brunn, Stanley D. et al., Cities of the World: World Regional Urban Development, Rowman & Littlefield, Lanham, MD, 2003, pp. 5–14, chap. 1.

14 Norton, 2003.

15 Urry, J., Offshoring. Polity, Cambridge, 2014; Gallopin, G., A. Hammond, P. Raskin, R. Swart, Branch Points, Global Scenario Group, Stockholm Environment Institute, Stockholm, 1997, p. 34. Norton, 2003.

16 Tonn, 2002; Urry et al., 2014.

17 Backcasting is future hindsight. Kilcullen, David, Out of the Mountains: The Coming Age of the Urban Guerrilla, Oxford University Press, Oxford, 2013.

18 Heterotopia, in Foucault, Michel, The Order of Things, Vintage Books, New York, 1971; Foucault, M., Of Other Spaces, Diacritics 16, 1986, pp. 22-27. Girard, Greg, and Ian Lambot, City of Darkness: Life in Kowloon Walled City, Watermark, Chiddingfold, 1993, 2007, 2014; Tan, Aaron Hee-Hung, Kowloon Walled City: Heterotopia in a Space of Disappearance (Master’s Thesis), Harvard University, Cambridge, MA, 1993; Sinn, Elizabeth, Kowloon Walled City: Its Origin and Early History (PDF). Journal of the Hong Kong Branch of the Royal Asiatic Society, 27, 1987, pp. 30–31; Harter, Seth, Hong Kong’s Dirty Little Secret: Clearing the Walled City of Kowloon, Journal of Urban History 27, 1, 2000, pp. 92-113; Grau, Lester W. and Geoffrey Demarest, Diehard Buildings: Control Architecture a Challenge for the Urban Warrior, Military Review, Combined Arms Center, Fort Leavenworth, Kansas, September / October 2003; Kunstler, James Howard, A Reflection on Cities of the Future, Energy Bulletin, Post Carbon Institute, 28 September, 2006; ArenaNet Art Director Daniel Dociu wins Spectrum 14 gold medal!, Guild Wars.com (website), 9 March 2007. Authors, game designers, and filmmakers used the Walled City to convey a sense of feral urbanization. It was the setting for Jean-Claude Van Damme’s 1988 film Bloodsport; Jackie Chan’s 1993 film Crime Story was partly filmed there during among genuine scenes of building demolition; and the video game Shadowrun: Hong Kong features a futuristic Walled City. Today the location of the former Kowloon Walled City is occupied by a park modelled on early Qing Dynasty Jiangnan gardens.

19 Halal, 2013a; Wright, Austin Tappan, Islandia, Farrar & Rinehart, New York, Toronto, 1942; Tonn, Bruce E., Anytown U.S.A. in the Year 2050: Glimpses of the Future, Planning Forum, Community and, Regional Planning, Volume 8, School of Architecture, The University of Texas, Austin, 2002; Porritt, Jonathon, The World We Made: Alex McKay’s Story from 2050, Phaidon Press, London, 2013. World Made by Hand novels by James Howard Kunstler: World Made By Hand, Grove Press, New York, 2008; The Witch of Hebron, Atlantic Monthly Press, 2010; A History of the Future, Atlantic Monthly, 2014; The Harrows of Spring, Atlantic Monthly Press, 2016

20 Turner, 2014.

21 Dator, 2002; Bezold, 2009; Dator & Bezold, 1981; Dator, 1981a; Dator, 1981b; Dator, James, The Unholy Trinity, Plus One (Preface), Journal of Futures Studies, University of Hawaii, 13(3), February 2009, pp. 33 – 48; McDonough, William & Michael Braungart, Cradle to Cradle: Remaking the Way We Make Things, Macmillan, New York, 2002; Porritt, 2013; Urry et al., 2014.

22 Wright, 1942; Kunstler, 2011; Givens, Mark, Bring It On Home: An Interview with James Howard Kunstler, Urban Landscapes and Environmental Psychology, Mung Being (website), Issue 11, N.D., p. 30; Kunstler, World Made by Hand series.

23 Tonn, 2002; Mollison, B. C. Permaculture: A Designer’s Manual. Tagari Publications, Tyalgum, Australia, 1988; Holmgren, D. and B. Mollison, Permaculture One, Transworld Publishers, Melbourne, 1978; Holmgren, D., Permaculture: Principles and Pathways beyond Sustainability, Holmgren Design Services, Hepburn, Victoria, Australia, 2002; Holmgren, David, Future Scenarios: How Communities Can Adopt to Peak Oil and Climate Change, Chelsea Green Publishing White River Junction, Vermont, 2009; Walker, L., Eco-Village at Ithaca: Pioneering a Sustainable Culture, New Society Publishers, Gabriola Island, 2005; Hopkins, R., The Transition Handbook: From Oil Dependency to Local Resilience, Green Books, Totnes, Devon, 2008; Urry et al., 2014; Porritt, 2013.

24 Urry et al., 2014; Porritt, 2013; Caletrío, Javier, “The world we made. Alex McKay’s story from 2050” by Jonathon Porritt (review), Mobile Lives Forum, forumviesmobiles.org (website), 21 May 2015.

25 Kunstler, 2001b.

26 Kunstler, 2006; Williams, 2014.

27 Krautheimer, Richard, Rome: Profile of A City, 312-1308, Princeton University Press, Princeton, 1980.

28 Palmer, Ada, The Shape of Rome, exurbe.com (website), Chicago, 15 August 2013.

29 Procopius of Caesarea (c. 490/507-c. 560s), Procopius, Dewing, H. B., and Glanville Downey (trans.), Harvard University Press, Cambridge, MA, 2000. On the Wars in eight books (Polemon or De bellis) was published 552, with an addition in 554; Storey, Glenn R., The population of ancient Rome, Antiquity, December 1, 199; Wickham, Chris, Medieval Rome: Stability and Crisis of a City, 900-1150, Oxford Studies in Medieval European History, Oxford University Press, New York, Oxford, 2015. Population numbers are uncertain well into the Renaissance. Krautheimer, 1980.

30 Porritt, 2013; Alexander, Samuel, Resilience through Simplification: Revisiting Tainter’s Theory of Collapse, Simplicity Institute Report, Melbourne (?), 2012b; Palmer, 2013: Halal, 2013a, 2013b.

31 America’s “Ten Dead Cities” in 2010: Buffalo; Flint; Hartford; Cleveland; New Orleans; Detroit; Albany; Atlantic City; Allentown, and Galveston. McIntyre, Douglas A., America’s Ten Dead Cities: From Detroit to New Orleans, 24/7 Wall Street (website), 23 August, 2010; Gibson, Campbell, Population of The 100 Largest Cities And Other Urban Places In The United States: 1790 To 1990, Population Division, U.S. Bureau of the Census, Washington, DC, June 1998. See also “America’s 150 forgotten cities.” Hoyt, Lorlene and André Leroux, Voices from Forgotten Cities Innovative Revitalization Coalitions in America’s Older Small Cities, MIT, Cambridge, MA, 2007; Manaugh, Geoff, Cities Gone Wild, Bldgblog.com (website), 1 December 2009.

32 Buras, Nir, The Art of Classic Planning for Beautiful and Enduring Communities, Harvard University Press, Cambridge, 2019.

114. Mad Scientist Science Fiction Writing Contest 2019

Futuristic tank rendering  / Source: U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC)

[Editor’s Note: Storytelling is a powerful tool that allows us to envision how innovative technologies could be employed and operationalized in the Future Operational Environment. Mad Scientist is seeking your visions of future combat with our Science Fiction Writing Contest 2019. Our deadline for submission is 1 APRIL 2019, so please review the contest details below, get those creative writing juices flowing, and send us your visions of combat in 2030!]

Still from “The Future of the Soldier” video / Source:  U.S. Army Natick Soldier Research Development and Engineering Center

Background: The U.S. Army finds itself at a historical inflection point, where disparate yet related elements of an increasingly complex Operational Environment (OE) are converging, and fast-moving trends are rapidly transforming all aspects of society and human life – including the character of warfare. It is important to take a creative approach to projecting and anticipating both transformational and enduring trends that will lend themselves to the depiction of the future. In this vein, the U.S. Army Mad Scientist Initiative is seeking your creativity and unique ideas to describe a battlefield that does not yet exist.

Illustration from “Silent Ruin” by Don Hudson & Kinsun Lo / Source: U.S. Army Cyber Institute at West Point

Task: Write about the following scenario – On March 17th, 2030, the country of Donovia, after months of strained relations and covert hostilities, invades neighboring country Otso. Donovia is a wealthy nation that is a near-peer competitor to the United States. Like the United States, Donovia has invested heavily in disruptive technologies such as robotics, AI, autonomy, quantum information sciences, bio enhancements and gene editing, space-based weapons and communications, drones, nanotechnology, and directed energy weapons. The United States is a close ally of Otso and is compelled to intervene due to treaty obligations and historical ties. The United States is about to engage Donovia in its first battle with a near-peer competitor in over 80 years…

Three ways to approach:
1) Forecasting – Description of the timeline and events leading up to the battle.
2) Describing – Account of the battle while it’s happening.
3) Backcasting – Retrospective look after the battle has ended (i.e., After Action Review or lessons learned).

Three questions to consider while writing (U.S., adversaries, and others):
1) What will forces and Soldiers look like in 2030?
2) What technologies will enable them or be prevalent on the battlefield?
3) What do Multi-Domain Operations look like in 2030?

Submission Guidelines:
– No more than 5000 words in length
– Provide your submission in .doc or .docx format
– Please use conventional text formatting (e.g., no columns) and have images “in line” with text
– Submissions from Government and DoD employees must be cleared through their respective PAOs prior to submission
– MUST include completed release form (on the back of contest flyer)
– CANNOT have been previously published

Selected submissions may be chosen for publication or a possible future speaking opportunity.

Contact: Send your submissions to: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil

For additional storytelling inspiration, please see the following blog posts:


113. Connected Warfare

[Editor’s Note: As stated previously here in the Mad Scientist Laboratory, the nature of war remains inherently humanistic in the Future Operational Environment.  Today’s post by guest blogger COL James K. Greer (USA-Ret.) calls on us to stop envisioning Artificial Intelligence (AI) as a separate and distinct end state (oftentimes in competition with humanity) and to instead focus on preparing for future connected competitions and wars.]

The possibilities and challenges for future security, military operations, and warfare associated with advancements in AI are proposed and discussed with ever-increasing frequency, both within formal defense establishments and informally among national security professionals and stakeholders. One is confronted with a myriad of alternative futures, including everything from a humanity-killing variation of Terminator’s SkyNet to uncontrolled warfare à la WarGames to Deep Learning used to enhance existing military processes and operations. And, of course, legal and ethical issues surrounding the military use of AI abound.

Source: tmrwedition.com

Yet in most discussions of the military applications of AI and its use in warfare, we have a blind spot in our thinking about technological progress toward the future. That blind spot is that we think about AI largely as disconnected from humans and the human brain. Rather than thinking about AI-enabled systems as connected to humans, we think about them as parallel processes. We talk about human-in-the-loop or human-on-the-loop largely in terms of the control over autonomous systems, rather than comprehensive connection to and interaction with those systems.

But even while significant progress is being made in the development of AI, almost no attention is paid to the military implications of advances in human connectivity. Experiments have already been conducted connecting the human brain directly to the internet, which of course connects the human mind not only to the Internet of Things (IoT), but potentially to every computer and AI device in the world. Such connections will be enabled by a chip in the brain that provides connectivity while enabling humans to perform all normal functions, including all those associated with warfare (as envisioned by John Scalzi’s BrainPal in “Old Man’s War”).

Source: Grau et al.

Moreover, experiments in connecting human brains to each other are ongoing. Brain-to-brain connectivity has occurred in a controlled setting enabled by an internet connection. And, in experiments conducted to date, the brain of one human can be used to direct the weapons firing of another human, demonstrating applicability to future warfare. While experimentation in brain-to-internet and brain-to-brain connectivity is not as advanced as the development of AI, it is easy to see that the potential benefits, desirability, and frankly, market forces are likely to accelerate the human side of connectivity development past the AI side.

Source: tapestrysolutions.com

So, when contemplating the future of human activity, of which warfare is unfortunately a central component, we cannot and must not think of AI development and human development as separate, but rather as interconnected. Future warfare will be connected warfare, with implications we must now begin to consider. How would such connected warfare be conducted? How would mission command be exercised between man and machine? What are the leadership implications of the human leader’s brain being connected to those of their subordinates? How will humans manage information for decision-making without being completely overloaded and paralyzed by overwhelming amounts of data? What are the moral, ethical, and legal implications of connected humans in combat, as well as responsibility for the actions of machines to which they are connected? These and thousands of other questions and implications related to policy and operation must be considered.

The power of AI resides not just in that of the individual computer, but in the connection of each computer to literally millions, if not billions, of sensors, servers, computers, and smart devices employing thousands, if not millions, of software programs and apps. The consensus is that at some point the computing and analytic power of AI will surpass that of the individual. And therein lies a major flaw in our thinking about the future. The power of AI may surpass that of a human being, but it won’t surpass the learning, thinking, and decision-making power of connected human beings. When a future human is connected to the internet, that human will have access to the computing power of all AI. But, when that same human is connected to several (in a platoon), hundreds (on a ship), or thousands (in multiple headquarters) of other humans, then the power of AI will be exceeded by multiple orders of magnitude. The challenge, of course, is being able to think effectively under those circumstances, with your brain connected to all those sensors, computers, and other humans. This is what Ray Kurzweil terms “hybrid thinking.” Imagine how that is going to change every facet of human life, to include every aspect of warfare, and how everyone in our future defense establishment, uniformed or not, will have to be capable of hybrid thinking.

Source: Genetic Literacy Project

So, what will the military human bring to warfare that the AI-empowered computer won’t? Certainly, one of the major challenges with AI thus far has been its inability to demonstrate human intuition. AI can replicate some derivative tasks with intuition using what is now called “Artificial Intuition.” These tasks are primarily the intuitive decisions that result from experience: AI generates this experience through some large number of iterations, which is how Google’s AlphaGo was able to beat the human world Go champion. Still, this is only a small part of the capacity of humans in terms not only of intuition, but of “insight,” what we call the “light bulb moment.” Humans will also bring emotional intelligence to connected warfare. Emotional intelligence, including aspects such as empathy, loyalty, and courage, is critical in the crucible of war and is not a capability that machines can provide the Force, not today and perhaps not ever.

Warfare in the future is not going to be conducted by machines, no matter how far AI advances. Warfare will instead be connected human to human, human to internet, and internet to machine in complex, global networks. We cannot know today how such warfare will be conducted or what characteristics and capabilities of future forces will be necessary for victory. What we can do is cease developing AI as if it were something separate and distinct from, and often envisioned in competition with, humanity and instead focus our endeavors and investments in preparing for future connected competitions and wars.

If you enjoyed this post, please read the following Mad Scientist Laboratory blog posts:

… and watch Dr. Alexander Kott‘s presentation The Network is the Robot, presented at the Mad Scientist Robotics, Artificial Intelligence, & Autonomy: Visioning Multi Domain Battle in 2030-2050 Conference, at the Georgia Tech Research Institute, 8-9 March 2017, in Atlanta, Georgia.

COL James K. Greer (USA-Ret.) is the Defense Threat Reduction Agency (DTRA) and Joint Improvised Threat Defeat Organization (JIDO) Integrator at the Combined Arms Command. A former cavalry officer, he served thirty years in the US Army, commanding at all levels from platoon through brigade. Jim served in operational units in CONUS, Germany, the Balkans, and the Middle East. He served in US Army Training and Doctrine Command (TRADOC), primarily focused on leader, capabilities, and doctrine development. He has significant concept development experience, co-writing concepts for Force XXI, Army After Next, and Army Transformation. Jim was the Army representative to the OSD Net Assessment 20XX Wargame Series, developing concepts for OSD and the Joint Staff. He is a former Director of the Army School of Advanced Military Studies (SAMS) and instructor in tactics at West Point. Jim is a veteran of six combat tours in Iraq, Afghanistan, and the Balkans, including serving as Chief of Staff of the Multi-National Security Transition Command – Iraq (MNSTC-I). Since leaving active duty, Jim has led the conduct of research for the Army Research Institute (ARI) and designed, developed, and delivered instruction in leadership, strategic foresight, design, and strategic and operational planning. Dr. Greer holds a Doctorate in Education, with a dissertation on US Army leader self-development. A graduate of the United States Military Academy, he holds a Master’s Degree in Education, with a concentration in Psychological Counseling, as well as Master’s Degrees in National Security from the National War College and Operational Planning from the School of Advanced Military Studies.

111. AI Enhancing EI in War

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s)  decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]

Humans are susceptible to cognitive biases, and these biases sometimes result in catastrophic outcomes, particularly in the high-stress environment of wartime decision-making. Artificial Intelligence (AI) offers the possibility of mitigating the susceptibility to negative outcomes in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and, as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like communicating with a staff officer, and that is built on human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.

Mission Command in the Army is the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and mass the effects of combat power.ii

The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically, at much quicker speeds in order to seize the initiative in war.iii The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.

AI Staff Officer

To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem solving domains than in those that require broad understanding.iv This means that, for now, humans continue to retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi AI’s near-term niche will continue to advance rapidly in narrow domains and become a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.

The battalion is a good example of an organization in which to visualize this framework. A machine-learning software system could be connected to different staff systems to analyze the data produced by the staff sections as they execute their warfighting functions. This machine-learning software system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander’s assessments. Over time, this EI-based machine-learning software system could rank the quality of the staff officers’ judgments. The commander could then weigh the staff officers’ assessments against those officers’ track records of reliability and the raw data provided by the staff sections’ systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm to assess the “believability” of its employees’ judgments before making high-stakes, and sometimes time-critical, international financial decisions.vii In such a multi-layered machine-learning system applied to the battalion, there would also be an assessment of the commander’s own reliability, to maximize objectivity.
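To make the believability-weighting idea concrete, a minimal sketch follows. It is purely illustrative: the record format, the simple historical-accuracy weighting rule, the staff-section names, and the example numbers are assumptions made for this post, not a description of Bridgewater’s algorithm or of any fielded Army system.

```python
"""Minimal sketch of believability-weighted staff assessments.

Illustrative only: the data structures, the weighting rule (historical
accuracy with a weak prior), and the example values are assumptions.
"""

from dataclasses import dataclass


@dataclass
class StaffOfficer:
    name: str
    correct_calls: int = 0
    total_calls: int = 0

    @property
    def believability(self) -> float:
        """Historical accuracy with a weak prior, so a new officer starts near 0.5."""
        return (self.correct_calls + 1) / (self.total_calls + 2)

    def record_outcome(self, was_correct: bool) -> None:
        """Update the officer's track record once ground truth is known (the feedback step)."""
        self.total_calls += 1
        if was_correct:
            self.correct_calls += 1


def weighted_assessment(assessments: dict, officers: dict) -> float:
    """Aggregate numeric assessments (e.g., estimated probability of an enemy
    attack in the north) weighted by each officer's believability score."""
    total_weight = sum(officers[name].believability for name in assessments)
    return sum(officers[name].believability * value
               for name, value in assessments.items()) / total_weight


if __name__ == "__main__":
    officers = {
        "S2": StaffOfficer("S2", correct_calls=8, total_calls=10),  # strong track record
        "S3": StaffOfficer("S3", correct_calls=3, total_calls=10),  # weaker track record
    }
    # Each officer's estimate of the probability the enemy attacks in the north.
    assessments = {"S2": 0.8, "S3": 0.3}
    estimate = weighted_assessment(assessments, officers)
    print(f"Believability-weighted estimate: {estimate:.2f}")  # roughly 0.65

    # After the operation, record whether each call proved correct so the weights evolve.
    officers["S2"].record_outcome(True)
    officers["S3"].record_outcome(False)
```

In the multi-layered framework described above, the same running score could also be maintained for the commander, providing the quantified check on the commander’s own reliability mentioned earlier.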

Observations by the AI of multiple iterations of human behavioral patterns during simulations and real-world operations would improve its accuracy and enhance trust between this type of AI system and its users. Commanders’ EI skills would be put front and center for scrutiny and could improve drastically, given the responsibility of knowing, with quantifiable evidence, the cognitive-bias shortcomings of the staff at any given time. This assisted decision-making AI framework would also reinforce the commander’s intuition and decisions as it elevates the level of objectivity in decision-making.

Human-Compatibility

The capacity to understand information broadly and conduct unsupervised learning remains the virtue of humans for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander since it supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer implies also providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.

Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and prevent conclusions that result in an AI turning on humanity. These three concepts are: 1) altruism toward the human race (and not toward itself); 2) humility, meaning the AI pursues only human objectives while remaining uncertain about exactly what those objectives are; and 3) learning those objectives by exposing the AI to everything and all types of humans.ix
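A toy sketch of how the second and third principles might be expressed in code is shown below. It is an illustration only: the candidate objectives, the likelihood table, and the simple Bayesian update are assumptions made for this example, not Russell’s implementation or any Army system.

```python
"""Toy sketch of an AI that is uncertain about the human objective (principle 2)
and reduces that uncertainty only by observing human choices (principle 3).
All objectives, observations, and numbers are illustrative assumptions.
"""

# Candidate human objectives the AI considers possible, with equal prior belief.
beliefs = {
    "minimize_casualties": 1 / 3,
    "preserve_infrastructure": 1 / 3,
    "speed_of_advance": 1 / 3,
}

# Assumed likelihood of an observed human choice under each candidate objective.
likelihood = {
    ("chose_deliberate_clearing_operation", "minimize_casualties"): 0.8,
    ("chose_deliberate_clearing_operation", "preserve_infrastructure"): 0.5,
    ("chose_deliberate_clearing_operation", "speed_of_advance"): 0.1,
}


def update_beliefs(observation: str) -> None:
    """Bayesian update: weight each candidate objective by how well it explains the choice."""
    for objective in beliefs:
        beliefs[objective] *= likelihood[(observation, objective)]
    total = sum(beliefs.values())
    for objective in beliefs:
        beliefs[objective] /= total


# The AI never receives an explicit objective of its own; it only watches the commander.
update_beliefs("chose_deliberate_clearing_operation")
print(max(beliefs, key=beliefs.get), beliefs)
# The AI now acts on its evolving belief about the human objective and remains
# correctable as more human behavior is observed.
```

The design point, consistent with Russell’s argument, is that the machine never fixes an objective of its own; it acts on a revisable belief about human objectives, which is what keeps it correctable by the humans it serves.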

Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.

Making AI Work

The potential opportunities and pitfalls are abundant for the employment of AI in decision-making. Apart from the obvious danger of this type of system being hacked, the possibility of the AI’s machine-learning algorithms harboring biased coding inconsistent with the values of the unit employing it is real.

The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust and integrate AI assessments into their natural decision-making process and make them part of their intuitive calculus. In this way, they will have ready access to objective analyses of their units’ potential biases, enhancing their own EI, and be able to overcome those biases to accomplish their mission.

If you enjoyed this post, please also read:

An Appropriate Level of Trust…

Takeaways Learned about the Future of the AI Battlefield

Bias and Machine Learning

Man-Machine Rules

MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.


i U.S. Army. ADRP 5-0: The Operations Process. Headquarters, Department of the Army, 2012, p. 1-1.

ii Ibid. pp. 1-1 – 1-3.

iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries | English, Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.

iv Trent, Stoney, and Scott Lathrop. “A Primer on Artificial Intelligence for Military Leaders.” Small Wars Journal, 2018, smallwarsjournal.com/index.php/jrnl/art/primer-artificial-intelligence-military-leaders.

v Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2019.

vi Evans, Hayley. “Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings.” Lawfare, 2018, https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings.

vii Dalio, Ray. Principles. Simon and Schuster, 2017.

viii Trent and Lathrop.

ix Russell, Stuart, director. Three Principles for Creating Safer AI. TED: Ideas Worth Spreading, 2017, www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai.

104. Critical Thinking: The Neglected Skill Required to Win Future Conflicts

[Editor’s Note: As addressed in last week’s post, entitled The Human Targeting Solution: An AI Story, the incorporation of Artificial Intelligence (AI) as a warfighting capability has the potential to revolutionize combat, accelerating the future fight to machine speeds.  That said, the advanced algorithms underpinning these AI combat multipliers remain dependent on the accuracy and currency of their data feeds. In the aforementioned post, the protagonist’s challenge in overriding the AI-prescribed optimal (yet flawed) targeting solution illustrates the inherent tension between human critical thinking and the benefits of AI.

Today’s guest blog post, submitted by MAJ Cynthia Dehne, expands upon this theme, addressing human critical thinking as the often neglected, yet essential skill required to successfully integrate and employ emergent technologies while simultaneously understanding their limitations on future battlefields.  Warfare will remain an intrinsically human endeavor, the fusion of deliberate and calculating human intellect with ever more lethal technological advances. ]

The future character of war will be influenced by emerging technologies such as AI, robotics, computing, and synthetic biology. Cutting-edge technologies will become increasingly cheap and readily available, introducing a wider range of actors to the battlefield. Moreover, nation-state actors are no longer the drivers of cutting-edge technology; militaries are leveraging a private sector that is leading research and development in emergent technologies. Proliferation of these cheap, accessible technologies will allow both peer competitors and non-state actors to pose serious threats in the future operational environment.  Due to the abundance of new players on the battlefield combined with emerging technologies, future conflicts will be won by those who both possess “critical thinking” skills and can integrate technology seamlessly to inform decision-making in war, instead of relying on technology to win war. Achieving success in the future eras of accelerated human progress and contested equality will require the U.S. Army to develop Soldiers who are adept at seamlessly employing technology on the battlefield while continuously exercising critical thinking skills.

The Foundation for Critical Thinking defines critical thinking as “the art of analyzing and evaluating thinking with a view to improve it.”1 Furthermore, they assert that a well-cultivated critical thinker can do the following: raise vital questions and problems and formulate them clearly and precisely; gather and assess relevant information, using abstract ideas to interpret it effectively; come to well-reasoned conclusions and solutions, testing them against relevant criteria and standards; think open-mindedly within alternative systems of thought, recognizing and assessing, as needed, their assumptions, implications, and practical consequences; and communicate effectively with others in figuring out solutions to complex problems.2

Many experts in education and psychology argue that critical thinking skills are declining. In 2017, Dr. Stephen Camarata wrote about the emerging crisis in critical thinking and college students’ struggles to tackle real-world problem solving. He emphasized the essential need for critical thinking and asserted that “a young adult whose brain has been ‘wired’ to be innovative, think critically, and problem solve is at a tremendous competitive advantage in today’s increasingly complex and competitive world.”3 Although most government agencies, policy makers, and businesses deem critical thinking important, STEM fields continue to be prioritized. However, if creative thinking skills are not fused with STEM, then there will continue to be a decline in those equipped with well-rounded critical thinking abilities. In 2017, Mark Cuban opined during an interview with Bloomberg TV that the nature of work is changing and that the skill most in demand in 10 years will be “creative thinking.” Specifically, he stated, “I personally think there’s going to be a greater demand in 10 years for liberal arts majors than there were for programming majors and maybe even engineering.”4 Additionally, Forbes magazine published an article in 2018 declaring that “creativity is the skill of the future.”5

Employing future technologies effectively will be key to winning war, but it is only one aspect. During the Vietnam War, the U.S. relied heavily on technology but was defeated by an enemy who leveraged simple guerilla tactics combined with minimal military technology. Emerging technologies will be vital to inform decision-making, but will not negate battlefield friction. Carl von Clausewitz observed that although everything in war is simple, the simplest things become difficult, and those difficulties accumulate to create friction.6 Historically, a lack of information caused friction and uncertainty. However, complexity is a driver of friction in current warfare and will heavily influence future warfare. Complex, high-tech weapon systems will dominate the future battlefield and create added friction. Interdependent systems linking communications and warfighting functions will introduce more friction, which will require highly skilled thinkers to navigate.

The newly published U.S. Army in Multi-Domain Operations 2028 concept “describes how Army forces fight across all domains, the electromagnetic spectrum (EMS), and the information environment and at echelon”7 to “enable the Joint Force to compete with China and Russia below armed conflict, penetrate and dis-integrate their anti-access and area denial systems and ultimately defeat them in armed conflict and consolidate gains, and then return to competition.”8 Even with technological advances and intelligence improvements, elements of friction will be present in future wars. Both great armies and asymmetric threats have vulnerabilities: small sources of friction can morph into larger issues capable of crippling a fighting force. Therefore, success in future war depends on military commanders who understand these elements and how to overcome friction. Future technologies must be fused with critical thinking to mitigate friction and achieve strategic success. The U.S. Army must simultaneously emphasize integrating critical thinking into doctrine and exercises when training Soldiers on new technologies.

Soldiers should be creative, innovative thinkers; the Army must foster critical thinking as an essential skill.  The Insight Assessment emphasizes that “weakness in critical thinking skill results in loss of opportunities, of financial resources, of relationships, and even loss of life. There is probably no other attribute more worthy of measure than critical thinking skills.”9 Gaining and maintaining competitive advantage over adversaries in a complex, fluid future operational environment requires Soldiers to be both skilled in technology and experts in critical thinking.

If you enjoyed this post, please also see:

Mr. Chris Taylor’s presentation on Problem Solving in the Wild, from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018;

and the following Mad Scientist Laboratory blog posts:

TRADOC 2028

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

MAJ Cynthia Dehne is in the U.S. Army Reserve, assigned to the TRADOC G-2, and has operational experience in Afghanistan, Iraq, Kuwait, and Qatar. She is a graduate of the U.S. Army Command and General Staff College and holds master’s degrees in International Relations and in Diplomacy and International Commerce.


1 Paul, Richard, and Elder, Linda. Critical Thinking Concepts and Tools. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

2 Paul, R., and Elder, L. Foundation for Critical Thinking. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

3 Camarata, Stephen. “The Emerging Crisis in Critical Thinking.” Psychology Today, March 21, 2017. Accessed October 10, 2018, from https://www.psychologytoday.com/us/blog/the-intuitive-parent/201703/the-emerging-crisis-in-critical-thinking.

4 Wile, Rob. “Mark Cuban Says This Will Be the No.1 Job Skill in 10 Years.” Time, February 20, 2017. Accessed October 11, 2018. http://time.com/money/4676298/mark-cuban-best-job-skill/.

5 Powers, Anna. “Creativity Is The Skill Of The Future.” Forbes, April 30, 2018. Accessed October 14, 2018. https://www.forbes.com/sites/annapowers/2018/04/30/creativity-is-the-skill-of-the-future/#3dd533f04fd4.

6 Clausewitz, Carl von, Michael Howard, Peter Paret, and Bernard Brodie. On War. Princeton, N.J.: Princeton University Press, 1984, p. 119.

7 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 5.

8 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 15.

9 Insight Assessment. “Risks Associated with Weak Critical Thinkers.” Insight Assessment, 2018. Accessed October 22, 2018, from https://www.insightassessment.com/Uses/Risks-Associated-with-Weak-Critical-Thinkers.

100. Prediction Machines: The Simple Economics of Artificial Intelligence

[Editor’s Note: Mad Scientist Laboratory is pleased to review Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Harvard Business Review Press, 17 April 2018.  While economics is not a perfect analog to warfare, this book will enhance our readers’ understanding of narrow Artificial Intelligence (AI) and its tremendous potential to change the character of future warfare by disrupting human-centered battlefield rhythms and facilitating combat at machine speed.]

This insightful book by economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb penetrates the hype often associated with AI by describing its base functions and roles and providing the economic framework for its future applications.  Of particular interest is their perspective of AI entities as prediction machines. In simplifying and demystifying our understanding of AI and Machine Learning (ML) as prediction tools, akin to computers being nothing more than extremely powerful mathematics machines, the authors effectively describe the economic impacts that these prediction machines will have in the future.

The book addresses the three categories of data underpinning AI / ML:

Training: This is the Big Data that trains the underlying AI algorithms in the first place. Generally, the bigger and more robust the data set, the more effective the AI’s predictive capability will be. Activities such as driving (with millions of iterations every day) and online commerce (with similarly large numbers of transactions) in defined environments lend themselves to efficient AI applications.

Input: This is the data that the AI will be taking in, either from purposeful, active injects or passively from the environment around it. Again, defined environments are far easier to cope with in this regard.

Feedback: This data comes from either manual inputs by users and developers or from the AI understanding what effects resulted from its previous applications. While often overlooked, this data is critical to iteratively enhancing and refining the AI’s performance, as well as to identifying biases and skewed decision-making. AI is not a static, one-off product; much like software, it must be continually updated, either through injects or learning (a simple sketch of this training-input-feedback cycle follows below).
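To illustrate that cycle, here is a minimal sketch of a prediction machine being trained, queried with new input, and then refined with feedback. The data, the hand-rolled logistic-regression update, and the numbers are assumptions chosen only to keep the example self-contained; they do not come from the book.

```python
"""Minimal sketch of the training / input / feedback cycle behind a prediction machine.
Illustrative only: the data and the simple online logistic-regression update are assumptions.
"""

import math


def predict(weights: list, features: list) -> float:
    """Return the predicted probability of a positive outcome (logistic model)."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))


def update(weights: list, features: list, outcome: int, lr: float = 0.1) -> None:
    """Nudge the weights toward the observed outcome (the feedback step)."""
    error = outcome - predict(weights, features)
    for i, x in enumerate(features):
        weights[i] += lr * error * x


weights = [0.0, 0.0]  # [bias term, a single notional indicator]

# 1) Training data: historical examples with known outcomes.
training_set = [([1.0, 0.2], 1), ([1.0, 0.9], 0), ([1.0, 0.1], 1), ([1.0, 0.8], 0)]
for _ in range(200):
    for features, outcome in training_set:
        update(weights, features, outcome)

# 2) Input data: a new observation arriving from the environment.
new_input = [1.0, 0.3]
print(f"Predicted probability: {predict(weights, new_input):.2f}")

# 3) Feedback data: the observed result is fed back in, so the prediction machine
#    is refined iteratively rather than remaining a static, one-off product.
observed_outcome = 1
update(weights, new_input, observed_outcome)
```

As the ramifications below suggest, sparse, short-duration, or deliberately “dirty” data degrades each of these three steps in ways that commercial applications rarely face.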

The authors explore narrow AI rather than a general, super, or “strong” AI.  Proclaimed Mad Scientist Paul Scharre and Michael Horowitz define narrow AI as follows:

“their expertise is confined to a single domain, as opposed to hypothetical future ‘general’ AI systems that could apply expertise more broadly. Machines – at least for now – lack the general-purpose reasoning that humans use to flexibly perform a range of tasks: making coffee one minute, then taking a phone call from work, then putting on a toddler’s shoes and putting her in the car for school.”  – from Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security, 19 June 2018

These narrow AI applications could have significant implications for U.S. Armed Forces personnel, force structure, operations, and processes. While economics is not a direct analogy to warfare, there are a number of aspects that can be distilled into the following ramifications:

Internet of Battle Things (IOBT) / Source: Alexander Kott, ARL

1. The battlefield is dynamic and has innumerable variables that have great potential to mischaracterize the ground truth with limited, purposely subverted, or “dirty” input data. Additionally, the relatively short duration of battles and battlefield activities means that AI would not receive the consistent, plentiful, and defined data it would receive in civilian transportation and economic applications.

2. The U.S. military will not be able to just “throw AI on it” and achieve effective results. The effective application of AI will require a disciplined and comprehensive review of all warfighting functions to determine where AI can best augment and enhance our current Soldier-centric capabilities (i.e., identify those workflows and processes – Intelligence and Targeting Cycles – that can be enhanced with the application of AI).  Leaders will also have to assess where AI can replace Soldiers in workflows and organizational architecture, and whether AI necessitates the discarding or major restructuring of either.  Note that Goldman Sachs is in the process of conducting this type of self-evaluation right now.

3. Due to its incredible “thirst” for Big Data, AI/ML will necessitate tradeoffs between security and privacy (the former likely being more important to the military) and quantity and quality of data.


4. In the near to mid-term future, AI/ML will not replace Leaders, Soldiers, and Analysts, but will allow them to focus on the big issues (i.e., “the fight”) by freeing them from the resource-intensive (i.e., time and manpower) mundane and rote tasks of data crunching, possibly facilitating the reallocation of manpower to growing need areas in data management, machine training, and AI translation.

This book is a must-read for those interested in obtaining a down-to-earth assessment on the state of narrow AI and its potential applications to both economics and warfare.

If you enjoyed this review, please also read the following Mad Scientist Laboratory blog posts:

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

… and watch the following presentations from the Mad Scientist Robotics, AI, and Autonomy – Visioning Multi-Domain Battle in 2030-2050 Conference, 7-8 March 2017, co-sponsored by Georgia Tech Research Institute:

“Artificial Intelligence and Machine Learning: Potential Application in Defense Today and Tomorrow,” presented by Mr. Louis Maziotta, Armament Research, Development, and Engineering Center (ARDEC).

Unmanned and Autonomous Systems, presented by Paul Scharre, CNAS.