117. Old Human vs. New Human

[Editor’s Note: On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC. Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces. One finding from this conference is that tomorrow’s Soldiers will learn differently from earlier generations, given the technological innovations that will have surrounded them from birth through their high school graduation.  To effectively engage these “New Humans” and prepare them for combat on future battlefields, the Army must discard old paradigms of learning that no longer resonate (e.g., those desiccated lectures delivered via interminable PowerPoint presentations) and embrace more effective means of instruction.]

The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before them. Marc Prensky, the educational writer and speaker who coined the term digital native, asserts this “New Human” will stand in stark contrast to the “Old Human” in the ways they assimilate information and approach learning.1 Humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, yet each of these elements is generally external to the human. By 2032, these technologies likely will have converged and will be embedded or integrated into the individual, with connectivity literally at the tips of their fingers. The challenge for the Army will be to recognize the implications of this momentous shift and alter its learning methodologies, approach to training, and educational paradigm to account for these digital natives.

These New Humans will be accustomed to the use of artificial intelligence (AI) to augment and supplement decision-making in their everyday lives. AI will be responsible for keeping them on schedule, suggesting options for what and when to eat, delivering relevant news and information, and serving as an on-demand embedded expert. The Old Human learned to use these technologies and adapted their learning style to accommodate them, while the New Human will be born into them and their learning style will be a result of them. In 2018, 94% of Americans aged 18-29 owned some kind of smartphone.2 Compare that to 73% ownership for ages 50-64 and 46% for ages 65 and above, and it becomes clear that there is a strong disconnect between the age groups in terms of employing technology. Both of the leading software developers for smartphones include a built-in artificially intelligent digital assistant, and at the end of 2017, nearly half of all U.S. adults used a digital voice assistant in some way.3 Based on these trends, the technological wedge between New Humans and Old Humans likely will grow even wider.


New Humans will be information assimilators, where Old Humans were information gatherers. The techniques to acquire and gather information have evolved swiftly since the advent of the printing press, from user-intensive methods such as manual research, to a reduction in user involvement through Internet search engines. Now, narrow AI using natural language processing is transitioning to AI-enabled predictive learning. Through these AI-enabled virtual entities, New Humans will carry targeted, predictive, and continuous learning assistants with them. These assistants will observe, listen, and process everything of relevance to the learner and then deliver them information as necessary.
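As a toy illustration of what such an assistant’s core loop might look like, the sketch below filters an observed information stream against an interest profile learned from the Soldier’s behavior. All names, the profile structure, and the scoring rule are hypothetical assumptions, not a description of any fielded system:

```python
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    interests: dict  # topic -> weight, learned from observed behavior

def relevance(profile: LearnerProfile, item_topics: set) -> float:
    # Score an incoming item by how strongly it matches observed interests.
    return sum(profile.interests.get(t, 0.0) for t in item_topics)

def deliver(profile: LearnerProfile, stream, threshold=0.5):
    """Filter an observed information stream down to what the learner
    needs now -- the 'assimilator' step that replaces manual gathering.
    stream: list of (item, set_of_topics) pairs."""
    return [item for item, topics in stream
            if relevance(profile, topics) >= threshold]
```

A real assistant would learn the interest weights continuously from observation; here they are simply given.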

There is an abundance of research on the stark contrast between the three generations currently in the workforce: Baby Boomers, Generation X, and Millennials.4, 5 There will be similar fundamental differences between Old Humans and New Humans and their learning styles. The New Human likely will value experiential learning over traditional classroom learning.6 The convergence of mixed reality and advanced, high fidelity modeling and simulation will provide New Humans with immersive, experiential learning. For example, Soldiers learning military history and battlefield tactics will be able to experience it ubiquitously, observing how each facet of the battlefield affects the whole in real-time as opposed to reading about it sequentially. Soldiers in training could stand next to an avatar of General Patton and experience him explaining his command decisions firsthand.

There is an opportunity for the Army to adapt its education and training to these growing differences. The Army could, and eventually will need to, recruit, train, and develop New Humans by altering its current structure and recruitment programs. It will become imperative to conduct training with new tools, materials, and technologies that will allow Soldiers to become information assimilators. Additionally, the incorporation of experiential learning techniques will enrich Soldiers’ learning. There is an opportunity for the Army to pave the way and train its Soldiers with cutting-edge technology rather than trying to belatedly catch up to what is publicly available.

Evolution in Learning Technologies

If you enjoyed this post, please also watch Elliott Masie’s video presentation on Dynamic Readiness and Marc Prensky’s presentation on The Future of Learning from the Mad Scientist Learning in 2050 Conference

… and read The Mad Scientist Learning in 2050 Final Report.

1 Prensky, Marc, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018.

2 http://www.pewinternet.org/fact-sheet/mobile/

3 http://www.pewresearch.org/fact-tank/2017/12/12/nearly-half-of-americans-use-digital-voice-assistants-mostly-on-their-smartphones/

4 https://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Generational-issues-in-the-workplace.aspx

5 https://blogs.uco.edu/customizededucation/2018/01/16/generational-differences-in-the-workplace/

6 https://www.apa.org/monitor/2010/03/undergraduates.aspx

113. Connected Warfare

[Editor’s Note: As stated previously here in the Mad Scientist Laboratory, the nature of war remains inherently humanistic in the Future Operational Environment.  Today’s post by guest blogger COL James K. Greer (USA-Ret.) calls on us to stop envisioning Artificial Intelligence (AI) as a separate and distinct end state (oftentimes in competition with humanity) and to instead focus on preparing for future connected competitions and wars.]

The possibilities and challenges for future security, military operations, and warfare associated with advancements in AI are proposed and discussed with ever-increasing frequency, both within formal defense establishments and informally among national security professionals and stakeholders. One is confronted with a myriad of alternative futures, including everything from a humanity-killing variation of Terminator’s SkyNet to uncontrolled warfare à la WarGames to Deep Learning used to enhance existing military processes and operations. And of course, legal and ethical issues surrounding the military use of AI abound.

Source: tmrwedition.com

Yet in most discussions of the military applications of AI and its use in warfare, we have a blind spot in our thinking about technological progress toward the future. That blind spot is that we think about AI largely as disconnected from humans and the human brain. Rather than thinking about AI-enabled systems as connected to humans, we think about them as parallel processes. We talk about human-in-the-loop or human-on-the-loop largely in terms of control over autonomous systems, rather than comprehensive connection to and interaction with those systems.

But even while significant progress is being made in the development of AI, almost no attention is paid to the military implications of advances in human connectivity. Experiments have already been conducted connecting the human brain directly to the internet, which of course connects the human mind not only to the Internet of Things (IoT), but potentially to every computer and AI device in the world. Such connections will be enabled by a chip in the brain that provides connectivity while enabling humans to perform all normal functions, including all those associated with warfare (as envisioned by John Scalzi’s BrainPal in “Old Man’s War”).

Source: Grau et al.

Moreover, experiments in connecting human brains to each other are ongoing. Brain-to-brain connectivity has occurred in a controlled setting enabled by an internet connection. And, in experiments conducted to date, the brain of one human can be used to direct the weapons firing of another human, demonstrating applicability to future warfare. While experimentation in brain-to-internet and brain-to-brain connectivity is not as advanced as the development of AI, it is easy to see that the potential benefits, desirability, and frankly, market forces are likely to accelerate the human side of connectivity development past the AI side.

Source: tapestrysolutions.com

So, when contemplating the future of human activity, of which warfare is unfortunately a central component, we cannot and must not think of AI development and human development as separate, but rather as interconnected. Future warfare will be connected warfare, with implications we must now begin to consider. How would such connected warfare be conducted? How would mission command be exercised between man and machine? What are the leadership implications of the human leader’s brain being connected to those of their subordinates? How will humans manage information for decision-making without being completely overloaded and paralyzed by overwhelming amounts of data? What are the moral, ethical, and legal implications of connected humans in combat, as well as responsibility for the actions of machines to which they are connected? These and thousands of other questions and implications related to policy and operation must be considered.

The power of AI resides not just in that of the individual computer, but in the connection of each computer to literally millions, if not billions, of sensors, servers, computers, and smart devices employing thousands, if not millions, of software programs and apps. The consensus is that at some point the computing and analytic power of AI will surpass that of the individual. And therein lies a major flaw in our thinking about the future. The power of AI may surpass that of a human being, but it won’t surpass the learning, thinking, and decision-making power of connected human beings. When a future human is connected to the internet, that human will have access to the computing power of all AI. But when that same human is connected to several other humans (in a platoon), or hundreds (on a ship), or thousands (in multiple headquarters), then the power of AI will be exceeded by multiple orders of magnitude. The challenge, of course, is being able to think effectively under those circumstances, with your brain connected to all those sensors, computers, and other humans. This is what Ray Kurzweil terms “hybrid thinking.” Imagine how that is going to change every facet of human life, to include every aspect of warfare, and how everyone in our future defense establishment, uniformed or not, will have to be capable of hybrid thinking.

Source: Genetic Literacy Project

So, what will the military human bring to warfare that the AI-empowered computer won’t? Certainly, one of the major challenges with AI thus far has been its inability to demonstrate human intuition. AI can replicate some derivative, intuition-like tasks using what is now called “Artificial Intuition.” These are primarily the intuitive decisions that result from experience: AI generates this experience through some large number of iterations, which is how Google’s AlphaGo was able to beat the human world Go champion. Still, this is only a small part of the capacity of humans in terms not only of intuition, but of “insight,” what we call the “light bulb moment.” Humans will also bring emotional intelligence to connected warfare. Emotional intelligence, including aspects such as empathy, loyalty, and courage, is critical in the crucible of war and is not a capability that machines can provide the Force, not today and perhaps not ever.

Warfare in the future is not going to be conducted by machines, no matter how far AI advances. Warfare will instead be connected human to human, human to internet, and internet to machine in complex, global networks. We cannot know today how such warfare will be conducted or what characteristics and capabilities of future forces will be necessary for victory. What we can do is cease developing AI as if it were something separate and distinct from, and often envisioned in competition with, humanity and instead focus our endeavors and investments in preparing for future connected competitions and wars.

If you enjoyed this post, please also watch Dr. Alexander Kott‘s presentation The Network is the Robot, presented at the Mad Scientist Robotics, Artificial Intelligence, & Autonomy: Visioning Multi Domain Battle in 2030-2050 Conference, at the Georgia Tech Research Institute, 8-9 March 2017, in Atlanta, Georgia.

COL James K. Greer (USA-Ret.) is the Defense Threat Reduction Agency (DTRA) and Joint Improvised Threat Defeat Organization (JIDO) Integrator at the Combined Arms Command. A former cavalry officer, he served thirty years in the US Army, commanding at all levels from platoon through brigade. Jim served in operational units in CONUS, Germany, the Balkans, and the Middle East. He served in US Army Training and Doctrine Command (TRADOC), primarily focused on leader, capabilities, and doctrine development. He has significant concept development experience, co-writing concepts for Force XXI, Army After Next, and Army Transformation. Jim was the Army representative to the OSD Net Assessment 20XX Wargame Series, developing concepts for OSD and the Joint Staff. He is a former Director of the Army School of Advanced Military Studies (SAMS) and instructor in tactics at West Point. Jim is a veteran of six combat tours in Iraq, Afghanistan, and the Balkans, including serving as Chief of Staff of the Multi-National Security Transition Command – Iraq (MNSTC-I). Since leaving active duty, Jim has led the conduct of research for the Army Research Institute (ARI) and designed, developed, and delivered instruction in leadership, strategic foresight, design, and strategic and operational planning. Dr. Greer holds a Doctorate in Education, with his dissertation subject as US Army leader self-development. A graduate of the United States Military Academy, he has a Master’s Degree in Education, with a concentration in Psychological Counseling, as well as Master’s Degrees in National Security from the National War College and Operational Planning from the School of Advanced Military Studies.

111. AI Enhancing EI in War

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s)  decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]

Humans are susceptible to cognitive biases, and these biases sometimes result in catastrophic outcomes, particularly in the high-stress environment of war-time decision-making. Artificial Intelligence (AI) offers the possibility of mitigating the risk of negative outcomes in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and, as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like a staff officer, and that is built on human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.

Mission Command in the Army is the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and mass the effects of combat power.ii

The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically.iii In war, commanders must exercise these capacities at much quicker speeds in order to seize the initiative. The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.

AI Staff Officer

To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem solving domains than in those that require broad understanding.iv This means that, for now, humans continue to retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi AI’s near-term niche will continue to advance rapidly in narrow domains and become a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.

The battalion is a good example of an organization with which to visualize this framework. A machine-learning software system could be connected to different staff systems to analyze the data produced by each staff section as it executes its warfighting functions. This machine-learning software system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander’s assessments. Over time, this EI-based machine-learning software system could rank the quality of the staff officers’ judgements. The commander could then weigh the staff officers’ assessments against those officers’ track records of reliability and the raw data provided by the staff sections’ systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm to assess the “believability” of its employees’ judgements before making high-stakes, and sometimes time-critical, international financial decisions.vii Such a multi-layered machine-learning system applied to the battalion would also include an assessment of the commander’s own reliability, to maximize objectivity.
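Bridgewater’s actual algorithms are proprietary; the minimal Python sketch below only illustrates the general idea of believability-weighted aggregation of staff assessments. The class names, the smoothing rule, and the scoring scale are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class StaffOfficer:
    name: str
    correct: int = 0   # past assessments later judged correct
    total: int = 0     # total assessments that have been scored

    def believability(self) -> float:
        # Laplace smoothing keeps new officers away from 0/0 and extremes.
        return (self.correct + 1) / (self.total + 2)

def weighted_recommendation(assessments):
    """assessments: list of (officer, score in [0, 1]), where score is the
    officer's confidence that a proposed course of action will succeed.
    Returns the believability-weighted aggregate for the commander."""
    total_w = sum(o.believability() for o, _ in assessments)
    return sum(o.believability() * s for o, s in assessments) / total_w
```

With this weighting, an officer whose past assessments have usually proven correct pulls the aggregate toward his or her view, while an unreliable officer’s input counts for less; the same bookkeeping could be applied to the commander’s own record.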

Observations by the AI of multiple iterations of human behavioral patterns during simulations and real-world operations would improve its accuracy and enhance the trust between this type of AI system and its users. Commanders’ EI skills would be put front and center for scrutiny and could improve drastically once commanders know, with quantifiable evidence at any given time, the cognitive-bias shortcomings of their staffs. This assisted decision-making AI framework would also reinforce the commander’s intuition and decisions as it elevates the level of objectivity in decision-making.


The capacity to understand information broadly and conduct unsupervised learning remains the virtue of humans for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander since it supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer implies also providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.

Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and to prevent conclusions that result in an AI turning on humanity. His three principles are: 1) altruism: the AI’s only objective is the realization of human values, not its own; 2) humility: the AI pursues only human objectives but is initially uncertain about what those are; and 3) learning: the AI infers human objectives by observing human behavior, across all types of humans.ix
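Russell states these principles informally, not as code. As one hedged illustration of the second (humility) principle, the toy function below acts only when a single action is best under every hypothesis about what the human actually wants, and otherwise defers the decision to the human; the action names and utility values are invented for the example:

```python
def choose(action_values, hypotheses):
    """action_values: {action: {hypothesis: utility}} -- the utility of each
    action under each hypothesis about the human's true objective.
    Act only when one action is best under every hypothesis; otherwise
    defer to the human rather than guess (the 'humility' principle)."""
    best = None
    for action, vals in action_values.items():
        if all(vals[h] >= action_values[other][h]
               for h in hypotheses for other in action_values):
            best = action
    return best if best is not None else "defer_to_human"
```

When the hypotheses disagree about which action is safe, the system’s uncertainty about human objectives is exactly what keeps it from acting unilaterally.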

Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.

Making AI Work

The potential opportunities and pitfalls of employing AI in decision-making are abundant. Apart from the obvious danger of this type of system being hacked, the possibility of the AI’s machine-learning algorithms harboring biases inconsistent with the values of the unit employing it is real.

The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust and integrate AI assessments into their natural decision-making process and make them part of their intuitive calculus. In this way, they will have ready access to objective analyses of their units’ potential biases, enhancing their own EI, and will be able to overcome those biases to accomplish their mission.

If you enjoyed this post, please also read:

An Appropriate Level of Trust…

Takeaways Learned about the Future of the AI Battlefield

Bias and Machine Learning

Man-Machine Rules

MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.

i United States Army. “ADRP 5-0: The Operations Process.” Headquarters, Dept. of the Army, 2012, pp. 1-1.

ii Ibid. pp. 1-1 – 1-3.

iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries | English, Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.

iv Trent, Stoney, and Scott Lathrop. “A Primer on Artificial Intelligence for Military Leaders.” Small Wars Journal, 2018, smallwarsjournal.com/index.php/jrnl/art/primer-artificial-intelligence-military-leaders.

v Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2018.

vi Evans, Hayley. “Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings.” Lawfare, 2018, https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings.

vii Dalio, Ray. Principles. Simon and Schuster, 2017.

viii Trent and Lathrop.

ix Russell, Stuart, director. Three Principles for Creating Safer AI. TED: Ideas Worth Spreading, 2017, www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai.

110. Future Jobs and Skillsets

[Editor’s Note:  On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC.  Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces.  Today’s post is extracted from this conference’s final report (more of which is addressed at the bottom of this post).]

The U.S. Army currently has more than 150 Military Occupational Specialties (MOSs), each requiring a Soldier to learn unique tasks, skills, and knowledge. The emergence of a number of new technologies – drones, Artificial Intelligence (AI), autonomy, immersive mixed reality, big data storage and analytics, etc. – coupled with the changing character of future warfare means that many of these MOSs will need to change, while others will need to be created. This already has been seen in the wider U.S. and global economy, where the growth of internet services, smartphones, social media, and cloud technology over the last ten years has introduced a host of new occupations that previously did not exist. The future will further define and compel the creation of new jobs and skillsets that have not yet been articulated or even imagined. Today’s hobbies (e.g., drones) and recreational activities (e.g., Minecraft/Fortnite) that potential recruits engage in every day could become MOSs or Additional Skill Identifiers (ASIs) of the future.

Training eighty thousand new Recruits a year on existing MOSs is a colossal undertaking.  A great expansion in the jobs and skillsets needed to field a highly capable future Army, replete with modified or new MOSs, adds a considerable burden to the Army’s learning systems and institutions. These new requirements, however, will almost certainly present an opportunity for the Army to capitalize on intelligent tutors, personalized learning, and immersive learning to lessen costs and save time in Soldier and Leader development.

The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before them. Marc Prensky, the educational writer and speaker who coined the term digital native, asserts this “New Human” will stand in stark contrast to the “Old Human” in the ways they learn and approach learning.1 Humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, yet each of these elements is generally external to the human. By 2032, these technologies likely will have converged and will be embedded or integrated into the individual, with connectivity literally at the tips of their fingers.

Some of the newly required skills may be inherent in the next generation(s) of these Recruits. Many of the games, drones, and other everyday technologies that are already or soon to be very common – narrow AI, app development and general programming, and smart devices – will yield a variety of intrinsic skills that Recruits will have prior to entering the Army. Just as we no longer train Soldiers on how to use a computer, games like Fortnite, with no formal relationship with the military, will provide players with militarily-useful skills such as communications, resource management, foraging, force structure management, and fortification and structure building, all while attempting to survive against persistent attack. Due to these trends, Recruits may come into the Army with fundamental technical skills and baseline military thinking attributes that flatten the learning curve for Initial Entry Training (IET).2

While these new Recruits may have a set of some required skills, there will still be a premium placed on premier skillsets in fields such as AI and machine learning, robotics, big data management, and quantum information sciences. Due to the high demand for these skillsets, the Army will have to compete for talent with private industry, battling them on compensation, benefits, perks, and a less restrictive work environment – limited to no dress code, flexible schedule, and freedom of action. In light of this, the Army may have to consider adjusting or relaxing its current recruitment processes, business practices, and force structuring to ensure it is able to attract and retain expertise. It also may have to reconsider how it adapts and utilizes its civilian workforce to undertake these types of tasks in new and creative ways.

The Recruit of 2050 will need to be engaged much differently than today. Potential Recruits may not want to be contacted by traditional methods3 – phone calls, in person, job fairs – but instead likely will prefer to “meet” digitally first. Recruiters already are seeing this today. In order to improve recruiting efforts, the Army may need to look for Recruits in non-traditional areas such as competitive online gaming. There is an opportunity for the Army to use AI to identify Recruit commonalities and improve its targeted advertisements in the digital realm to entice specific groups who have otherwise been overlooked. The Army is already exploring this avenue of approach through the formation of an eSports team that will engage young potential Recruits and attempt to normalize their view of Soldiers and the Army, making them both more relatable and enticing.4 This presents a broader opportunity to close the chasm that exists between civilians and the military.

The overall dynamic landscape of the future economy, the evolving labor market, and the changing character of future warfare will create an inflection point for the Army to re-evaluate longstanding recruitment strategies, workplace standards, and learning institutions and programs. This will bring about an opportunity for the Army to expand, refine, and realign its collection of skillsets and MOSs, making Soldiers more adapted for future battles, while at the same time challenging the Army to remain prominent in attracting premier talent in a highly competitive environment.

If you enjoyed this extract, please read the comprehensive Learning in 2050 Conference Final Report

… and see our TRADOC 2028 blog post.

1 Prensky, Mark, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018.

2 Schatz, Sarah, Mad Scientist Conference: Learning in 2050, Georgetown University, 8 August 2018.

3 Davies, Hans, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018.

4 Garland, Chad, Uncle Sam wants you — to play video games for the US Army, Stars and Stripes, 9 November 2018, https://www.stripes.com/news/uncle-sam-wants-you-to-play-video-games-for-the-us-army-1.555885.

109. Classic Planning Holism as a Basis for Megacity Strategy

[Editor’s Note: Recent operations against the Islamic State of Iraq and the Levant (ISIL) to liberate Mosul illustrate the challenges of warfighting in urban environments. In The Army Vision, GEN Mark A. Milley, Chief of Staff of the Army, and Dr. Mark T. Esper, Secretary of the Army, state that the U.S. Army must “Focus training on high-intensity conflict, with emphasis on operating in dense urban terrain…” Returning Mad Scientist Laboratory guest blogger Dr. Nir Buras leverages his expertise as an urban planner to propose a holistic approach to military operations in Megacities — Enjoy!]

A recent study identified 34 megacities, defined as having populations of over 10 million inhabitants.1 The scale, complexity, and dense populations of megacities, and their need for security, energy, water conservation, resource distribution, waste management, disaster management, construction, and transportation make them a challenging security environment.2

Urban warfare experience from Stalingrad to Gaza makes clear that a doctrinal shift must take place.

Urban terrain, the “great equalizer,” diminishes an attacker’s advantages in firepower and mobility.3 Recent experiences in Baghdad, Mosul, and Aleppo, as well as historically in Aachen, Seoul, Hue, and Ramadi, shift the perspective from problem solving to the critical holistic thinking skills and decision-making required in ambiguous environments.4 For an Army, rule number one is to stay out of cities. If that is not possible, the second rule of warfare is to manipulate the environment.

The Strategic Studies Group finds that a megacity is the most challenging environment for a land force to operate in.5 But currently, the U.S. Army is incapable of operating within the megacity.6 The intellectual center of gravity for megacity operations does not yet exist; it is open to whoever chooses to seize it.7

Cities are holistic entities, but holism is not about brown food and Birkenstocks. Holism is a discipline for managing whole systems, which are more than the sum of their parts. Where problem-solving methodology drags the problem along with it, producing negative synergies (new problems), the holistic methodology works from aspiration and produces positive synergies, many of them unforeseen. The aspiration for megacity operations is control, not conquest. The cure must not be worse than the disease.

The holistic approach to combat, which fights the urban context rather than the enemy, means reconfiguring the environment for operational purposes. Its goal is to reform antagonism to U.S. interests by controlling and reshaping the city so that it becomes self-ruling and sustainable over the long term. This would facilitate urban, political, and economic homeostasis in alignment with U.S. interests and bequeath a legacy of homeostatic urban balance: a “Pax Americana.”8 Paradoxically, it may also be the most cost-effective approach.

Megacities are inherently unsustainable and need to be fixed, war or not. Classic planning would break megacities down into environmentally controllable, human-scaled, walkable areas of 30,000, 120,000, and 500,000 persons, separated by swathes of countryside. This continuous network of rural, agricultural, and natural areas would be at least one mile deep, and would be where transport, major infrastructure, highways, campuses, large-scale sports venues, waste dumps, and even mines might be located.

This process is naturally under way in Detroit, is historically documented in Rome, and can be witnessed at Angkor Wat. While the greatest long-term beneficiaries would be the populace, the military benefits are obvious. The Army would simply accelerate the process.

The idea is to radically change the fighting environment while bolstering the population and its institutions to sympathize with U.S. goals: “divide and conquer,” followed by a sustainable legacy. Notably, operations within a megacity require an understanding of a city’s normal procedures and daily operations beforehand.9 The proposed framework for this is the long-term classic planning of cities.

The application of classic planning to megacity operations follows four steps: Disrupt, Contain, Stabilize, and Transfer.

1. DISRUPT urban fabric with swathes of country at least one mile deep, containing a continuous network of rural, agricultural, natural, and water areas where transport, major infrastructure, highways, campuses, large-scale sports venues, etc., are located. Within the urban fabric, structures would be removed down to virgin ground, and agriculture and nature reinstated there. Solutions will need to be developed for the debris, as well as for buried infrastructure. The block layout may remain, in whole or in part, for agricultural and forest access. Soil bacteria may be used to rapidly consume toxic and hazardous materials. All of this has to be thoroughly planned in advance of a conflict.

2. CONTAIN urban fabric to a one-hour walk (two hours maximum), 2-4 miles from edge to edge, both in existing fabric and in new settlements for relocated persons.

3. STABILIZE neighborhoods, quarters, and city centers hierarchically, and densify them up to 6-8 floors, according to the classic planning model of standard fabric buildings. Buildings taller than 6-8 stories may be placed on the periphery, if they are necessary at all. Blocks, streets, plazas, and parks are laid out in appropriate dimensions. Proven, traditional designs are used for buildings at least 85% of the time. Stabilize communities through leadership, mentoring, the establishment of markets, industry, sources of income, and community institutions.

4. TRANSFER displaced communities to new urban fabric built on classic planning principles as developed after the Haiti Earthquake; and transfer air rights from land reclaimed for country to urban fabric centers (midrise densification) and peripheries (taller buildings as necessary). Transfer community management back to residents as soon as possible (1 year). Transfer loyalty; build community; develop education, mentoring, and training; and use civilian commercial work according to specifically developed management models for construction, economic, and urban management.10
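The numbers in these steps imply a roughly constant target density. A back-of-envelope check, assuming square footprints and edge lengths drawn from the 2-4 mile walkable range in step 2 (the pairing of each population tier to an edge length is an illustrative assumption, not stated in the text):

```python
# Back-of-envelope densities implied by the classic-planning tiers above.
# Assumptions (not from the source): square footprints, with edge lengths
# chosen to fall within the walkable dimensions given in step 2.
tiers = {
    30_000: 1.0,   # neighborhood-scale area, assumed 1-mile edge
    120_000: 2.0,  # quarter-scale area, assumed 2-mile edge
    500_000: 4.0,  # city-scale area, assumed 4-mile edge
}

def implied_density(population: int, edge_miles: float) -> float:
    """Persons per square mile for a square area with the given edge length."""
    return population / (edge_miles ** 2)

for pop, edge in tiers.items():
    print(f"{pop:>7,} persons over {edge:.0f}x{edge:.0f} mi: "
          f"{implied_density(pop, edge):,.0f} per sq mi")
```

Under these assumptions, all three tiers come out at roughly 30,000 persons per square mile, consistent with the mid-rise (6-8 floor) fabric prescribed in step 3.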

To adopt a holistic approach to the megacity, the U.S. Army must engage in a comprehensive understanding of the environment prior to the arrival of forces, and plan the shaping of the environment, focusing on its physical attributes for both the benefit of the city and the Army. This holistic approach may generate outcomes similar to the type of synergies stimulated by the Marshall Plan after World War II.

If you enjoyed this post, please listen to:

Tomorrow’s Urban Battlefield podcast with Dr. Russell Glenn, hosted by our colleagues at the Modern War Institute.

… and also read the following:

– Mad Scientist Megacities and Dense Urban Areas Initiative in 2025 and Beyond Conference Final Report

– Where none have gone before: Operational and Strategic Perspectives on Multi-Domain Operations in Mega Cities Conference Proceedings

– My City is Smarter than Yours!

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design, and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort Worth; the Washington, DC Dulles Metro line; and work on the U.S. Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning shapes architecture. Drawing on a decade of applying in practice the classical method he learned in post-doctoral studies, his book, The Art of Classic Planning (Harvard University Press, 2019), presents Classic Planning as a path forward for homeostatic, durable urbanism.

1 “Demographia World Urban Areas, 11th Annual Edition 2015,” Demographia, 2-20, September 18, 2015, accessed December 16, 2015, http://www.demographia.com/db-worldua.pdf. 67% of large urban areas (500,000 and higher) are located in Asia and Africa.

2 Jack A. Goldstone, “The New Population Bomb: The Four Megatrends That Will Change the World,” Foreign Affairs, (January/February 2010) 38-39; National Intelligence Council, Global Trends 2030: Alternative Worlds (Washington, DC: National Intelligence Council, 2012), 1. Quoted in Kaune.

3 ARCIC, Unified Quest Executive Report 2014 (Fort Eustis, VA: US Army Capabilities Integration Center, 2014), 1. Quoted in Kaune.

4 Harris et al., Megacities and the US Army, 22; Louis A. DiMarco, Concrete Hell: Urban Warfare from Stalingrad to Iraq (Oxford, UK: Osprey Publishing, 2012), 214-215. Quoted in Kaune.

5 Harris et al., Megacities and the US Army, 21.

6 Kaune.

7 David Betz, “Peering into the Past and Future of Urban Warfare in Israel,” War on the Rocks, December 17, 2015, accessed December 17, 2015, http://warontherocks.com/2015/12/peering-into-the-past-and-future-of-urban-warfare-in-israel/. Quoted in Kaune.

8 Tom R. Przybelski, “Hybrid War: The Gap in the Range of Military Operations” (Newport, RI: Naval War College, Joint Military Operations Department), iii.

9 Kaune.

10 Michael Evans, “The Case Against Megacities,” Parameters 45, no. 1, (Spring 2015): 36. Quoted in Kaune.

108. The Ghost of Mad Scientist Past!

Editor’s Note: Mad Scientist Laboratory is pleased to provide for your holiday reading pleasure our Anthology of the “best of” blog posts from 2018! This Anthology enables you to re-visit our futures-oriented assessments, including ideas about the Operational Environment, technology trends, innovation, and our conference findings. Each article includes a wealth of links to interesting content, including Mad Scientist videos, podcasts, conference proceedings, and presentations.

And if you have not already done so, please consider subscribing to our blog site to stay abreast of upcoming Mad Scientist Conferences, on-line events, and writing exercises in 2019 by receiving it automatically in your email inbox twice weekly —  go to “SUBSCRIBE” on the right-hand side of your screen (or scroll down to the bottom if viewing the site on your PED), enter your commercial email address (i.e., non-DoD) in the “Email Address” text box, then select the “Confirm Follow” blue button in the subsequent email you receive. In doing so, you’ll stay connected with all things Mad Scientist!

Mad Scientist Laboratory wishes all of our readers the Happiest of Holiday Seasons!



107. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our November edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. Is China a global leader in research and development? China Power Project, Center for Strategic and International Studies (CSIS), 2018. 

The United States Army’s concept of Multi-Domain Operations 2028 describes Russia and China as strategic competitors working to synthesize emerging technologies, such as artificial intelligence, hypersonics, machine learning, nanotechnology, and robotics, with their analysis of military doctrine and operations. The Future OE’s Era of Contested Equality (i.e., 2035 through 2050) describes China’s ascent to a peer competitor and our primary pacing threat. The fuel for these innovations is research and development funding from the Chinese Government and businesses.

CSIS’s China Power Project recently published an assessment of the rise in China’s research and development funding. There are three key facts that demonstrate the remarkable increase in funding and planning that will continue to drive Chinese innovation. First, “China’s R&D expenditure witnessed an almost 30-fold increase from 1991 to 2015 – from $13 billion to $376 billion. Presently, China spends more on R&D than Japan, Germany, and South Korea combined, and only trails the United States in terms of gross expenditure. According to some estimates, China will overtake the US as the top R&D spender by 2020.”
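As a quick sanity check on the quoted CSIS figures, the growth they imply can be computed directly (a simple arithmetic sketch; the implied annual growth rate is our calculation, not CSIS’s):

```python
# CSIS figures quoted above: China's R&D spending grew from $13B (1991)
# to $376B (2015), described as "an almost 30-fold increase."
start, end = 13e9, 376e9
fold_increase = end / start                        # ~28.9-fold
years = 2015 - 1991
annual_growth = (end / start) ** (1 / years) - 1   # implied compound rate

print(f"{fold_increase:.1f}-fold over {years} years "
      f"(~{annual_growth:.1%} per year)")
```

The figures check out: a 28.9-fold rise over 24 years, or roughly 15% compound growth per year.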

Second, businesses globally are funding the majority of research and development activities. China is now following this trend, with its “businesses financing 74.7 percent ($282 billion) of the country’s gross expenditure on R&D in 2015.” Tracking the origin of this funding is difficult, as the Chinese government also operates a number of State-Owned Entities. This could prove to be a strength for the Chinese Army’s access to commercial innovation.

China’s Micius quantum satellite, part of their Quantum Experiments at Space Scale (QUESS) program

Third, the Chinese government is funding cutting-edge technologies where it seeks to be a global leader. “Expenditures by the Chinese government stood at 16.2 percent of total R&D usage in 2015. This ratio is similar to that of advanced economies, such as the United States (11.2 percent). Government-driven expenditure has contributed to the development of the China National Space Administration. The Tiangong-2 space station and the ‘Micius’ quantum satellite – the first of its kind – are just two such examples.”

2. Microsoft will give the U.S. military access to ‘all the technology we create’, by Samantha Masunaga, Los Angeles Times (on-line), 1 December 2018.

Success in the future OE relies on many key assumptions. One such assumption is that the innovation cycle has flipped. Where the DoD used to drive technological innovation in this country, we now see private industry (namely Silicon Valley) as the driving force with the Army consuming products and transitioning technology for military use. If this system is to work, as the assumption implies, the Army must be able to work easily with the country’s leading technology companies.  Microsoft’s President Brad Smith stated recently that his company will “provide the U.S. military with access to the best technology … all the technology we create. Full stop.”

This is significant to the DoD for two reasons: It gives the DoD, and thus the Army, access to one of the leading technology developers in the world (with cloud computing and AI solutions), and it highlights that the assumptions we operate under are never guaranteed. Most recently, Google made the decision not to renew its contract with the DoD to provide AI support to Project Maven – a decision motivated, in part, by employee backlash.

Our near-peer competitors do not appear to be experiencing similar tensions or friction between their respective governments and private industry.  China’s President Xi is leveraging private sector advances for military applications via a “whole of nation” strategy, leading China’s Central Military-Civil Fusion Development Commission to address priorities including intelligent unmanned systems, biology and cross-disciplinary technologies, and quantum technologies.  Russia seeks to generate innovation by harnessing its defense industries with the nation’s military, civilian, and academic expertise at their Era Military Innovation Technopark to concentrate on advances in “information and telecommunication systems, artificial intelligence, robotic complexes, supercomputers, technical vision and pattern recognition, information security, nanotechnology and nanomaterials, energy tech and technology life support cycle, as well as bioengineering, biosynthetic, and biosensor technologies.”

Microsoft openly declaring its willingness to work seamlessly with the DoD is a substantial step forward toward success in the new innovation cycle and success in the future OE.

3. The Truth About Killer Robots, directed by Maxim Pozdorovkin, Third Party Films, premiered on HBO on 26 November 2018.

This documentary film could have been a highly informative piece on the disruptive potential posed by robotics and autonomous systems in future warfare. While it presents a jumble of interesting anecdotes addressing the societal changes wrought by the increased prevalence of autonomous systems, it fails to deliver on its title. Indeed, robot lethality is only tangentially addressed in a few of the documentary’s storylines: the accidental death of a Volkswagen factory worker crushed by autonomous machinery; the first vehicular death of a driver, engrossed in a Harry Potter movie while sitting behind the wheel of a Tesla driving autonomously in Florida; and the use of a tele-operated device by the Dallas police to neutralize a mass shooter barricaded inside a building.

Russian unmanned, tele-operated BMP-3 shooting its 30mm cannon on a test range / Zvezda Broadcasting via YouTube

Given his choice of title, Mr. Pozdorovkin would have been better served in interviewing activists from the Campaign to Stop Killer Robots and participants at the Convention on Certain Conventional Weapons (CCW) who are negotiating in good faith to restrict the proliferation of lethal autonomy. A casual search of the Internet reveals a number of relevant video topics, ranging from the latest Russian advances in unmanned Ground Combat Vehicles (GCV) to a truly dystopian vision of swarming killer robots.

Instead, Mr. Pozdorovkin misleads his viewers by presenting a number of creepy autonomy outliers (including a sad Chinese engineer who designed and then married his sexbot because of his inability to attract a living female mate, given China’s disproportionately male population resulting from its former One-Child Policy); employing a sinister soundtrack and facial-recognition special effects; and using a number of vapid androids (e.g., Japan’s Kodomoroid) to deliver contrived narration hyping a future where the distinction between humanity and machines is blurred. Where are Siskel and Ebert when you need ’em?

4. Walmart will soon use hundreds of AI robot janitors to scrub the floors of U.S. stores, by Tom Huddleston Jr., CNBC, 5 December 2018.

The retail superpower Walmart is employing hundreds of robots in stores across the country, starting next month. These floor-scrubbing janitor robots will keep the stores’ floors immaculate using autonomous navigation that will be able to sense both people and obstacles.

The introduction of these autonomous cleaners will not be wholly disruptive to Walmart’s workforce operations, as they are only supplanting a task that is onerous for humans. But is this just the beginning? As humans’ comfort levels grow with the robots, will there then be an introduction of robot stocking, not unlike what is happening with Amazon? Will robots soon handle routine exchanges? And what of the displaced or under-employed workers resulting from this proliferation of autonomy, the widening economic gap between the haves and the have-nots, and the potential for social instability from neo-luddite movements in the Future OE?   Additionally, as these robots become increasingly conspicuous throughout our everyday lives in retail, food service, and many other areas, nefarious actors could hijack them or subvert them for terroristic, criminal, or generally malevolent uses.

The introduction of floor-cleaning robots at Walmart has larger implications than one might think. Robots are being considered for all the dull, dirty, and dangerous tasks assigned to the Army and the larger Department of Defense. The autonomous technology behind robots in Walmart today could have implications for our Soldiers at their home stations or on the battlefield of the future, conducting refueling and resupply runs, battlefield recovery, medevac, and other logistical and sustainment tasks.

5. What our science fiction says about us, by Tom Cassauwers, BBC News, 3 December 2018.

“Right now the most interesting science fiction is produced in all sorts of non-traditional places,” says Anindita Banerjee, Associate Professor at Cornell University, whose research focuses on global sci-fi. Sci-fi and storytelling enable us to break through our contemporary, mainstream echo chamber of parochialism to depict future technological possibilities and imagined worlds, political situations, and conflicts. Unsurprisingly, different visions of the future imagining alternative realities are being written around the world – in China, Russia, and Africa. This rise of global science fiction challenges how we think about the evolution of the genre. Historically, our occidental bias led us to believe that sci-fi was spreading from Western centers out to the rest of the world, blinding us to the fact that other regions also have rich histories of sci-fi depicting future possibilities from their own cultural perspectives. Chinese science fiction has boomed in recent years, with standout books like Cixin Liu’s The Three-Body Problem. Afrofuturism is also on the rise since the release of the blockbuster Black Panther.

The Mad Scientist Initiative uses Crowdsourcing and Storytelling as two innovative tools to help us envision future possibilities and inform the OE through 2050. Strategic lessons learned from looking at the Future OE show us that the world of tomorrow will be far more challenging and dynamic. In our FY17 Science Fiction Writing Contest, we asked our community of action to describe Warfare in 2030-2050. The stories submitted showed that virtually every new technology is connected to and intersecting with other new technologies and advances. The future OE presents us with a combination of new technologies and societal changes that will intensify long-standing international rivalries, create new security dynamics, and foster instability as well as opportunities. Sci-fi is more than a global reflection on resistance; non-Western science fiction also taps into a worldwide consciousness, helping it conquer audiences beyond its home markets.

6. NVIDIA Invents AI Interactive Graphics, Nvidia.com, 3 December 2018.

A significant barrier to the modeling and simulation of dense urban environments has been the complexity of these areas in terms of building, vehicle, pedestrian, and foliage density. Megacities and their surrounding environments have such a massive concentration of entities that it has been a daunting task to re-create them digitally. Nvidia has recently developed a first-step solution to this ongoing problem. Using neural networks and generative models, the developers are able to train AI to create realistic urban environments based on real-world video.

As Nvidia admits, “One of the main obstacles developers face when creating virtual worlds, whether for game development, telepresence, or other applications is that creating the content is expensive. This method allows artists and developers to create at a much lower cost, by using AI that learns from the real world.” This process could significantly compress the development timeline, and while it wouldn’t address the other dimensions of urban operations — those entities that are underground or inside buildings (multi-floor and multi-room) — it would allow the Army to focus more resources on those areas. The Chief of Staff of the Army has made readiness his #1 priority and stated, “In the future, I can say with very high degrees of confidence, the American Army is probably going to be fighting in urban areas,” and the Army “need[s] to man, organize, train and equip the force for operations in urban areas, highly dense urban areas.”1 Nvidia’s solution could enable and empower the force to meet that goal.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future OE, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

1 “Commentary: The missing link to preparing for military operations in megacities and dense urban areas,” by Claudia ElDib and John Spencer, Army Times, 20 July 2018, https://www.armytimes.com/opinion/commentary/2018/07/20/commentary-the-missing-link-to-preparing-for-military-operations-in-megacities-and-dense-urban-areas/.

106. Man-Machine Rules

[Editor’s Note:  Mad Scientist Laboratory is pleased to present the first of two guest blog posts by Dr. Nir Buras.  In today’s post, he makes the compelling case for the establishment of man-machine rules.  Given the vast technological leaps we’ve made during the past two centuries (with associated societal disruptions), and the potential game changing technological innovations predicted through the middle of this century, we would do well to consider Dr. Buras’ recommended list of nine rules — developed for applicability to all technologies, from humankind’s first Paleolithic hand axe to the future’s much predicted, but long-awaited strong Artificial Intelligence (AI).]

Two hundred years of massive collateral impacts by technology have brought to the forefront of society’s consciousness the idea that some sort of rules for man-machine interaction are necessary, similar to the rules in place for gun safety, nuclear power, and biological agents. But where their physical effects are clear to see, the power of computing is veiled in virtuality and anthropomorphization. It appears harmless, if not familiar, and it often has a virtuous appearance.

Avid mathematician Ada Augusta Lovelace is often called the first computer programmer

Computing originated in the punched cards of Jacquard looms early in the 19th century. Today it carries the promise of a cloud of electrons from which we make our Emperor’s New Clothes. As far back as 1842, the brilliant mathematician Ada Augusta, Countess of Lovelace (1815-1852), foresaw the potential of computers. A protégé and associate of Charles Babbage (1791-1871), conceptual originator of the programmable digital computer, she realized the “almost incalculable” ultimate potential of such machines. She also recognized that, as in all extensions of human power or knowledge, “collateral influences” occur.1

AI presents us with such “collateral influences.”2  The question is not whether machine systems can mimic human abilities and nature, but when. Will the world become dependent on ungoverned algorithms?3  Should there be limits to mankind’s connection to machines? As concerns mount, well-meaning politicians, government officials, and some in the field are trying to forge ethical guidelines to address the collateral challenges of data use, robotics, and AI.4

A Hippocratic Oath of AI?

This cover of Asimov’s I, Robot illustrates the story “Runaround”, the first to list all Three Laws of Robotics.

Asimov’s Three Laws of Robotics are merely a literary ploy to animate his storylines.5 In the real world, Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft founded www.partnershiponai.org6 to ensure “… the safety and trustworthiness of AI technologies, the fairness and transparency of systems.” Data scientists from tech companies, governments, and nonprofits gathered to draft a voluntary digital charter for their profession.7 Oren Etzioni, CEO of the Allen Institute for AI and a professor in the University of Washington’s Computer Science Department, has proposed a Hippocratic Oath for AI.

But such codes are composed of hard-to-enforce terms and vague goals, such as using AI “responsibly and ethically, with the aim of reducing bias and discrimination.” They pay lip service to privacy and human priority over machines. They appear to sugarcoat a culture which passes the buck to the lowliest Soldier.8

We know that good intentions are inadequate when enforcing confidentiality. Well-meant but unenforceable ideas don’t meet business standards.  It is unlikely that techies and their bosses, caught up in the magic of coding, will shepherd society through the challenges of the petabyte AI world.9  Vague principles, underwriting a non-binding code, cannot counter the cynical drive for profit.10

Indeed, in an area that lacks authorities or legislation to enforce rules, the Association for Computing Machinery (ACM) is itself backpedaling from its own Code of Ethics and Professional Conduct. Their document weakly defines notions of “public good” and “prioritizing the least advantaged.”11 Microsoft’s President Brad Smith admits that his company wouldn’t expect customers of its services to meet even these standards.

In the wake of the Cambridge Analytica scandal, it is clear that coders are not morally superior to other people and that voluntary, unenforceable Codes and Oaths are inadequate.12  Programming and algorithms clearly reflect ethical, philosophical, and moral positions.13  It is false to assume that the so-called “openness” trait of programmers reflects a broad mindfulness.  There is nothing heroic about “disruption for disruption’s sake” or hiding behind “black box computing.”14  The future cannot be left up to an adolescent-centric culture in an economic system that rests on greed.15  The society that adopts “Electronic personhood” deserves it.

Machines are Machines, People are People

After 200 years of the technology tail wagging the humanity dog, it is apparent now that we are replaying history – and don’t know it. Most human cultures have been intensively engaged with technology since before the Iron Age 3,000 years ago. We have been keenly aware of technology’s collateral effects mostly since the Industrial Revolution, but have not yet created general rules for how we want machines to impact individuals and society. The blurring of reality and virtuality that AI brings to the table might prompt us to do so.

Distinctions between the real and the virtual must be maintained if the behavior of even the most sophisticated computing machines and robots is to be captured by legal systems. Nothing in the virtual world should be considered real, any more than we believe that the hallucinations of a drunk or drugged person are real.

The simplest way to maintain the distinction is to remember that the real IS, and the virtual ISN’T, and that virtual mimesis is produced by machines. Lovelace reminded us that machines are just machines. While in a dark, distant future granting machines personhood might lead to the collapse of humanity, Harari’s Homo Deus warns us that AI, robotics, and automation are already quickly bringing the economic value of humans to zero.16

From the start of civilization, tools and machines have been used to reduce human drudge labor and increase production efficiency. But while tools and machines obviate physical aspects of human work in the context of the production of goods or processing information, they in no way affect the truth of humans as sentient and emotional living beings, nor the value of transactions among them.

Microsoft’s Tay AI Chatter Bot

The man-machine line is further blurred by our anthropomorphizing of machinery, computing, and programming. We speak of machines in terms of human traits and make programming analogous to human behavior. But there is nothing amusing about GIGO experiments like MIT’s psychotic bot Norman or Microsoft’s fascist Tay.17 Technologists who fall into the trap of believing that AI systems can make decisions are like children playing with dolls, marveling that “their dolly is speaking.”

Machines don’t make decisions. Humans do. They may accept suggestions made by machines, and when they do, they are responsible for the decisions made. People are and must be held accountable, especially those hiding behind machines. The Holocaust taught us that one can never say, “I was just following orders.”

Nothing less than enforceable operational rules is required for any technical activity, including programming. It is especially important for tech companies, since evidence suggests that they take ethical questions to heart only under direct threats to their balance sheets.18

When virtuality offers experiences that humans perceive as real, the outcomes are the responsibility of the creators and distributors, no less than tobacco companies selling cigarettes, or pharmaceutical companies and cartels selling addictive drugs. Individuals do not have the right to risk the well-being of others to satisfy their need to comply with clichés such as “innovation” and “disruption.”

Nuclear, chemical, biological, gun, aviation, machine, and automobile safety rules do not rely on human nature. They are based on technical rules and procedures. They are enforceable and moral responsibility is typically carried by the hierarchies of their organizations.19

As we master artificial intelligence, human intelligence must take charge.20 The highest values known to mankind remain human life and the qualities and quantities necessary for the best individual life experience.21 For the transactions and transformations in which technology assists, we need simple operational rules to regulate the actions and manners of individuals. Moving the focus to human interactions empowers individuals and society.

Man-Machine Rules

Man-Machine Rules should address any tool or machine ever made or to be made. They would be equally applicable to any technology of any period, from the first flaked stone to the ultimate predictive “emotion machines.” They would be adjudicated by common law.22

1. All material transformations and human transactions are to be conducted by humans.

2. Humans may directly employ hand/desktop/workstation devices in the above.

3. At all times, an individual human is responsible for the activity of any machine or program.

4. Responsibility for errors, omissions, negligence, mischief, or criminal-like activity is shared by every person in the organizational hierarchical chain, from the lowliest coder or operator, to the CEO of the organization, and its last shareholder.

5. Any person can shut off any machine at any time.

6. All computing is visible to anyone [No Black Box].

7. Personal Data are things. They belong to the individual who owns them, and any use of them by a third-party requires permission and compensation.

8. Technology must age before common use, until an Appropriate Technology is selected.

9. Disputes must be adjudicated according to Common Law.

Machines are here to help and advise humans, not replace them, and humans may exhibit a spectrum of responses to them. Some may ignore a robot’s advice and put others at risk. Some may follow recommendations to the point of becoming a zombie. But either way, Man-Machine Rules are based on and meant to support free, individual human choices.
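The rules above describe behavior that software itself can be built to respect. As a purely illustrative sketch (all class and method names here are hypothetical, invented for this example), the following Python fragment shows what Rules 1, 3, 5, and 6 might look like in code: the machine only suggests, a named human approves or rejects each action, any person can shut the machine off at any time, and every event is written to an audit log that is open for inspection.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Suggestion:
    action: str
    rationale: str

@dataclass
class AuditEntry:
    timestamp: str
    operator: str   # the accountable human (Rule 3)
    event: str

class AdvisoryMachine:
    """A machine that advises; a named human decides (Rules 1, 3, 5, 6)."""

    def __init__(self) -> None:
        self.enabled = True
        self.audit_log: list[AuditEntry] = []   # no black box: open to anyone (Rule 6)

    def _log(self, operator: str, event: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(AuditEntry(stamp, operator, event))

    def suggest(self, operator: str, context: str) -> Suggestion:
        if not self.enabled:
            raise RuntimeError("machine is shut off")
        s = Suggestion(action=f"recommended response to {context}",
                       rationale="model output; advisory only")
        self._log(operator, f"suggested: {s.action}")
        return s

    def execute(self, operator: str, s: Suggestion, approved: bool) -> bool:
        # Nothing happens without an explicit human decision (Rule 1).
        if not self.enabled or not approved:
            self._log(operator, f"rejected: {s.action}")
            return False
        self._log(operator, f"approved and executed: {s.action}")
        return True

    def shut_off(self, operator: str) -> None:
        # Any person can shut off any machine at any time (Rule 5).
        self.enabled = False
        self._log(operator, "shut off")
```

The design choice is the point: the machine has no code path that acts without a human's explicit approval, and the accountability trail names a person, not a program.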

Man-Machine Rules can help organize dialogue around questions such as how to secure personal data. Do we need hardcopy and analog formats? How ethical are chips embedded in people and in their belongings? What degrees of control are conceivable for personal freedoms and personal risk? Will consumer rights and government organizations audit algorithms?23 Would equipment sabbaticals be enacted for societal and economic balance?

The idea that we can fix the tech world through a voluntary ethical code emerging from within it paradoxically expects the people who created the problems to fix them.24 The question is not whether the focus should shift to human interactions, leaving more humans in touch with their destiny; the question is at what cost. If not now, when? If not by us, by whom?

If you enjoyed reading this post, please also see:

Prediction Machines: The Simple Economics of Artificial Intelligence

Artificial Intelligence (AI) Trends

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design, and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort Worth; the Washington, DC, Dulles Metro line; and work on the US Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacted architecture. After a decade of applying in practice the classical method that he learned in post-doctoral studies, his book, *The Art of Classic Planning* (Harvard University Press, 2019), presents the urban design and planning method of Classic Planning as a path forward for homeostatic, durable urbanism.

1 Lovelace, Ada Augusta, Countess, Sketch of The Analytical Engine Invented by Charles Babbage by L. F. Menabrea of Turin, Officer of the Military Engineers, With notes upon the Memoir by the Translator, Bibliothèque Universelle de Genève, October, 1842, No. 82.

2 Oliveira, Arlindo, in Pereira, Vitor, Hippocratic Oath for Algorithms and Artificial Intelligence, Medium.com (website), 23 August 2018, https://medium.com/predict/hippocratic-oath-for-algorithms-and-artificial-intelligence-5836e14fb540; Middleton, Chris, Make AI developers sign Hippocratic Oath, urges ethics report: Industry backs RSA/YouGov report urging the development of ethical robotics and AI, computing.co.uk (website), 22 September 2017, https://www.computing.co.uk/ctg/news/3017891/make-ai-developers-sign-a-hippocratic-oath-urges-ethics-report; N.A., Do AI programmers need a Hippocratic oath?, Techhq.com (website), 15 August 2018, https://techhq.com/2018/08/do-ai-programmers-need-a-hippocratic-oath/

3 Oliveira, 2018; Dellot, Benedict, A Hippocratic Oath for AI Developers? It May Only Be a Matter of Time, Thersa.org (website), 13 February 2017, https://www.thersa.org/discover/publications-and-articles/rsa-blogs/2017/02/a-hippocratic-oath-for-ai-developers-it-may-only-be-a-matter-of-time; See also: Clifford, Catherine, Expert says graduates in A.I. should take oath: ‘I must not play at God nor let my technology do so’, Cnbc.com (website), 14 March 2018, https://www.cnbc.com/2018/03/14/allen-institute-ceo-says-a-i-graduates-should-take-oath.html; Johnson, Khari, AI Weekly: For the sake of us all, AI practitioners need a Hippocratic oath, Venturebeat.com (website), 23 March 2018, https://venturebeat.com/2018/03/23/ai-weekly-for-the-sake-of-us-all-ai-practitioners-need-a-hippocratic-oath/; Work, Robert O., former deputy secretary of defense, in Metz, Cade, Pentagon Wants Silicon Valley’s Help on A.I., New York Times, 15 March 2018.

4 Schotz, Mai, Should Data Scientists Adhere To A Hippocratic Oath?, Wired.com (website), 8 February 2018, https://www.wired.com/story/should-data-scientists-adhere-to-a-hippocratic-oath/; du Preez, Derek, MPs debate ‘hippocratic oath’ for those working with AI, Government.diginomica.com (website), 19 January 2018, https://government.diginomica.com/2018/01/19/mps-debate-hippocratic-oath-working-ai/

5 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov, Isaac, Runaround, in I, Robot, The Isaac Asimov Collection ed., Doubleday, New York City, p. 40.

6 Middleton, 2017.

7 Etzioni, Oren, A Hippocratic Oath for artificial intelligence practitioners, Techcrunch.com (website), 14 March 2018. https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/?platform=hootsuite

8 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

9 Goodsmith, Dave, quoted in Schotz, 2018.

10 Schotz, 2018.

11 Do AI programmers need a Hippocratic oath?, Techhq, 2018. Wheeler, Schaun, in Schotz, 2018.

12 Gnambs, T., What makes a computer wiz? Linking personality traits and programming aptitude, Journal of Research in Personality, 58, 2015, pp. 31-34.

13 Oliveira, 2018.

14 Jarrett, Christian, The surprising truth about which personality traits do and don’t correlate with computer programming skills, Digest.bps.org.uk (website), British Psychological Society, 26 October 2015, https://digest.bps.org.uk/2015/10/26/the-surprising-truth-about-which-personality-traits-do-and-dont-correlate-with-computer-programming-skills/; Johnson, 2018.

15 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

16 Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. London: Harvill Secker, 2015.

17 That Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms, is not an excuse. Microsoft’s AI Twitter bot Tay had to be deleted after it started making sexual references and declarations such as “Hitler did nothing wrong.”

18 Schotz, 2018.

19 See the example of Dr. Kerstin Dautenhahn, Research Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, who claims no responsibility in determining the application of the work she creates. She might as well be feeding children shards of glass, saying, “It is their choice to eat it or not.” In Middleton, 2017. The principle is that the risk of an unfavorable outcome lies with an individual as well as with the entire chain of command, direction, and/or ownership of their organization, including shareholders of public companies and citizens of states. Everybody has responsibility the moment they engage in anything that could affect others. Regulatory “sandboxes” for AI developer experiments – equivalent to pathogen or nuclear labs – should have the same types of controls and restrictions. Dellot, 2017.

20 Oliveira, 2018.

21 Sentience and sensibilities of other beings is recognized here, but not addressed.

22 The proposed rules may be appended to the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1976), part of the International Bill of Human Rights, which include the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). International Covenant on Economic, Social and Cultural Rights, www.refworld.org.; EISIL International Covenant on Economic, Social and Cultural Rights, www.eisil.org; UN Treaty Collection: International Covenant on Economic, Social and Cultural Rights, UN. 3 January 1976; Fact Sheet No.2 (Rev.1), The International Bill of Human Rights, UN OHCHR. June 1996.

23 Dellot, 2017.

24 Schotz, 2018.

102. The Human Targeting Solution: An AI Story

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger CW3 Jesse R. Crifasi, envisioning a combat scenario in the not too distant future, teeing up the twin challenges facing the U.S. Army in incorporating Artificial Intelligence (AI) across the force — “human-in-the-loop” versus “human-out-of-the-loop,” and trust. In it, CW3 Crifasi describes the inherent tension between human critical thinking and the benefits of Augmented Intelligence facilitating warfare at machine speed. Enjoy!]

“CAITT, let’s re-run the targeting solution for tomorrow’s engagement… again,” asked Chief Warrant Officer Five Robert Menendez, in a not altogether annoyed tone of voice. Considering this was the fifth time he had asked, the tone of control Bob was exercising was nothing short of heroic to those who knew him well. Fortunately, CAITT, short for Commander’s Artificially Intelligent Targeting Tool, did not seem to notice. Bob quietly thanked the nameless software engineer who had not programmed it to recognize the sarcasm and vitriol that he felt when he made the request.

“Chief, do you really think she is going to come up with anything different this time? You know that old saying about the definition of insanity, right?” asked DeMarcus Austin. Bob shot the 28-year-old Captain a glare, clearly indicating that he knew exactly what the young man was implying. It was 0400 hours, and the entire Brigade Combat Team (BCT) was preparing to defend along its forward boundary. An exhausting three-day rapid deployment from their forward staging bases in Germany already had everyone on edge. In short, nothing had gone as expected or as planned for in the Operations Plan (OPLAN).

The Unified Belorussian-Russian Alliance’s (UBRA’s) 323rd Tank Division was a mere 68 kilometers from the BCT’s Forward Line of Troops, or FLOT. They would be in the BCT’s primary engagement area in six hours. Between 1EU DIV’s and the EU Expeditionary Air Force’s efforts, nothing was slowing UBRA’s advance towards the critical seaport city of Gdansk, Poland.

All the assumptions about air supremacy and cyber domination went out the window after the first UBRA tactical Electromagnetic Pulse (EMP) weapon detonated over Vilnius, Lithuania, 48 hours prior. In a brilliant strategic move, the EMP fried every unshielded networked computer system the Allied Forces possessed. The Coalition AI Partner Network, so heavily relied on to execute the OPLAN, was inaccessible, as was every weapon system that linked to it. Right about now, Bob wished that CAITT was one of those systems.

Luckily for him and his boss, Colonel Steph “Duke” Ducalis, CAITT was designed with an internal Faraday shield preventing it and most of the U.S. Army’s other AI systems from suffering the same catastrophic damage. Unfortunately, the EU Armed Forces did not heed the same warnings and indicators. They were essentially crippled as they fervently worked to repair the damage. With the majority of U.S. military might committed to the Pacific Theatre, Colonel Ducalis’ BCT, a holdover from the old NATO alliance, was the lone American combat unit forward deployed in Western Europe. Alone and unafraid, as they say.

“Sir…” asked CAITT, snapping Bob out of his fatigue-induced musings, “all data still indicates that engaging with our M56 Long-Range High-Velocity Missiles against the 323rd’s logistical assembly areas in Elblag will compel their defeat. I estimate their advance will cease approximately 18 hours after the direct fire battle commences. Given all of the variables, this is the optimal targeting solution.” Bob really hated how CAITT dispassionately stated her “optimal targeting solution” in that sultry female tone. Clearly, the same software engineer who had ensured CAITT was durable also had a soft spot for British accents.

“CAITT, that makes no sense!” Bob stated exasperatedly. “The 323rd has approximately 250 T-90 MBTs — even if they expend all their fuel and munitions in that 18 hours, they will still overrun our defensive positions in less than six. We only have a single armored battalion with 35 FMC LAV3s. Even if they meet 3-1 K-kill ratios, we will not be able to hold our position. If they dislodge the LAVs, the dismounted infantrymen won’t stand a chance. We need to target the C2 nodes of their lead tank regiment now with the M56s. If we can neutralize their centralized command and control and delay their rate of march, it may give the EUAF enough time to get us those CAS and AI sorties they promised,” replied Bob. “That’s the right play, space for time.”

“I am sorry, Mr. Menendez, I have no connection to the coalition network and cannot get a status update for the next Air Tasking Order. There is no confirmation that our Air Support Requests were received. I am issuing the target nominations to 2-142 HIMARS, they are moving towards their Position Areas Artillery now, airspace coordination is proceeding, and Colonel Ducalis is receiving his Commander’s Intervention Brief now. Pending his override, there is nothing you can do.” CAITT’s response almost sounded condescending to Bob; but then again, he remembered a time when human staff officers made recommendations to the boss, not smart-ass video game consoles.

“Chief, shouldn’t we just go with CAITT’s solution? I mean, she has all the raw data from the S2’s threat template and the weaponeering guidances that you built. CAITT is the joint program of record; we have to use it, don’t we?” asked Captain Austin. Bob did not blame the young man for saying that. After all, this is what the Army wanted: staff officers who were more technicians and data managers than tacticians. The young man was simply not trained to question the AI’s conclusions.

“No sir, we should not, and by the way, I really hate how you call it a she,” answered Bob as he pondered his dilemma. Dammit! I’m the freaking Targeting Officer; I own this process, not this stupid thing… he thought for about five seconds before his instincts reasserted control of his senses.

Quickly jumping out of his chair, Bob left Captain Austin to oversee the data refinement and went outside to seek out the Commander’s Joint Lightweight Tactical Vehicle (JLTV). It took him a moment to locate it under the winter camouflage shielding, since Polish winters were just as brutal as advertised.

I must be getting old, Bob mused to himself, the cold air biting into his face. After twenty-five years of service, despite countless combat deployments in the Middle East, he was starting to get complacent. It was easy to think like young Captain Austin. He never should have trusted CAITT in the first place. It was so easy to let it make the decisions for you that many just stopped thinking altogether. The CIB would be Bob’s last chance to convince the boss that CAITT’s solution was wrong and he was right.

Bob entered the camo shield behind the JLTV constructing his argument to the boss in his mind. Colonel Ducalis had no time to entertain lengthy debate, this Bob knew. The fight was moving just too fast. Information is the currency of decision-making, and he would at best get about twenty seconds to make his case before something else grabbed the boss’s attention. CAITT would already be running the targeting solution straight to the boss via his Commander’s Oculatory Device, jokingly called “COD,” referencing the old bawdy medieval term. Colonel Ducalis, already wearing the COD when Bob came in, was oblivious to everything else around him. Designed to construct a virtual and interactive battlefield environment, the COD worked almost too well. Even as Bob came in, CAITT was constructing the virtual battlefield, displaying missile aimpoints, HIMARS firing positions, airspace coordination measures, and detailed damage predictions for the target areas.

Bob could not understand how one person could absorb all that visual information in one sitting, but Colonel Ducalis was an exceptional commander. Standing nearby was the boss’s ever-present guardian, Major Lawrence Atlee, BCT XO, acting as always like a consigliere to his boss. His annoyance at Bob’s presence was evident by the scowl he received as he entered unannounced and, more egregiously, unrequested by him.

“Chief, what do you need?” asked Atlee, in his typically hurried tone, indicating that the boss should not be disturbed for all but the most serious reasons.

“Sir, it’s imperative I talk to the boss right now,” Bob demanded, somewhat out of breath — again, old age catching up. Without providing a reason to the XO, Bob moved directly to Colonel Ducalis and gently touched his arm. One did not shake a Brigade Commander, especially a former West Point Rugby player the size of Duke. The XO was not pleased.

“Bob, what’s up? I was just reviewing CAITT’s targeting solution,” said Duke as he lifted the COD off his face and saw his very distraught looking Targeting Officer. That’s hopeful, thought Bob, most Commanders would not even have bothered, simply letting the AI execute its solution.

Bob took a moment to compose himself and as he was about to pitch his case Atlee stepped in, “Sir, I’m very sorry. Chief here was just trying to let you know that he was ready to proceed.” Then turning to Bob he said in a manner that would not be confused as optional, “He was just leaving.”

Bob seized his chance as Duke looked right at him. They had served together for a long time. Bob remembered when Duke had asked him to come down from the 1EU Division Staff to fill his targeting officer billet. Undoubtedly, Duke trusted him and genuinely wanted to know what his concern was when he removed the COD in the first place. Bob owed it to him to give it to him straight.

“Sir, that is not correct,” Bob said, speaking hurriedly. “We have a serious problem. CAITT’s targeting solution is completely wrong. The variables and assumptions were all predicated on the EUAF having air and cyber superiority. Those plans went out the window the second that EMP detonated. With all those aircraft down for CPU hardware replacement and software re-installs, those data points are now irrelevant. CAITT doesn’t know how long that will take because it is delinked from the Coalition’s AI Partner Network. I managed to get a low-frequency transmission established with Colonel Collins in Warsaw, and he thinks they can get us some sorties in the next six hours. CAITT’s solution is ignoring the time-versus-space dynamic and going with a simple comparison-of-forces mathematical model. I’m betting it thinks that our casualties will be within acceptable limits after the 323rd expends all of its pre-staged consumable fuel and ammo. It thinks that we can hold our position if we cut off their re-supply. It may be right, but our losses will render us combat ineffective and unable to hold while 1EU DIV reconsolidates behind us.

“We need to implement this High Payoff Target List and Attack Guidance immediately, disrupting and attriting their lead maneuver formations. Sir, we need to play for time and space,” Bob explained, hoping the sports analogy resonated, while simultaneously accessing his Fires Forearm Display, or FFaD, and transmitting the data to Duke’s COD with a wave of his hand.

“Sir, I am not sure we should be deviating from the AI solution,” Atlee started to interject. “To be candid, and no offense to Mr. Menendez, the Army is eliminating their billets anyway since CAITT was fielded last year, same as they did for all the BCT S3s and FSOs. Their type of thinking is just not needed anymore, now that we have CAITT to do it for us.” Bob was amazed at how dispassionately Major Atlee stated this.

Bob, realizing where this was going, took a knee next to Duke, who was clearly as tired as everyone else. Bob leaned in to speak while Duke started to review the new battlespace geometries and combat projections in his COD. “Duke,” Bob said in a low tone of voice so Major Atlee could not easily overhear him, “We’ve been friends a long time, and I’ve never given you a bad recommendation. Please, override CAITT. LTC Givens can reposition his HIMARS battalion, but he has to start doing it now. This is our only chance; once those missiles are gone, we won’t get them back.”

He then stood up and patiently waited. Bob understood that he had pushed things as far as he could. Duke was a good man, a fine commander, and would make the right decision, Bob was certain of it.

Taking off his COD and rubbing his eyes, Duke leaned back and sighed heavily, the weight of command taking its full effect.

“CAITT,” stated Colonel Ducalis. “I am initiating Falcon 06’s override prerogative. Issue Chief Menendez’s targeting solution to LTC Givens immediately. Larry, get a hold of 1EU DIV and tell them we can hold our positions for 24 hours. After that, we may have to withdraw, but we will live to fight another day. Right now, trading time for space may not be the optimal strategy, but it is the human one. Let’s Go!”

If you enjoyed reading this post, please also see the following blog posts:

An Appropriate Level of Trust…

A Primer on Humanity: Iron Man versus Terminator

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

CW3 Jesse R. Crifasi is an active duty Field Artillery Warrant Officer. He has over 24 years in service and is currently serving as the Field Artillery Intelligence Officer (FAIO) for the 82nd Airborne Division.

The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of the Army, DoD, or the U.S. Government.