113. Connected Warfare

[Editor’s Note: As stated previously here in the Mad Scientist Laboratory, the nature of war remains inherently humanistic in the Future Operational Environment.  Today’s post by guest blogger COL James K. Greer (USA-Ret.) calls on us to stop envisioning Artificial Intelligence (AI) as a separate and distinct end state (oftentimes in competition with humanity) and to instead focus on preparing for future connected competitions and wars.]

The possibilities and challenges for future security, military operations, and warfare associated with advancements in AI are proposed and discussed with ever-increasing frequency, both within formal defense establishments and informally among national security professionals and stakeholders. One is confronted with a myriad of alternative futures, including everything from a humanity-killing variation of Terminator’s SkyNet to uncontrolled warfare à la WarGames to Deep Learning used to enhance existing military processes and operations. And, of course, legal and ethical issues surrounding the military use of AI abound.


Yet in most discussions of the military applications of AI and its use in warfare, we have a blind spot in our thinking about technological progress toward the future. That blind spot is that we think about AI largely as disconnected from humans and the human brain. Rather than thinking about AI-enabled systems as connected to humans, we think about them as parallel processes. We talk about human-in-the-loop or human-on-the-loop largely in terms of control over autonomous systems, rather than comprehensive connection to and interaction with those systems.

But even while significant progress is being made in the development of AI, almost no attention is paid to the military implications of advances in human connectivity. Experiments have already been conducted connecting the human brain directly to the internet, which of course connects the human mind not only to the Internet of Things (IoT), but potentially to every computer and AI device in the world. Such connections could be enabled by a chip in the brain that provides connectivity while allowing humans to perform all normal functions, including all those associated with warfare (as envisioned by John Scalzi’s BrainPal in “Old Man’s War”).


Moreover, experiments in connecting human brains to each other are ongoing. Brain-to-brain connectivity has occurred in a controlled setting enabled by an internet connection. And, in experiments conducted to date, the brain of one human has been used to direct the weapons firing of another human, demonstrating applicability to future warfare. While experimentation in brain-to-internet and brain-to-brain connectivity is not as advanced as the development of AI, it is easy to see that the potential benefits, desirability, and, frankly, market forces are likely to accelerate the human side of connectivity development past the AI side.


So, when contemplating the future of human activity, of which warfare is unfortunately a central component, we cannot and must not think of AI development and human development as separate, but rather as interconnected. Future warfare will be connected warfare, with implications we must now begin to consider. How would such connected warfare be conducted? How would mission command be exercised between man and machine? What are the leadership implications of the human leader’s brain being connected to those of their subordinates? How will humans manage information for decision-making without being completely overloaded and paralyzed by overwhelming amounts of data? What are the moral, ethical, and legal implications of connected humans in combat, as well as responsibility for the actions of machines to which they are connected? These and thousands of other questions and implications related to policy and operations must be considered.

The power of AI resides not just in that of the individual computer, but in the connection of each computer to literally millions, if not billions, of sensors, servers, computers, and smart devices employing thousands, if not millions, of software programs and apps. The consensus is that at some point the computing and analytic power of AI will surpass that of the individual human. And therein lies a major flaw in our thinking about the future. The power of AI may surpass that of a human being, but it won’t surpass the learning, thinking, and decision-making power of connected human beings. When a future human is connected to the internet, that human will have access to the computing power of all AI. But, when that same human is connected to several (in a platoon), or hundreds (on a ship), or thousands (in multiple headquarters) of other humans, then the power of AI will be exceeded by multiple orders of magnitude. The challenge, of course, is being able to think effectively under those circumstances, with your brain connected to all those sensors, computers, and other humans. This is what Ray Kurzweil terms “hybrid thinking.” Imagine how that is going to change every facet of human life, to include every aspect of warfare, and how everyone in our future defense establishment, uniformed or not, will have to be capable of hybrid thinking.


So, what will the military human bring to warfare that the AI-empowered computer won’t? Certainly, one of the major challenges with AI thus far has been its inability to demonstrate human intuition. AI can replicate some derivative, intuition-like tasks using what is now called “Artificial Intuition.” These are primarily the intuitive decisions that result from experience: AI generates this experience through some large number of iterations, which is how Google’s AlphaGo was able to beat the human world Go champion. Still, this is only a small part of the capacity of humans in terms not only of intuition, but of “insight,” what we call the “light bulb moment.” Humans will also bring emotional intelligence to connected warfare. Emotional intelligence, including aspects such as empathy, loyalty, and courage, is critical in the crucible of war and is not a capability that machines can provide the Force, not today and perhaps not ever.
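
To make the “experience through iterations” point concrete, consider the toy sketch below (Python; entirely hypothetical and vastly simpler than AlphaGo’s actual self-play and tree-search machinery). The learner is given no strategy at all, only the running averages of many simulated outcomes, yet it ends up “feeling” which move is strongest:

```python
# Toy illustration of "artificial intuition" as experience accumulated over
# many iterations. The moves and win probabilities are invented for the example.
import random

true_win_prob = {"a": 0.3, "b": 0.5, "c": 0.7}  # hidden from the learner
value = {m: 0.0 for m in true_win_prob}         # the learned "intuition"
plays = {m: 0 for m in true_win_prob}

for _ in range(100_000):                        # the "large number of iterations"
    if random.random() < 0.1:                   # occasionally explore at random...
        move = random.choice(list(true_win_prob))
    else:                                       # ...otherwise trust current intuition
        move = max(value, key=value.get)
    won = random.random() < true_win_prob[move]
    plays[move] += 1
    value[move] += (won - value[move]) / plays[move]  # running average of outcomes

print(max(value, key=value.get))                # converges on "c", the strongest move
```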

Warfare in the future is not going to be conducted by machines, no matter how far AI advances. Warfare will instead be connected human to human, human to internet, and internet to machine in complex, global networks. We cannot know today how such warfare will be conducted or what characteristics and capabilities of future forces will be necessary for victory. What we can do is cease developing AI as if it were something separate and distinct from, and often envisioned in competition with, humanity, and instead focus our endeavors and investments on preparing for future connected competitions and wars.

If you enjoyed this post, please read the following Mad Scientist Laboratory blog posts:

… and watch Dr. Alexander Kott‘s presentation, The Network is the Robot, from the Mad Scientist Robotics, Artificial Intelligence, & Autonomy: Visioning Multi Domain Battle in 2030-2050 Conference, held at the Georgia Tech Research Institute, 8-9 March 2017, in Atlanta, Georgia.

COL James K. Greer (USA-Ret.) is the Defense Threat Reduction Agency (DTRA) and Joint Improvised Threat Defeat Organization (JIDO) Integrator at the Combined Arms Command. A former cavalry officer, he served thirty years in the US Army, commanding at all levels from platoon through brigade. Jim served in operational units in CONUS, Germany, the Balkans, and the Middle East. He served in US Army Training and Doctrine Command (TRADOC), primarily focused on leader, capabilities, and doctrine development. He has significant concept development experience, co-writing concepts for Force XXI, Army After Next, and Army Transformation. Jim was the Army representative to the OSD Net Assessment 20XX Wargame Series, developing concepts for OSD and the Joint Staff. He is a former Director of the Army School of Advanced Military Studies (SAMS) and instructor in tactics at West Point. Jim is a veteran of six combat tours in Iraq, Afghanistan, and the Balkans, including serving as Chief of Staff of the Multi-National Security Transition Command – Iraq (MNSTC-I). Since leaving active duty, Jim has led the conduct of research for the Army Research Institute (ARI) and designed, developed, and delivered instruction in leadership, strategic foresight, design, and strategic and operational planning. Dr. Greer holds a Doctorate in Education, with his dissertation on US Army leader self-development. A graduate of the United States Military Academy, he has a Master’s Degree in Education, with a concentration in Psychological Counseling, as well as Master’s Degrees in National Security from the National War College and Operational Planning from the School of Advanced Military Studies.

111. AI Enhancing EI in War

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s)  decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]

Humans are susceptible to cognitive biases, and these biases sometimes result in catastrophic outcomes, particularly in the high-stress environment of wartime decision-making. Artificial Intelligence (AI) offers the possibility of mitigating the risk of negative outcomes in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and, as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like a staff officer to communicate with, and that is built on human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.

Mission Command in the Army is the exercise of authority and direction by the commander, using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and to mass the effects of combat power.ii

The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically, applied at much quicker speeds in order to seize the initiative in war.iii The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.

AI Staff Officer

To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem-solving domains than in those that require broad understanding.iv This means that, for now, humans continue to retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi AI’s near-term niche will continue to advance rapidly in narrow domains, becoming a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.

The battalion is a good example organization with which to visualize this framework. A machine-learning software system could be connected to the different staff systems to analyze the data produced by each section as it executes its warfighting functions. This machine-learning software system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander’s assessments. Over time, this EI-based machine-learning software system could rank the quality of the staff officers’ judgements. The commander could then weigh the staff officers’ assessments against those officers’ track records of reliability and the raw data provided by the staff sections’ systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm in order to assess the “believability” of its employees’ judgements before making high-stakes, and sometimes time-critical, international financial decisions.vii Such a multi-layered machine-learning system applied to the battalion would also include an assessment of the commander’s own reliability, to maximize objectivity.
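
As a concrete illustration of this believability-weighting idea, here is a minimal sketch in Python. The officers, numbers, and scoring scheme are all hypothetical; this is not Bridgewater’s actual algorithm or any fielded system, just the general technique of weighting assessments by track record:

```python
# Toy sketch of believability-weighted decision support, in the spirit of
# Dalio's "believability" scoring. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class StaffOfficer:
    name: str
    calls_made: int = 0      # judgements scored against real outcomes so far
    calls_correct: int = 0   # judgements that matched the actual outcome

    @property
    def believability(self) -> float:
        # Laplace smoothing: a brand-new officer starts near 0.5, not 0.
        return (self.calls_correct + 1) / (self.calls_made + 2)

    def record_outcome(self, was_correct: bool) -> None:
        self.calls_made += 1
        self.calls_correct += int(was_correct)

def weighted_assessment(assessments, staff) -> float:
    """Aggregate per-officer assessments (e.g., estimated probability a course
    of action succeeds), weighted by each officer's track record."""
    total = sum(staff[n].believability for n in assessments)
    return sum(staff[n].believability * a for n, a in assessments.items()) / total

# Usage: three fictional staff officers estimate a course of action's success.
s2, s3, s4 = StaffOfficer("S2"), StaffOfficer("S3"), StaffOfficer("S4")
for _ in range(8):  s2.record_outcome(True)        # 8 correct of 8
for i in range(10): s3.record_outcome(i % 2 == 0)  # 5 correct of 10
staff = {"S2": s2, "S3": s3, "S4": s4}
print(weighted_assessment({"S2": 0.8, "S3": 0.4, "S4": 0.6}, staff))  # ~0.64
```

The design choice worth noting is the smoothing: an unproven officer begins at a neutral 0.5 believability rather than zero, so the system neither ignores nor over-trusts a judgement without a track record behind it.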

Observations by the AI of multiple iterations of human behavioral patterns during simulations and real-world operations would improve its accuracy and enhance the trust between this type of AI system and its users. Commanders’ EI skills would be put front and center for scrutiny, and could improve drastically under the weight of the responsibility of consciously knowing, with quantifiable evidence, the cognitive-bias shortcomings of the staff at any given time. This assisted decision-making AI framework would also reinforce the commander’s intuition and decisions as it elevates the level of objectivity in decision-making.

Human-Compatibility

The capacity to understand information broadly and conduct unsupervised learning remains the virtue of humans for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander since it supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer implies also providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.

Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and to prevent conclusions that result in an AI turning on humanity. These three concepts are: 1) the principle of altruism towards the human race (and not itself); 2) maximizing uncertainty, by making it follow only human objectives without explaining what those are; and 3) making it learn by exposing it to everything and all types of humans.ix

Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.

Making AI Work

The potential opportunities and pitfalls are abundant for the employment of AI in decision-making. Apart from the obvious danger of this type of system being hacked, the possibility of the AI machine-learning algorithms harboring biased coding inconsistent with the values of the unit employing it is real.

The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust and integrate AI assessments into their natural decision-making process and make them part of their intuitive calculus. In this way, they will have ready access to objective analyses of their units’ potential biases, enhancing their own EI, and will be able to overcome those biases to accomplish their mission.

If you enjoyed this post, please also read:

An Appropriate Level of Trust…

Takeaways Learned about the Future of the AI Battlefield

Bias and Machine Learning

Man-Machine Rules

MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.


i United States Army. ADRP 5-0: The Operations Process. Headquarters, Department of the Army, 2012, pp. 1-1.

ii Ibid. pp. 1-1 – 1-3.

iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries | English, Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.

iv Trent, Stoney, and Scott Lathrop. “A Primer on Artificial Intelligence for Military Leaders.” Small Wars Journal, 2018, smallwarsjournal.com/index.php/jrnl/art/primer-artificial-intelligence-military-leaders.

v Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2019.

vi Evans, Hayley. “Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings.” Lawfare, 2018, https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings.

vii Dalio, Ray. Principles. Simon and Schuster, 2017.

viii Trent and Lathrop.

ix Russell, Stuart, director. Three Principles for Creating Safer AI. TED: Ideas Worth Spreading, 2017, www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai.

110. Future Jobs and Skillsets

[Editor’s Note:  On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC.  Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces.  Today’s post is extracted from this conference’s final report (more of which is addressed at the bottom of this post).]

The U.S. Army currently has more than 150 Military Occupational Specialties (MOSs), each requiring a Soldier to learn unique tasks, skills, and knowledges. The emergence of a number of new technologies – drones, Artificial Intelligence (AI), autonomy, immersive mixed reality, big data storage and analytics, etc. – coupled with the changing character of future warfare means that many of these MOSs will need to change, while others will need to be created. This already has been seen in the wider U.S. and global economy, where the growth of internet services, smartphones, social media, and cloud technology over the last ten years has introduced a host of new occupations that previously did not exist. The future will further define and compel the creation of new jobs and skillsets that have not yet been articulated or even imagined. Today’s hobbies (e.g., drones) and recreational activities (e.g., Minecraft/Fortnite) that potential recruits engage in every day could become MOSs or Additional Skill Identifiers (ASIs) of the future.

Training eighty thousand new Recruits a year on existing MOSs is a colossal undertaking.  A great expansion in the jobs and skillsets needed to field a highly capable future Army, replete with modified or new MOSs, adds a considerable burden to the Army’s learning systems and institutions. These new requirements, however, will almost certainly present an opportunity for the Army to capitalize on intelligent tutors, personalized learning, and immersive learning to lessen costs and save time in Soldier and Leader development.

The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before them. Marc Prensky, the educational writer and speaker who coined the term digital native, asserts this “New Human” will stand in stark contrast to the “Old Human” in the ways they learn and approach learning.1 Humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, but each of these elements is generally external to the human. By 2032, these technologies likely will have converged and will be embedded or integrated into the individual, with connectivity literally on the tips of their fingers.

Some of the newly required skills may be inherent within the next generation(s) of these Recruits. Many of the games, drones, and other everyday technologies that are already or soon to be very common – narrow AI, app development and general programming, and smart devices – will yield a variety of intrinsic skills that Recruits will have prior to entering the Army. Just as we no longer train Soldiers on how to use a computer, games like Fortnite, with no formal relationship with the military, will provide players with militarily useful skills such as communications, resource management, foraging, force structure management, and fortification and structure building, all while attempting to survive against persistent attack. Due to these trends, Recruits may come into the Army with fundamental technical skills and baseline military thinking attributes that flatten the learning curve for Initial Entry Training (IET).2

While these new Recruits may have a set of some required skills, there will still be a premium placed on premier skillsets in fields such as AI and machine learning, robotics, big data management, and quantum information sciences. Due to the high demand for these skillsets, the Army will have to compete for talent with private industry, battling them on compensation, benefits, perks, and a less restrictive work environment – limited to no dress code, flexible schedule, and freedom of action. In light of this, the Army may have to consider adjusting or relaxing its current recruitment processes, business practices, and force structuring to ensure it is able to attract and retain expertise. It also may have to reconsider how it adapts and utilizes its civilian workforce to undertake these types of tasks in new and creative ways.

The Recruit of 2050 will need to be engaged much differently than today. Potential Recruits may not want to be contacted by traditional methods3 – phone calls, in person, job fairs – but instead likely will prefer to “meet” digitally first. Recruiters already are seeing this today. In order to improve recruiting efforts, the Army may need to look for Recruits in non-traditional areas such as competitive online gaming. There is an opportunity for the Army to use AI to identify Recruit commonalities and improve its targeted advertisements in the digital realm to entice specific groups who have otherwise been overlooked. The Army is already exploring this avenue of approach through the formation of an eSports team that will engage young potential Recruits and attempt to normalize their view of Soldiers and the Army, making them both more relatable and enticing.4 This presents a broader opportunity to close the chasm that exists between civilians and the military.

The overall dynamic landscape of the future economy, the evolving labor market, and the changing character of future warfare will create an inflection point for the Army to re-evaluate longstanding recruitment strategies, workplace standards, and learning institutions and programs. This will bring about an opportunity for the Army to expand, refine, and realign its collection of skillsets and MOSs, making Soldiers more adapted for future battles, while at the same time challenging the Army to remain prominent in attracting premier talent in a highly competitive environment.

If you enjoyed this extract, please read the comprehensive Learning in 2050 Conference Final Report

… and see our TRADOC 2028 blog post.


1 Prensky, Marc, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018.

2 Schatz, Sarah, Mad Scientist Conference: Learning in 2050, Georgetown University, 8 August 2018.

3 Davies, Hans, Mad Scientist Conference: Learning in 2050, Georgetown University, 9 August 2018.

4 Garland, Chad, Uncle Sam wants you — to play video games for the US Army, Stars and Stripes, 9 November 2018, https://www.stripes.com/news/uncle-sam-wants-you-to-play-video-games-for-the-us-army-1.555885.

109. Classic Planning Holism as a Basis for Megacity Strategy

[Editor’s Note: Recent operations against the Islamic State of Iraq and the Levant (ISIL) to liberate Mosul illustrate the challenges of warfighting in urban environments. In The Army Vision, GEN Mark A. Milley, Chief of Staff of the Army, and Dr. Mark T. Esper, Secretary of the Army, state that the U.S. Army must “Focus training on high-intensity conflict, with emphasis on operating in dense urban terrain…” Returning Mad Scientist Laboratory guest blogger Dr. Nir Buras leverages his expertise as an urban planner to propose a holistic approach to military operations in Megacities — Enjoy!]

A recent study identified 34 megacities, defined as having populations of over 10 million inhabitants.1 The scale, complexity, and dense populations of megacities, and their need for security, energy, water conservation, resource distribution, waste management, disaster management, construction, and transportation make them a challenging security environment.2

Urban warfare experience from Stalingrad to Gaza makes it clear that a doctrinal shift must take place.

Urban terrain, the “great equalizer,” diminishes an attacker’s advantages in firepower and mobility.3 Recent experiences in Baghdad, Mosul, and Aleppo, as well as historical ones in Aachen, Seoul, Hue, and Ramadi, shift the perspective from problem solving to the critical holistic thinking skills and decision-making required in ambiguous environments.4 For an Army, rule number one is to stay out of cities. If that is not possible, the second rule of warfare is to manipulate the environment.

The Strategic Studies Group finds that a megacity is the most challenging environment for a land force to operate in.5 But currently, the U.S. Army is incapable of operating within the megacity.6 The intellectual center of gravity on megacity operations does not yet exist; it is open to those who choose to seize it.7

Cities are holistic entities, but holism is not about brown food and Birkenstocks. Holism is a discipline for managing whole systems, which are more than the sum of their parts. Where problem-solving methodology drags the problem along with it, resulting in negative synergies (new problems), the holistic methodology works from aspiration and results in positive synergies, many of which are unforeseen. The aspiration for megacity operations is control, not conquest. The cure must not be worse than the disease.

The holistic approach to combat, fighting the urban context rather than the enemy, means reconfiguring the environment for operational purposes. Its goal, reforming antagonism to U.S. interests by controlling and reshaping the city to become self-ruling and sustainable over the long term, would facilitate urban, political, and economic homeostasis in alignment with U.S. interests and bequeath a legacy of homeostatic urban balance: a “Pax Americana.”8 Paradoxically, it may be the most cost-effective approach.

Megacities are inherently unsustainable and need to be fixed, war or not. Classic planning for megacities would break them down into environmentally controllable chunks: human-scaled, walkable areas of 30,000, 120,000, and 500,000 persons, separated by swathes of countryside. This continuous network of rural, agricultural, and natural areas would be at least one mile deep, and would be the place where transport, major infrastructure, highways, campuses, large-scale sports venues, waste dumps, and even mines might be located.

This is naturally ongoing in Detroit, is historically documented to have happened in Rome, and can be witnessed at Angkor Wat. While the greatest long-term beneficiaries would be the populace, its military benefits are obvious. The Army would simply accelerate the process.

The idea is to radically change the fighting environment while bolstering the population and its institutions to sympathize with U.S. goals: “divide and conquer,” followed by a sustainable legacy. Notably, operations within a megacity require an understanding of a city’s normal procedures and daily operations beforehand.9 The proposed framework for this is the long-term classic planning of cities.

The application of classic planning to megacity operations follows four steps: Disrupt, Contain, Stabilize, and Transfer.

1. DISRUPT urban fabric with swathes of country at least a mile wide, containing a continuous network of rural, agricultural, natural, and water areas, where transport, major infrastructure, highways, campuses, large-scale sports venues, etc., are located. In urban fabric, structures would be removed to virgin ground, and agriculture and nature reinstated there. Solutions will need to be developed for the debris, as well as for buried infrastructure. The block layout may remain in whole or in part for agricultural and forest access. Soil bacteria may be used to rapidly consume toxic and hazardous materials. This has to be thoroughly planned in advance of a conflict.

2. CONTAIN urban fabric to a 1-hour walk (2 hours maximum), 2-4 miles from edge to edge, both in existing fabric and in new settlements for relocated persons.

3. STABILIZE neighborhoods, quarters, and city centers hierarchically, and densify them, up to 6-8 floors tall, according to the classic planning model of standard fabric buildings. Buildings taller than 6 or 8 stories may be placed on the periphery, if they are necessary at all.  Blocks, streets, plazas, and parks are laid out in appropriate dimensions.  Proven, traditional designs are used for buildings at least 85% of the time.  Stabilize communities through leadership, mentoring, the establishment of markets, industry, sources of income, and community institutions.

4. TRANSFER displaced communities to new urban fabric built on classic planning principles as developed after the Haiti Earthquake; and transfer air rights from land reclaimed for country to urban fabric centers (midrise densification) and peripheries (taller buildings as necessary). Transfer community management back to residents as soon as possible (1 year). Transfer loyalty; build community; develop education, mentoring, and training; and use civilian commercial work according to specifically developed management models for construction, economic, and urban management.10

To adopt a holistic approach to the megacity, the U.S. Army must engage in a comprehensive understanding of the environment prior to the arrival of forces, and plan the shaping of the environment, focusing on its physical attributes for both the benefit of the city and the Army. This holistic approach may generate outcomes similar to the type of synergies stimulated by the Marshall Plan after World War II.

If you enjoyed this post, please listen to:

Tomorrow’s Urban Battlefield podcast with Dr. Russell Glenn, hosted by our colleagues at the Modern War Institute.

… and also read the following:

– Mad Scientist Megacities and Dense Urban Areas Initiative in 2025 and Beyond Conference Final Report

– Where none have gone before: Operational and Strategic Perspectives on Multi-Domain Operations in Mega Cities Conference Proceedings

My City is Smarter than Yours!

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design, and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort Worth; the Washington, DC Dulles Metro line; and work on the US Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacts architecture. After a decade of applying in practice the classical method that he learned in post-doctoral studies, his book, The Art of Classic Planning (Harvard University Press, 2019), presents the urban design and planning method of classic planning as a path forward for homeostatic, durable urbanism.


1 “Demographia World Urban Areas, 11th Annual Edition,” Demographia, 2-20, September 18, 2015, accessed December 16, 2015, http://www.demographia.com/db-worldua.pdf. 67% of large urban areas (500,000 and higher) are located in Asia and Africa.

2 Jack A. Goldstone, “The New Population Bomb: The Four Megatrends That Will Change the World,” Foreign Affairs, (January/February 2010) 38-39; National Intelligence Council, Global trends 2030 Report: Alternative Worlds (Washington, DC: National Intelligence Council, 2012), 1. Quoted in Kaune.

3 ARCIC, Unified Quest Executive Report 2014 (Fort Eustis, VA: US Army Capabilities Integration Center, 2014), 1. Quoted in Kaune.

4 Harris et al., Megacities and the US Army, 22; Louis A. DiMarco, Concrete Hell: Urban Warfare from Stalingrad to Iraq (Oxford, UK: Osprey Publishing, 2012), 214-215. Quoted in Kaune.

5 Harris et al., Megacities and the US Army, 21.

6 Kaune.

7 David Betz, “Peering into the Past and Future of Urban Warfare in Israel,” War on the Rocks, December 17, 2015, accessed December 17, 2015, http://warontherocks.com/2015/12/peering-into-the-past-and-future-of-urban-warfare-in-israel/. Quoted in Kaune.

8 Tom R. Przybelski, “Hybrid War: The Gap in the Range of Military Operations” (Newport, RI: Naval War College, Joint Military Operations Department), iii.

9 Kaune.

10 Michael Evans, “The Case Against Megacities,” Parameters 45, no. 1, (Spring 2015): 36. Quoted in Kaune.

108. The Ghost of Mad Scientist Past!

Editor’s Note: Mad Scientist Laboratory is pleased to provide for your holiday reading pleasure our Anthology of the “best of” blog posts from 2018! This Anthology enables you to re-visit our futures-oriented assessments, including ideas about the Operational Environment, technology trends, innovation, and our conference findings. Each article includes a wealth of links to interesting content, including Mad Scientist videos, podcasts, conference proceedings, and presentations.

And if you have not already done so, please consider subscribing to our blog site to stay abreast of upcoming Mad Scientist Conferences, on-line events, and writing exercises in 2019 by receiving it automatically in your email inbox twice weekly —  go to “SUBSCRIBE” on the right-hand side of your screen (or scroll down to the bottom if viewing the site on your PED), enter your commercial email address (i.e., non-DoD) in the “Email Address” text box, then select the “Confirm Follow” blue button in the subsequent email you receive. In doing so, you’ll stay connected with all things Mad Scientist!

Mad Scientist Laboratory wishes all of our readers the Happiest of Holiday Seasons!


107. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our November edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. Is China a global leader in research and development? China Power Project, Center for Strategic and International Studies (CSIS), 2018. 

The United States Army’s concept of Multi-Domain Operations 2028 describes Russia and China as strategic competitors working to synthesize emerging technologies, such as artificial intelligence, hypersonics, machine learning, nanotechnology, and robotics, with their analysis of military doctrine and operations. The Future OE’s Era of Contested Equality (i.e., 2035 through 2050) describes China’s ascent to a peer competitor and our primary pacing threat. The fuel for these innovations is research and development funding from the Chinese Government and businesses.

CSIS’s China Power Project recently published an assessment of the rise in China’s research and development funding. There are three key facts that demonstrate the remarkable increase in funding and planning that will continue to drive Chinese innovation. First, “China’s R&D expenditure witnessed an almost 30-fold increase from 1991 to 2015 – from $13 billion to $376 billion. Presently, China spends more on R&D than Japan, Germany, and South Korea combined, and only trails the United States in terms of gross expenditure. According to some estimates, China will overtake the US as the top R&D spender by 2020.”

Second, globally, businesses fund the majority of research and development activities. China is now following this trend, with its “businesses financing 74.7 percent ($282 billion) of the country’s gross expenditure on R&D in 2015.” Tracking the origin of this funding is difficult, with the Chinese government also operating a number of state-owned enterprises. This could prove to be a strength for the Chinese Army’s access to commercial innovation.

China’s Micius quantum satellite, part of their Quantum Experiments at Space Scale (QUESS) program

Third, the Chinese government is funding cutting-edge technologies where it seeks to be a global leader. “Expenditures by the Chinese government stood at 16.2 percent of total R&D usage in 2015. This ratio is similar to that of advanced economies, such as the United States (11.2 percent). Government-driven expenditure has contributed to the development of the China National Space Administration. The Tiangong-2 space station and the ‘Micius’ quantum satellite – the first of its kind – are just two such examples.”

2. Microsoft will give the U.S. military access to ‘all the technology we create’, by Samantha Masunaga, Los Angeles Times (on-line), 1 December 2018.

Success in the future OE relies on many key assumptions. One such assumption is that the innovation cycle has flipped. Where the DoD used to drive technological innovation in this country, we now see private industry (namely Silicon Valley) as the driving force with the Army consuming products and transitioning technology for military use. If this system is to work, as the assumption implies, the Army must be able to work easily with the country’s leading technology companies.  Microsoft’s President Brad Smith stated recently that his company will “provide the U.S. military with access to the best technology … all the technology we create. Full stop.”

This is significant to the DoD for two reasons: It gives the DoD, and thus the Army, access to one of the leading technology developers in the world (with cloud computing and AI solutions), and it highlights that the assumptions we operate under are never guaranteed. Most recently, Google made the decision not to renew its contract with the DoD to provide AI support to Project Maven – a decision motivated, in part, by employee backlash.

Our near-peer competitors do not appear to be experiencing similar tensions or friction between their respective governments and private industry.  China’s President Xi is leveraging private sector advances for military applications via a “whole of nation” strategy, leading China’s Central Military-Civil Fusion Development Commission to address priorities including intelligent unmanned systems, biology and cross-disciplinary technologies, and quantum technologies.  Russia seeks to generate innovation by harnessing its defense industries with the nation’s military, civilian, and academic expertise at their Era Military Innovation Technopark to concentrate on advances in “information and telecommunication systems, artificial intelligence, robotic complexes, supercomputers, technical vision and pattern recognition, information security, nanotechnology and nanomaterials, energy tech and technology life support cycle, as well as bioengineering, biosynthetic, and biosensor technologies.”

Microsoft openly declaring its willingness to work seamlessly with the DoD is a substantial step forward toward success in the new innovation cycle and success in the future OE.

3. The Truth About Killer Robots, directed by Maxim Pozdorovkin, Third Party Films, premiered on HBO on 26 November 2018.

This documentary film could have been a highly informative piece on the disruptive potential posed by robotics and autonomous systems in future warfare. While it presents a jumble of interesting anecdotes addressing the societal changes wrought by the increased prevalence of autonomous systems, it fails to deliver on its title. Indeed, robot lethality is only tangentially addressed in a few of the documentary’s storylines: the accidental death of a Volkswagen factory worker crushed by autonomous machinery; the first vehicular death of a driver engrossed by a Harry Potter movie while sitting behind the wheel of an autonomous-driving Tesla in Florida; and the use of a tele-operated device by the Dallas police to neutralize a mass shooter barricaded inside a building.

Russian unmanned, tele-operated BMP-3 shooting its 30mm cannon on a test range / Zvezda Broadcasting via YouTube

Given his choice of title, Mr. Pozdorovkin would have been better served in interviewing activists from the Campaign to Stop Killer Robots and participants at the Convention on Certain Conventional Weapons (CCW) who are negotiating in good faith to restrict the proliferation of lethal autonomy. A casual search of the Internet reveals a number of relevant video topics, ranging from the latest Russian advances in unmanned Ground Combat Vehicles (GCV) to a truly dystopian vision of swarming killer robots.

Instead, Mr. Pozdorovkin misleads his viewers by presenting a number of creepy autonomy outliers (including a sad Chinese engineer who designed and then married his sexbot because of his inability to attract a living female mate, given China’s disproportionately male population due to its former One-Child Policy); employing a sinister soundtrack and facial recognition special effects; and using a number of vapid androids (e.g., Japan’s Kodomoroid) to deliver contrived narration hyping a future where the distinction between humanity and machines is blurred. Where are Siskel and Ebert when you need ’em?

4. “Walmart will soon use hundreds of AI robot janitors to scrub the floors of U.S. stores,” by Tom Huddleston Jr., CNBC, 5 December 2018.

The retail superpower Walmart will begin employing hundreds of robots in stores across the country starting next month. These floor-scrubbing janitor robots will keep the stores’ floors immaculate using autonomous navigation that will be able to sense both people and obstacles.

The introduction of these autonomous cleaners will not be wholly disruptive to Walmart’s workforce operations, as they are only supplanting a task that is onerous for humans. But is this just the beginning? As humans’ comfort levels grow with the robots, will there then be an introduction of robot stocking, not unlike what is happening with Amazon? Will robots soon handle routine exchanges? And what of the displaced or under-employed workers resulting from this proliferation of autonomy, the widening economic gap between the haves and the have-nots, and the potential for social instability from neo-luddite movements in the Future OE?   Additionally, as these robots become increasingly conspicuous throughout our everyday lives in retail, food service, and many other areas, nefarious actors could hijack them or subvert them for terroristic, criminal, or generally malevolent uses.

The introduction of floor-cleaning robots at Walmart has larger implications than one might think. Robots are being considered for all the dull, dirty, and dangerous tasks assigned to the Army and the larger Department of Defense. The autonomous technology behind robots in Walmart today could have implications for our Soldiers at their home stations or on the battlefield of the future, conducting refueling and resupply runs, battlefield recovery, medevac, and other logistical and sustainment tasks.

5. What our science fiction says about us, by Tom Cassauwers, BBC News, 3 December 2018.

“Right now the most interesting science fiction is produced in all sorts of non-traditional places,” says Anindita Banerjee, Associate Professor at Cornell University, whose research focuses on global sci-fi. Sci-fi and storytelling enable us to break through our contemporary, mainstream echo chamber of parochialism to depict future technological possibilities and imagined worlds, political situations, and conflict. Unsurprisingly, different visions of the future imagining alternative realities are being written around the world – in China, Russia, and Africa. This rise of global science fiction challenges how we think about the evolution of the genre. Historically, our occidental bias led us to believe that sci-fi was spreading from Western centers out to the rest of the world, blinding us to the fact that other regions also have rich histories of sci-fi depicting future possibilities from their own cultural perspectives. Chinese science fiction has boomed in recent years, with standout books like Cixin Liu’s The Three-Body Problem. Afrofuturism is also on the rise since the release of the blockbuster Black Panther.

The Mad Scientist Initiative uses Crowdsourcing and Storytelling as two innovative tools to help us envision future possibilities and inform the OE through 2050. Strategic lessons learned from looking at the Future OE show us that the world of tomorrow will be far more challenging and dynamic. In our FY17 Science Fiction Writing Contest, we asked our community of action to describe Warfare in 2030-2050. The stories submitted showed virtually every new technology connected to and intersecting with other new technologies and advances. The future OE presents us with a combination of new technologies and societal changes that will intensify long-standing international rivalries, create new security dynamics, and foster instability as well as opportunities. Sci-fi is more than a global reflection on resistance; non-Western science fiction also taps into a worldwide consciousness, helping it conquer audiences beyond its home markets.

6. NVIDIA Invents AI Interactive Graphics, Nvidia.com, 3 December 2018.

A significant barrier to the modeling and simulation of dense urban environments has been the complexity of these areas in terms of building, vehicle, pedestrian, and foliage density. Megacities and their surrounding environments have such a massive concentration of entities that re-creating them digitally has been a daunting task. Nvidia has recently developed a first-step solution to this ongoing problem. Using neural networks and generative models, the developers are able to train AI to create realistic urban environments based on real-world video.

As Nvidia admits, “One of the main obstacles developers face when creating virtual worlds, whether for game development, telepresence, or other applications is that creating the content is expensive. This method allows artists and developers to create at a much lower cost, by using AI that learns from the real world.” This process could significantly compress the development timeline, and while it wouldn’t address the other dimensions of urban operations — those entities that are underground or inside buildings (multi-floor and multi-room) — it would allow the Army to divert and focus more resources in those areas. The Chief of Staff of the Army has made readiness his #1 priority and stated, “In the future, I can say with very high degrees of confidence, the American Army is probably going to be fighting in urban areas,” and the Army “need[s] to man, organize, train and equip the force for operations in urban areas, highly dense urban areas.”1 Nvidia’s solution could enable and empower the force to meet that goal.
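
For readers curious what “AI that learns from the real world” looks like in code, the sketch below is a deliberately minimal, hypothetical example of the general technique: a conditional adversarial setup in which a generator learns to translate semantic label maps into imagery. It is not Nvidia’s published architecture, and it trains on random stand-in tensors where the real system would use segmented video frames:

```python
# Minimal sketch of semantic-map-to-image generation (illustrative only;
# not Nvidia's actual system). Requires PyTorch.
import torch
import torch.nn as nn

NUM_CLASSES = 8  # e.g., road, building, vegetation, vehicle, pedestrian, ...

class Generator(nn.Module):
    """Translates a per-pixel semantic label map into an RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),  # RGB in [-1, 1]
        )
    def forward(self, label_map):
        return self.net(label_map)

class Discriminator(nn.Module):
    """Judges whether a (label map, image) pair looks real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, label_map, image):
        return self.net(torch.cat([label_map, image], dim=1))

gen, disc = Generator(), Discriminator()
g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One illustrative training step on random stand-in data; in practice the
# (label map, frame) pairs would come from segmented real-world video.
labels = torch.randn(4, NUM_CLASSES, 64, 64).softmax(dim=1)
real = torch.rand(4, 3, 64, 64) * 2 - 1
fake = gen(labels)

# Discriminator step: learn to tell real frames from generated ones.
real_logits = disc(labels, real)
fake_logits = disc(labels, fake.detach())
d_loss = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: learn to fool the discriminator.
g_logits = disc(labels, fake)
g_loss = bce(g_logits, torch.ones_like(g_logits))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Because the generator learns a city’s “look” from segmented video rather than hand-built assets, content creation becomes dramatically cheaper, which is the cost argument the Nvidia quote above makes.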

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future OE, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!


1 “Commentary: The missing link to preparing for military operations in megacities and dense urban areas,” by Claudia ElDib and John Spencer, Army Times, 20 July 2018, https://www.armytimes.com/opinion/commentary/2018/07/20/commentary-the-missing-link-to-preparing-for-military-operations-in-megacities-and-dense-urban-areas/.

106. Man-Machine Rules

[Editor’s Note:  Mad Scientist Laboratory is pleased to present the first of two guest blog posts by Dr. Nir Buras.  In today’s post, he makes the compelling case for the establishment of man-machine rules.  Given the vast technological leaps we’ve made during the past two centuries (with associated societal disruptions), and the potential game changing technological innovations predicted through the middle of this century, we would do well to consider Dr. Buras’ recommended list of nine rules — developed for applicability to all technologies, from humankind’s first Paleolithic hand axe to the future’s much predicted, but long-awaited strong Artificial Intelligence (AI).]

Two hundred years of massive collateral impacts by technology have brought to the forefront of society’s consciousness the idea that some sort of rules for man-machine interaction are necessary, similar to the rules in place for gun safety, nuclear power, and biological agents. But where their physical effects are clear to see, the power of computing is veiled in virtuality and anthropomorphization. It appears harmless, if not familiar, and it often has a virtuous appearance.

Avid mathematician Ada Augusta Lovelace is often called the first computer programmer

Computing originated in the punched cards of Jacquard looms early in the 19th century. Today it carries the promise of a cloud of electrons from which we make our Emperor’s New Clothes. As far back as 1842, the brilliant mathematician Ada Augusta, Countess of Lovelace (1815-1852), foresaw the potential of computers. A protégé and associate of Charles Babbage (1791-1871), conceptual originator of the programmable digital computer, she realized the “almost incalculable” ultimate potential of such engines. She also recognized that, as in all extensions of human power or knowledge, “collateral influences” occur.1

AI presents us with such “collateral influences.”2  The question is not whether machine systems can mimic human abilities and nature, but when. Will the world become dependent on ungoverned algorithms?3  Should there be limits to mankind’s connection to machines? As concerns mount, well-meaning politicians, government officials, and some in the field are trying to forge ethical guidelines to address the collateral challenges of data use, robotics, and AI.4

A Hippocratic Oath of AI?

This cover of Asimov’s I, Robot illustrates the story “Runaround”, the first to list all Three Laws of Robotics.

Asimov’s Three Laws of Robotics are merely a literary ploy to drive his storylines.5 In the real world, Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft founded www.partnershiponai.org6 to ensure “… the safety and trustworthiness of AI technologies, the fairness and transparency of systems.” Data scientists from tech companies, governments, and nonprofits gathered to draft a voluntary digital charter for their profession.7 Oren Etzioni, CEO of the Allen Institute for AI and a professor at the University of Washington’s Computer Science Department, proposed a Hippocratic Oath for AI.

But such codes are composed of hard-to-enforce terms and vague goals, such as using AI “responsibly and ethically, with the aim of reducing bias and discrimination.” They pay lip service to privacy and human priority over machines. They appear to sugarcoat a culture which passes the buck to the lowliest Soldier.8

We know that good intentions are inadequate when enforcing confidentiality. Well-meant but unenforceable ideas don’t meet business standards.  It is unlikely that techies and their bosses, caught up in the magic of coding, will shepherd society through the challenges of the petabyte AI world.9  Vague principles, underwriting a non-binding code, cannot counter the cynical drive for profit.10

Indeed, in an area that lacks authorities or legislation to enforce rules, the Association for Computing Machinery (ACM) is itself backpedaling from its own Code of Ethics and Professional Conduct. Their document weakly defines notions of “public good” and “prioritizing the least advantaged.”11 Microsoft’s President Brad Smith admits that his company wouldn’t expect customers of its services to meet even these standards.

In the wake of the Cambridge Analytica scandal, it is clear that coders are not morally superior to other people and that voluntary, unenforceable Codes and Oaths are inadequate.12  Programming and algorithms clearly reflect ethical, philosophical, and moral positions.13  It is false to assume that the so-called “openness” trait of programmers reflects a broad mindfulness.  There is nothing heroic about “disruption for disruption’s sake” or hiding behind “black box computing.”14  The future cannot be left up to an adolescent-centric culture in an economic system that rests on greed.15  The society that adopts “Electronic personhood” deserves it.

Machines are Machines, People are People

After 200 years of the technology tail wagging the humanity dog, it is apparent now that we are replaying history – and don’t know it. Most human cultures have been intensively engaged with technology since before the Iron Age 3,000 years ago. We have been keenly aware of technology’s collateral effects mostly since the Industrial Revolution, but have not yet created general rules for how we want machines to impact individuals and society. The blurring of reality and virtuality that AI brings to the table might prompt us to do so.

Distinctions between the real and the virtual must be maintained if the behavior of even the most sophisticated computing machines and robots is to be captured by legal systems. Nothing in the virtual world should be considered real, any more than we believe that the hallucinations of a drunk or drugged person are real.

The simplest way to maintain the distinction is remembering that the real IS, and the virtual ISN’T, and that virtual mimesis is produced by machines. Lovelace reminded us that machines are just machines. While in a dark, distant future, giving machines personhood might lead to the collapse of humanity, Harari’s Homo Deus warns us that AI, robotics, and automation are quickly bringing the economic value of humans to zero.16

From the start of civilization, tools and machines have been used to reduce human drudge labor and increase production efficiency. But while tools and machines obviate physical aspects of human work in the context of the production of goods or processing information, they in no way affect the truth of humans as sentient and emotional living beings, nor the value of transactions among them.

Microsoft’s Tay AI Chatter Bot

The man-machine line is further blurred by our anthropomorphizing of machinery, computing, and programming. We speak of machines in terms of human traits and make programming analogous to human behavior. But there is nothing amusing about GIGO experiments like MIT’s psychotic bot Norman or Microsoft’s fascist Tay.17 Technologists who fall into the trap of believing that AI systems can make decisions are like children playing with dolls, marveling that “their dolly is speaking.”

Machines don’t make decisions. Humans do. They may accept suggestions made by machines, and when they do, they are responsible for the decisions made. People are and must be held accountable, especially those hiding behind machines. The Holocaust taught us that one can never say, “I was just following orders.”

Nothing less than enforceable operational rules is required for any technical activity, including programming. This is especially important for tech companies, since evidence suggests that they take ethical questions to heart only under direct threats to their balance sheets.18

When virtuality offers experiences that humans perceive as real, the outcomes are the responsibility of the creators and distributors, no less than tobacco companies selling cigarettes, or pharmaceutical companies and cartels selling addictive drugs. Individuals do not have the right to risk the well-being of others to satisfy their need to comply with clichés such as “innovation” and “disruption.”

Nuclear, chemical, biological, gun, aviation, machine, and automobile safety rules do not rely on human nature. They are based on technical rules and procedures. They are enforceable, and moral responsibility is typically carried by the hierarchies of the organizations involved.19

As we master artificial intelligence, human intelligence must take charge.20 The highest values known to mankind remain human life and the qualities and quantities necessary for the best individual life experience.21 For the transactions and transformations in which technology assists, we need simple operational rules to regulate the actions and manners of individuals. Moving the focus to human interactions empowers individuals and society.

Man-Machine Rules

Man-Machine Rules should address any tool or machine ever made or yet to be made. They would be equally applicable to any technology of any period, from the first flaked stone to the ultimate predictive “emotion machines.” They would be adjudicated by common law.22

1. All material transformations and human transactions are to be conducted by humans.

2. Humans may directly employ hand/desktop/workstation devices in the above.

3. At all times, an individual human is responsible for the activity of any machine or program.

4. Responsibility for errors, omissions, negligence, mischief, or criminal-like activity is shared by every person in the organizational hierarchical chain, from the lowliest coder or operator, to the CEO of the organization, and its last shareholder.

5. Any person can shut off any machine at any time.

6. All computing is visible to anyone [No Black Box] (see the sketch following these rules).

7. Personal Data are things. They belong to the individual who owns them, and any use of them by a third-party requires permission and compensation.

8. Technology must age before common use, until an Appropriate Technology is selected.

9. Disputes must be adjudicated according to Common Law.
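To make the rules less abstract, here is a minimal sketch, in Python, of what Rules 3, 5, and 6 might look like inside a single program. The names here (AccountableMachine, shut_off, audit_log, and the rest) are hypothetical, invented for illustration; a real regime would need enforcement outside the code, not merely within it.

```python
# A minimal sketch, not a real system: one reading of Rules 3, 5, and 6.
# All class, method, and field names are invented for illustration.
import datetime

class AccountableMachine:
    def __init__(self, responsible_human):
        # Rule 3: a named, individual human owns this machine's activity.
        self.responsible_human = responsible_human
        self.enabled = True       # Rule 5: anyone may flip this off.
        self.audit_log = []       # Rule 6: no black box; every step visible.

    def shut_off(self, who):
        """Rule 5: any person can shut off any machine at any time."""
        self.enabled = False
        self._record(who, "SHUTDOWN", None, None)

    def suggest(self, operator, inputs, rule):
        """Machines advise; the human transacts (cf. Rule 1)."""
        if not self.enabled:
            raise RuntimeError("machine is shut off")
        suggestion = rule(inputs)
        self._record(operator, "SUGGESTION", inputs, suggestion)
        return suggestion  # acting on it remains the operator's responsibility

    def _record(self, who, action, inputs, output):
        # Every step is logged, attributable, and open to inspection.
        self.audit_log.append({
            "when": datetime.datetime.utcnow().isoformat(),
            "who": who,
            "responsible": self.responsible_human,
            "action": action,
            "inputs": inputs,
            "output": output,
        })

# Usage: the log, not the machine, answers "who is responsible?"
m = AccountableMachine(responsible_human="J. Smith (and her chain, per Rule 4)")
print(m.suggest("operator-7", [3, 1, 2], rule=sorted))  # [1, 2, 3]
m.shut_off("any bystander")
for entry in m.audit_log:
    print(entry)
```

The point of the sketch is that responsibility and visibility are design decisions: a named human is attached to every action, the log is open to inspection, and the off switch is available to anyone.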

Machines are here to help and advise humans, not replace them, and humans may exhibit a spectrum of responses to them. Some may ignore a robot’s advice and put others at risk. Some may follow recommendations to the point of becoming a zombie. But either way, Man-Machine Rules are based on and meant to support free, individual human choices.

Man-Machine Rules can help organize dialog around questions such as how to secure personal data. Do we need hardcopy and analog formats? How ethical are chips embedded in people and in their belongings? What degrees of personal freedom and personal risk are acceptable, and what controls over them can be contemplated? Will consumer-rights and government organizations audit algorithms?23 Would equipment sabbaticals be enacted to preserve societal and economic balance?

The idea that we can fix the tech world through a voluntary ethical code emerging from that world itself paradoxically expects the people who created the problems to fix them.24 The question is not whether the focus should shift to human interactions, for that shift leaves more humans in touch with their destiny. The questions are: At what cost? If not now, when? If not by us, by whom?

If you enjoyed reading this post, please also see:

Prediction Machines: The Simple Economics of Artificial Intelligence

Artificial Intelligence (AI) Trends

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design, and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort Worth; the Washington DC Dulles Metro line; and work on the US Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacts architecture. After a decade of applying in practice the classical method he learned in post-doctoral studies, Buras wrote *The Art of Classic Planning* (Harvard University Press, 2019), which presents the urban design and planning method of Classic Planning as a path forward for homeostatic, durable urbanism.


1 Lovelace, Ada Augusta, Countess, Sketch of The Analytical Engine Invented by Charles Babbage by L. F. Menabrea of Turin, Officer of the Military Engineers, With notes upon the Memoir by the Translator, Bibliothèque Universelle de Genève, October, 1842, No. 82.

2 Oliveira, Arlindo, in Pereira, Vitor, Hippocratic Oath for Algorithms and Artificial Intelligence, Medium.com (website), 23 August 2018, https://medium.com/predict/hippocratic-oath-for-algorithms-and-artificial-intelligence-5836e14fb540; Middleton, Chris, Make AI developers sign Hippocratic Oath, urges ethics report: Industry backs RSA/YouGov report urging the development of ethical robotics and AI, computing.co.uk (website), 22 September 2017, https://www.computing.co.uk/ctg/news/3017891/make-ai-developers-sign-a-hippocratic-oath-urges-ethics-report; N.A., Do AI programmers need a Hippocratic oath?, Techhq.com (website), 15 August 2018, https://techhq.com/2018/08/do-ai-programmers-need-a-hippocratic-oath/

3 Oliveira, 2018; Dellot, Benedict, A Hippocratic Oath for AI Developers? It May Only Be a Matter of Time, Thersa.org (website), 13 February 2017, https://www.thersa.org/discover/publications-and-articles/rsa-blogs/2017/02/a-hippocratic-oath-for-ai-developers-it-may-only-be-a-matter-of-time; See also: Clifford, Catherine, Expert says graduates in A.I. should take oath: ‘I must not play at God nor let my technology do so’, Cnbc.com (website), 14 March 2018, https://www.cnbc.com/2018/03/14/allen-institute-ceo-says-a-i-graduates-should-take-oath.html; Johnson, Khari, AI Weekly: For the sake of us all, AI practitioners need a Hippocratic oath, Venturebeat.com (website), 23 March 2018, https://venturebeat.com/2018/03/23/ai-weekly-for-the-sake-of-us-all-ai-practitioners-need-a-hippocratic-oath/; Work, Robert O., former deputy secretary of defense, in Metz, Cade, Pentagon Wants Silicon Valley’s Help on A.I., New York Times, 15 March 2018.

4 Schotz, Mai, Should Data Scientists Adhere To A Hippocratic Oath?, Wired.com (website), 8 February 2018, https://www.wired.com/story/should-data-scientists-adhere-to-a-hippocratic-oath/; du Preez, Derek, MPs debate ‘hippocratic oath’ for those working with AI, Government.diginomica.com (website), 19 January 2018, https://government.diginomica.com/2018/01/19/mps-debate-hippocratic-oath-working-ai/

5 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov, Isaac, Runaround, in I, Robot, The Isaac Asimov Collection ed., Doubleday, New York City, p. 40.

6 Middleton, 2017.

7 Etzioni, Oren, A Hippocratic Oath for artificial intelligence practitioners, Techcrunch.com (website), 14 March 2018. https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/?platform=hootsuite

8 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

9 Goodsmith, Dave, quoted in Schotz, 2018.

10 Schotz, 2018.

11 Do AI programmers need a Hippocratic oath?, Techhq, 2018. Wheeler, Schaun, in Schotz, 2018.

12 Gnambs, T., What makes a computer wiz? Linking personality traits and programming aptitude, Journal of Research in Personality, 58, 2015, pp. 31-34.

13 Oliveira, 2018.

14 Jarrett, Christian, The surprising truth about which personality traits do and don’t correlate with computer programming skills, Digest.bps.org.uk (website), British Psychological Society, 26 October 2015, https://digest.bps.org.uk/2015/10/26/the-surprising-truth-about-which-personality-traits-do-and-dont-correlate-with-computer-programming-skills/; Johnson, 2018.

15 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

16 Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. London: Harvill Secker, 2015.

17 That Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study in the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms, is not an excuse. Microsoft’s AI Twitter bot Tay had to be deleted after it started making sexual references and declarations such as “Hitler did nothing wrong.”

18 Schotz, 2018.

19 See the example of Dr. Kerstin Dautenhahn, Research Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, who claims no responsibility in determining the application of the work she creates. She might as well be feeding children shards of glass saying, “It is their choice to eat it or not.” In Middleton, 2017. The principle is that the risk of an unfavorable outcome lies with an individual as well as the entire chain of command, direction, and/or ownership of their organization, including shareholders of public companies and citizens of states. Everybody has responsibility the moment they engage in anything that could affect others. Regulatory “sandboxes” for AI developer experiments – equivalent to pathogen or nuclear labs – should have the same types of controls and restrictions. Dellot, 2017.

20 Oliveira, 2018.

21 Sentience and sensibilities of other beings is recognized here, but not addressed.

22 The proposed rules may be appended to the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1976), part of the International Bill of Human Rights, which includes the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). International Covenant on Economic, Social and Cultural Rights, www.refworld.org; EISIL International Covenant on Economic, Social and Cultural Rights, www.eisil.org; UN Treaty Collection: International Covenant on Economic, Social and Cultural Rights, UN, 3 January 1976; Fact Sheet No.2 (Rev.1), The International Bill of Human Rights, UN OHCHR, June 1996.

23 Dellot, 2017.

24 Schotz, 2018.

105. Emerging Technologies as Threats in Non-Kinetic Engagements

[Editor’s Note:  Mad Scientist Laboratory is pleased to present today’s post by returning guest blogger and proclaimed Mad Scientist Dr. James Giordano and CAPT (USN – Ret.) L. R. Bremseth, identifying the national security challenges presented by emerging technologies, specifically when employed by our strategic competitors and non-state actors alike in non-kinetic engagements.

Dr. Giordano’s and CAPT Bremseth’s post is especially relevant, given the publication earlier this month of TRADOC Pamphlet 525-3-1, U.S. Army in Multi-Domain Operations 2028, and its solution to the “problem of layered standoff,” namely “the rapid and continuous integration of all domains of warfare to deter and prevail as we compete short of armed conflict; penetrate and dis-integrate enemy anti-access and area denial systems; exploit the resulting freedom of maneuver to defeat enemy systems, formations and objectives and to achieve our own strategic objectives; and consolidate gains to force a return to competition on terms more favorable to the U.S., our allies and partners.”]

“Victorious warriors seek to win first then go to war, while defeated warriors go to war first then seek to win.” — Sun Tzu

Non-kinetic Engagements

Political and military actions directed at adversely impacting or defeating an opponent often entail clandestine operations, which can be articulated across a spectrum that ranges from overt warfare to subtle “engagements.” Routinely, the United States, along with its allies (and adversaries), has employed clandestine tactics and operations across the kinetic and non-kinetic domains of warfare. Arguably, clandestine kinetic operations are the more readily employed, as these activities often occur after the initiation of conflict (i.e., “Right of Bang”), and their effects may be observed (to varying degrees) and/or measured. Because clandestine non-kinetic activities are less visible and more insidious, they may be particularly (or more) effective, precisely because they often go unrecognized and occur “Left of Bang.” Other nations, especially adversaries, understand the relative economy of force that non-kinetic engagements enable and increasingly are focused upon developing and articulating advanced methods for such operations.

Much has been written about the fog of war. Non-kinetic engagements can create unique uncertainties prior to and/or outside of traditional warfare, precisely because their boundaries as acts of war are qualitatively and quantitatively “fuzzy.” The “intentionally induced ambiguity” of non-kinetic engagements can establish plus-sum advantages for the executor(s) and zero-sum dilemmas for the target(s). For example, a limited-scale non-kinetic action that exerts demonstrably significant effects but does not meet defined criteria for an act of war places the targeted recipient(s) at a disadvantage: first, in that the criteria for response (and proportionality) are vague, and therefore any response could be seen as questionable; and second, in that if the targeted recipient(s) respond with bellicose action(s), there is considerable likelihood that they may be viewed as (or provoked to be) the aggressor(s) (and therefore be susceptible to some form of retribution that may be regarded as sanctionable).

Nominally, non-kinetic engagements often utilize non-military means to expand the effect-space beyond the conventional battlefield. The Department of Defense and Joint Staff do not have an agreed-upon lexicon to define and express the full spectrum of current and potential activities that constitute non-kinetic engagements. It is unfamiliar, and can be politically uncomfortable, to use non-military terms and means to describe non-kinetic engagements. As previously noted, it can be politically difficult, if not precarious, to militarily define and respond to non-kinetic activities.

Non-kinetic engagements are best employed to incur disruptive effects in and across various dimensions of effect (e.g., biological, psychological, social) that can lead to intermediate- to long-term destructive manifestations (in a number of possible domains, ranging from the economic to the geo-political). These latent disruptive and destructive effects should be framed and regarded as “Grand Strategy” approaches that evoke outcomes in a “long engagement/long war” context rather than merely in short-term tactical situations.1

Thus, non-kinetic operations must be seen and regarded as “tools of mass disruption,” incurring “rippling results” that can evoke both direct and indirect de-stabilizing effects. These effects can occur and spread:  1) from the cellular (e.g., affecting physiological function of a targeted individual) to the socio-political scales (e.g., to manifest effects in response to threats, burdens and harms incurred by individual and/or groups); and 2) from the personal (e.g., affecting a specific individual or particular group of individuals) to the public dimensions in effect and outcome (e.g., by incurring broad scale reactions and responses to key non-kinetic events).2

Given the increasing global stature, capabilities, and postures of Asian nations, it becomes increasingly important to pay attention to aspects of classical Eastern thought (e.g., Sun Tzu) relevant to bellicose engagement. Of equal importance is the recognition of various nations’ dedicated enterprises in developing methods of non-kinetic operations (e.g., China; Russia), and to understand that such endeavors may not comport with the ethical systems, principles, and restrictions adhered to by the United States and its allies.3, 4 These differing ethical standards and practices, if and when coupled to states’ highly centralized abilities to coordinate and to synchronize activity of the so-called “triple helix” of government, academia, and the commercial sector, can create synergistic force-multiplying effects to mobilize resources and services that can be non-kinetically engaged.5 Thus, these states can target and exploit the seams and vulnerabilities in other nations that do not have similarly aligned, multi-domain, coordinating capabilities.

Emerging Technologies – as Threats

Increasingly, emerging technologies are being leveraged as threats in such non-kinetic engagements. While the threats posed by radiological, nuclear, and (high-yield) explosive technologies have been and remain generally well surveilled and controlled to date, new and convergent innovations in the chemical, biological, and cyber sciences and in engineering are yielding tools and methods that are not yet completely or effectively addressed. An overview of these emerging technologies is provided in Table 1 below.

Table 1

Of key interest are the present viability and potential value of the brain sciences in such engagements.6, 7, 8 The brain sciences are yielding new technologies that can be applied to affect chemical and biological systems in kinetic ways (e.g., chemical and biological “warfare,” but in ways that may sidestep definition, and governance, by existing treaties and conventions such as the Biological and Toxin Weapons Convention (BTWC) and the Chemical Weapons Convention (CWC)), and/or in non-kinetic ways (which fall outside of, and therefore are not explicitly constrained by, the scope and auspices of the BTWC or CWC).9, 10

As recent incidents (e.g., “Havana Syndrome”; the use of Novichok; the infiltration of foreign-produced synthetic opioids into US markets) have demonstrated, the brain sciences and technologies have utility to affect “minds and hearts” in (kinetic and non-kinetic) ways that elicit biological, psychological, socio-economic, and political effects which can be clandestine, covert, or attributional, and which evoke multi-dimensional ripple effects in particular contexts (as previously discussed). Moreover, apropos of current events, the use of gene-editing technologies and techniques to modify existing microorganisms,11 and/or to selectively alter human susceptibility to disease,12 reveals the ongoing and iterative multi-national interest in, and considered weaponizable use(s) of, emerging biotechnologies as instruments to incur “precision pathologies” and “immaculate destruction” of selected targets.

Toward Address, Mitigation, and Prevention

Without philosophical understanding of and technical insight into the ways that non-kinetic engagements entail and affect civilian, political, and military domains, the coordinated assessment and response to any such engagement(s) becomes procedurally complicated and politically difficult. Therefore, we advocate and propose increasingly dedicated efforts to enable sustained, successful surveillance, assessment, mitigation, and prevention of the development and use of Emerging Technologies as Threats (ETT) to national security. We posit that implementing these goals will require coordinated focal activities to:  1) increase awareness of emerging technologies that can be utilized as non-kinetic threats; 2) quantify the likelihood and extent of threat(s) posed; 3) counter identified threats; and 4) prevent or delay adversarial development of future threats.

Further, we opine that a coordinated enterprise of this magnitude will necessitate a Whole of Nations approach so as to mobilize the organizations, resources, and personnel required to meet other nations’ synergistic triple helix capabilities to develop and non-kinetically engage ETT.

Utilizing this approach will necessitate establishment of:

1. An office (or network of offices) to coordinate academic and governmental research centers to study and to evaluate current and near-future non-kinetic threats.

2. Methods to qualitatively and quantitatively identify threats and the potential timeline and extent of their development.

3. A variety of means for protecting the United States and allied interests from these emerging threats.

4. Computational approaches to create and to support analytic assessments of threats across a wide range of emerging technologies that are leverageable and afford purchase in non-kinetic engagements (a toy illustration of such an assessment follows this list).
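As a toy illustration of item 4 (and of the quantification called for in item 2), the sketch below (Python) ranks a handful of technologies by a weighted composite of likelihood, extent, and nearness of threat. All names, dimensions, weights, and scores are hypothetical, invented solely to show the shape of such an assessment; a real analytic effort would derive them from intelligence collection and subject-matter expertise.

```python
# A minimal sketch of a computational threat assessment. Every technology,
# dimension, weight, and score below is hypothetical and for illustration only.

# Each candidate technology is scored 0-1 on: likelihood of adversarial
# development, potential extent of effect, and nearness of its timeline.
THREATS = {
    "gene editing":         {"likelihood": 0.7, "extent": 0.9, "nearness": 0.6},
    "neurotechnology":      {"likelihood": 0.5, "extent": 0.8, "nearness": 0.4},
    "synthetic opioids":    {"likelihood": 0.9, "extent": 0.6, "nearness": 0.9},
    "cyber-physical hacks": {"likelihood": 0.8, "extent": 0.7, "nearness": 0.8},
}

# Relative importance assigned to each dimension (must be set by analysts).
WEIGHTS = {"likelihood": 0.4, "extent": 0.4, "nearness": 0.2}

def priority(scores):
    """Weighted composite used to rank surveillance and mitigation effort."""
    return sum(WEIGHTS[dim] * val for dim, val in scores.items())

# Rank highest-priority threats first.
ranked = sorted(THREATS.items(), key=lambda kv: priority(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name:22s} priority={priority(scores):.2f}")
```

The value of even so crude a model is that it forces the weights and scores, and hence the analytic assumptions, into the open where they can be debated and revised.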

In light of other nations’ activities in this domain, we view the non-kinetic deployment of emerging technologies as a clear, present, and viable future threat. Therefore, as we have stated in the past,13, 14, 15 and unapologetically reiterate here, it is not a question of if such methods will be utilized, but rather of when, to what extent, and by which group(s), and most importantly, of whether the United States and its allies will be prepared for these threats when they are rendered.

If you enjoyed reading this post, please also see Dr. Giordano’s presentations addressing:

War and the Human Brain podcast, posted by our colleagues at Modern War Institute on 24 July 2018.

Neurotechnology in National Security and Defense from the Mad Scientist Visioning Multi-Domain Battle in 2030-2050 Conference, co-hosted by Georgetown University in Washington, D.C., on 25-26 July 2017.

Brain Science from Bench to Battlefield: The Realities – and Risks – of Neuroweapons from Lawrence Livermore National Laboratory’s Center for Global Security Research (CGSR), on 12 June 2017.

Mad Scientist James Giordano, PhD, is Professor of Neurology and Biochemistry, Chief of the Neuroethics Studies Program, and Co-Director of the O’Neill-Pellegrino Program in Brain Science and Global Law and Policy at Georgetown University Medical Center. He also currently serves as Senior Biosciences and Biotechnology Advisor for CSCI, Springfield, VA, and has served as Senior Science Advisory Fellow of the Strategic Multilayer Assessment Group of the Joint Staff of the Pentagon.

L. R. Bremseth, CAPT, USN SEAL (Ret.), is Senior Special Operations Forces Advisor for CSCI, Springfield, VA. A 29+ year veteran of the US Navy, he commanded SEAL Team EIGHT, Naval Special Warfare GROUP THREE, and completed numerous overseas assignments. He also served as Deputy Director, Operations Integration Group, for the Department of the Navy.

This blog is adapted with permission from a whitepaper by the authors submitted to the Strategic Multilayer Assessment Group/Joint Staff Pentagon, and from a manuscript currently in review at HDIAC Journal. The opinions expressed in this piece are those of the authors, and do not necessarily reflect those of the United States Department of Defense, and/or the organizations with which the authors are involved. 


1 Davis Z, Nacht M. (Eds.) Strategic Latency: Red, White and Blue: Managing the National and International Security Consequences of Disruptive Technologies. Livermore, CA: Lawrence Livermore Press, 2018.

2 Giordano J. Battlescape brain: Engaging neuroscience in defense operations. HDIAC Journal 3:4: 13-16 (2017).

3 Chen C, Andriola J, Giordano J. Biotechnology, commercial veiling, and implications for strategic latency: The exemplar of neuroscience and neurotechnology research and development in China. In: Davis Z, Nacht M. (Eds.) Strategic Latency: Red, White and Blue: Managing the National and International Security Consequences of Disruptive Technologies. Livermore, CA: Lawrence Livermore Press, 2018.

4 Palchik G, Chen C, Giordano J. Monkey business? Development, influence and ethics of potentially dual-use brain science on the world stage. Neuroethics, 10:1-4 (2017).

5 Etzkowitz H, Leydesdorff L. The dynamics of innovation: From national systems and “Mode 2” to a Triple Helix of university-industry-government relations. Research Policy, 29: 109-123 (2000).

6 Forsythe C, Giordano J. On the need for neurotechnology in the national intelligence and defense agenda: Scope and trajectory. Synesis: A Journal of Science, Technology, Ethics and Policy 2(1): T5-8 (2011).

7 Giordano J. (Ed.) Neurotechnology in National Security and Defense: Technical Considerations, Neuroethical Concerns. Boca Raton: CRC Press (2015).

8 Giordano J. Weaponizing the brain: Neuroscience advancements spark debate. National Defense, 6: 17-19 (2017).

9 DiEuliis D, Giordano J. Why gene editors like CRISPR/Cas may be a game-changer for neuroweapons. Health Security 15(3): 296-302 (2017).

10 Gerstein D, Giordano J. Re-thinking the Biological and Toxin Weapons Convention? Health Security 15(6): 1-4 (2017).

11 DiEuliis D, Giordano J. Gene editing using CRISPR/Cas9: implications for dual-use and biosecurity. Protein and Cell 15: 1-2 (2017).

12 See, for example: https://www.vox.com/science-and-health/2018/11/30/18119589/crispr-technology-he-jiankui (Accessed 2 December 2018).

13 Giordano J, Wurzman R. Neurotechnology as weapons in national intelligence and defense. Synesis: A Journal of Science, Technology, Ethics and Policy 2: 138-151 (2011).

14 Giordano J, Forsythe C, Olds J. Neuroscience, neurotechnology and national security: The need for preparedness and an ethics of responsible action. AJOB-Neuroscience 1(2): 1-3 (2010).

15 Giordano J. The neuroweapons threat. Bulletin of the Atomic Scientists 72(3): 1-4 (2016).

104. Critical Thinking: The Neglected Skill Required to Win Future Conflicts

[Editor’s Note: As addressed in last week’s post, entitled The Human Targeting Solution: An AI Story, the incorporation of Artificial Intelligence (AI) as a warfighting capability has the potential to revolutionize combat, accelerating the future fight to machine speeds.  That said, the advanced algorithms underpinning these AI combat multipliers remain dependent on the accuracy and currency of their data feeds. In the aforementioned post, the protagonist’s challenge in overriding the AI-prescribed optimal (yet flawed) targeting solution illustrates the inherent tension between human critical thinking and the benefits of AI.

Today’s guest blog post, submitted by MAJ Cynthia Dehne, expands upon this theme, addressing human critical thinking as the often neglected, yet essential skill required to successfully integrate and employ emergent technologies while simultaneously understanding their limitations on future battlefields. Warfare will remain an intrinsically human endeavor, the fusion of deliberate and calculating human intellect with ever more lethal technological advances.]

The future character of war will be influenced by emerging technologies such as AI, robotics, computing, and synthetic biology. Cutting-edge technologies will become increasingly cheaper and more readily available, introducing a wider range of actors onto the battlefield. Moreover, nation-state actors are no longer the sole drivers of cutting-edge technology; militaries are leveraging a private sector that leads research and development in emergent technologies. Proliferation of these cheap, accessible technologies will allow both peer competitors and non-state actors to pose serious threats in the future operational environment. Given the abundance of new players on the battlefield and these emerging technologies, future conflicts will be won by those who both possess critical thinking skills and can integrate technology seamlessly to inform decision-making in war, rather than relying on technology to win the war. Achieving success in the future eras of accelerated human progress and contested equality will require the U.S. Army to develop Soldiers who are adept at seamlessly employing technology on the battlefield while continuously exercising critical thinking skills.

The Foundation for Critical Thinking defines critical thinking as “the art of analyzing and evaluating thinking with a view to improving it.”1 Furthermore, it asserts that a well-cultivated critical thinker can do the following: raise vital questions and problems and formulate them clearly and precisely; gather and assess relevant information, using abstract ideas to interpret it effectively; come to well-reasoned conclusions and solutions, testing them against relevant criteria and standards; think open-mindedly within alternative systems of thought, recognizing and assessing, as needed, their assumptions, implications, and practical consequences; and communicate effectively with others in figuring out solutions to complex problems.2

Many experts in education and psychology argue that critical thinking skills are declining. In 2017, Dr. Stephen Camarata wrote about the emerging crisis in critical thinking and college students’ struggles to tackle real-world problem solving. He emphasized the essential need for critical thinking and asserted that “a young adult whose brain has been ‘wired’ to be innovative, think critically, and problem solve is at a tremendous competitive advantage in today’s increasingly complex and competitive world.”3 Although most government agencies, policy makers, and businesses deem critical thinking important, STEM fields continue to be prioritized. However, if creative thinking skills are not fused with STEM, there will continue to be a decline in those equipped with well-rounded critical thinking abilities. In 2017, Mark Cuban opined during an interview with Bloomberg TV that the nature of work is changing and that the future skill most in demand will be “creative thinking.” Specifically, he stated, “I personally think there’s going to be a greater demand in 10 years for liberal arts majors than there were for programming majors and maybe even engineering.”4 Additionally, Forbes magazine published an article in 2018 declaring that “creativity is the skill of the future.”5

Employing future technologies effectively will be key to winning wars, but it is only one aspect. During the Vietnam War, the U.S. relied heavily on technology but was defeated by an enemy who leveraged simple guerrilla tactics combined with minimal military technology. Emerging technologies will be vital to informing decision-making, but will not negate battlefield friction. Carl von Clausewitz observed that although everything in war is simple, the simplest things become difficult, and these difficulties accumulate to create friction.6 Historically, a lack of information caused friction and uncertainty. In current warfare, however, complexity is a driver of friction, and it will heavily influence future warfare. Complex, high-tech weapon systems will dominate the future battlefield and create added friction. Interdependent systems linking communications and warfighting functions will introduce still more friction, which will require highly skilled thinkers to navigate.

The newly published U.S. Army in Multi-Domain Operations 2028 concept “describes how Army forces fight across all domains, the electromagnetic spectrum (EMS), and the information environment and at echelon”7 to “enable the Joint Force to compete with China and Russia below armed conflict, penetrate and dis-integrate their anti-access and area denial systems and ultimately defeat them in armed conflict and consolidate gains, and then return to competition.”8 Even with technological advances and improved intelligence, elements of friction will be present in future wars. Both great armies and asymmetric threats have vulnerabilities: small elements of friction can morph into larger issues capable of crippling a fighting force. Therefore, success in future war depends on military commanders who understand these elements and how to overcome friction. Future technologies must be fused with critical thinking to mitigate friction and achieve strategic success. The U.S. Army must simultaneously emphasize the integration of critical thinking into doctrine and exercises when training Soldiers on new technologies.

Soldiers should be creative, innovative thinkers; the Army must foster critical thinking as an essential skill.  Insight Assessment emphasizes that “weakness in critical thinking skill results in loss of opportunities, of financial resources, of relationships, and even loss of life. There is probably no other attribute more worthy of measure than critical thinking skills.”9 Gaining and maintaining competitive advantage over adversaries in a complex, fluid future operational environment requires Soldiers to be both skilled in technology and expert in critical thinking.

If you enjoyed this post, please also see:

Mr. Chris Taylor’s presentation on Problem Solving in the Wild, from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018;

and the following Mad Scientist Laboratory blog posts:

TRADOC 2028

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

MAJ Cynthia Dehne is in the U.S. Army Reserve, assigned to the TRADOC G-2, and has operational experience in Afghanistan, Iraq, Kuwait, and Qatar. She is a graduate of the U.S. Army Command and General Staff College and holds master’s degrees in International Relations and in Diplomacy and International Commerce.


1 Paul, Richard, and Elder, Linda. Critical Thinking Concepts and Tools. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

2 Paul, Richard, and Elder, Linda. Critical Thinking Concepts and Tools. Dillon Beach, CA: Foundation for Critical Thinking, 2016, p. 2.

3 Camarata, Stephen. “The Emerging Crisis in Critical Thinking.” Psychology Today, March 21, 2017. Accessed October 10, 2018, from https://www.psychologytoday.com/us/blog/the-intuitive-parent/201703/the-emerging-crisis-in-critical-thinking.

4 Wile, Rob. “Mark Cuban Says This Will Be the No.1 Job Skill in 10 Years.” Time, February 20, 2017. Accessed October 11, 2018. http://time.com/money/4676298/mark-cuban-best-job-skill/.

5 Powers, Anna. “Creativity Is The Skill Of The Future.” Forbes, April 30, 2018. Accessed October 14, 2018. https://www.forbes.com/sites/annapowers/2018/04/30/creativity-is-the-skill-of-the-future/#3dd533f04fd4.

6 Clausewitz, Carl von, Michael Howard, Peter Paret, and Bernard Brodie. On War. Princeton, N.J.: Princeton University Press, 1984, p. 119.

7 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 5.

8 U.S. Army. The U.S. Army in Multi-Domain Operations 2028, Department of the Army. TRADOC Pamphlet 525-3-1, December 6, 2018, p. 15.

9 Insight Assessment. “Risks Associated with Weak Critical Thinkers.” Insight Assessment, 2018. Accessed October 22, 2018, from https://www.insightassessment.com/Uses/Risks-Associated-with-Weak-Critical-Thinkers.