106. Man-Machine Rules

[Editor’s Note:  Mad Scientist Laboratory is pleased to present the first of two guest blog posts by Dr. Nir Buras.  In today’s post, he makes the compelling case for the establishment of man-machine rules.  Given the vast technological leaps we’ve made during the past two centuries (with associated societal disruptions), and the potential game changing technological innovations predicted through the middle of this century, we would do well to consider Dr. Buras’ recommended list of nine rules — developed for applicability to all technologies, from humankind’s first Paleolithic hand axe to the future’s much predicted, but long-awaited strong Artificial Intelligence (AI).]

Two hundred years of massive collateral impacts by technology have brought to the forefront of society’s consciousness the idea that some sort of rules for man-machine interaction are necessary, similar to the rules in place for gun safety, nuclear power, and biological agents. But whereas the physical effects of those technologies are plain to see, the power of computing is veiled in virtuality and anthropomorphization. It appears harmless, even familiar, and it often wears a virtuous appearance.

Avid mathematician Ada Augusta Lovelace is often called the first computer programmer

Computing originated in the punched cards of Jacquard looms early in the 19th century. Today it carries the promise of a cloud of electrons from which we make our Emperor’s New Clothes. As far back as 1842, the brilliant mathematician Ada Augusta, Countess of Lovelace (1815-1852), foresaw the potential of computers. A protégée and associate of Charles Babbage (1791-1871), conceptual originator of the programmable digital computer, she realized the “almost incalculable” ultimate potential of his Analytical Engine. She also recognized that, as in all extensions of human power or knowledge, “collateral influences” occur.1

AI presents us with such “collateral influences.”2  The question is not whether machine systems can mimic human abilities and nature, but when. Will the world become dependent on ungoverned algorithms?3  Should there be limits to mankind’s connection to machines? As concerns mount, well-meaning politicians, government officials, and some in the field are trying to forge ethical guidelines to address the collateral challenges of data use, robotics, and AI.4

A Hippocratic Oath for AI?

This cover of Asimov’s I, Robot illustrates the story “Runaround”, the first to list all Three Laws of Robotics.

Asimov’s Three Laws of Robotics are merely a literary device to drive his storylines.5  In the real world, Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft founded www.partnershiponai.org6 to ensure “… the safety and trustworthiness of AI technologies, the fairness and transparency of systems.” Data scientists from tech companies, governments, and nonprofits gathered to draft a voluntary digital charter for their profession.7  Oren Etzioni, CEO of the Allen Institute for AI and a professor at the University of Washington’s Computer Science Department, proposed a Hippocratic Oath for AI.

But such codes are composed of hard-to-enforce terms and vague goals, such as using AI “responsibly and ethically, with the aim of reducing bias and discrimination.” They pay lip service to privacy and human priority over machines. They appear to sugarcoat a culture which passes the buck to the lowliest Soldier.8

We know that good intentions are inadequate when enforcing confidentiality. Well-meant but unenforceable ideas don’t meet business standards.  It is unlikely that techies and their bosses, caught up in the magic of coding, will shepherd society through the challenges of the petabyte AI world.9  Vague principles, underwriting a non-binding code, cannot counter the cynical drive for profit.10

Indeed, in an area that lacks authorities or legislation to enforce rules, the Association for Computing Machinery (ACM) is itself backpedaling from its own Code of Ethics and Professional Conduct. Its document weakly defines notions of “public good” and “prioritizing the least advantaged.”11 Microsoft’s President Brad Smith admits that his company wouldn’t expect customers of its services to meet even these standards.

In the wake of the Cambridge Analytica scandal, it is clear that coders are not morally superior to other people and that voluntary, unenforceable Codes and Oaths are inadequate.12  Programming and algorithms clearly reflect ethical, philosophical, and moral positions.13  It is false to assume that the so-called “openness” trait of programmers reflects a broad mindfulness.  There is nothing heroic about “disruption for disruption’s sake” or hiding behind “black box computing.”14  The future cannot be left up to an adolescent-centric culture in an economic system that rests on greed.15  The society that adopts “Electronic personhood” deserves it.

Machines are Machines, People are People

After 200 years of the technology tail wagging the humanity dog, it is apparent now that we are replaying history – and don’t know it. Most human cultures have been intensively engaged with technology since before the Iron Age 3,000 years ago. We have been keenly aware of technology’s collateral effects mostly since the Industrial Revolution, but have not yet created general rules for how we want machines to impact individuals and society. The blurring of reality and virtuality that AI brings to the table might prompt us to do so.

Distinctions between the real and the virtual must be maintained if the behavior of even the most sophisticated computing machines and robots is to be captured by legal systems. Nothing in the virtual world should be considered real, any more than we believe that the hallucinations of a drunk or drugged person are real.

The simplest way to maintain the distinction is remembering that the real IS, and the virtual ISN’T, and that virtual mimesis is produced by machines. Lovelace reminded us that machines are just machines. Giving machines personhood might, in some dark and distant future, lead to the collapse of humanity; more immediately, Harari’s Homo Deus warns us that AI, robotics, and automation are quickly bringing the economic value of humans to zero.16

From the start of civilization, tools and machines have been used to reduce human drudge labor and increase production efficiency. But while tools and machines obviate physical aspects of human work in the production of goods or the processing of information, they in no way affect the standing of humans as sentient and emotional living beings, nor the value of transactions among them.

Microsoft’s Tay AI Chatter Bot

The man-machine line is further blurred by our anthropomorphizing of machinery, computing, and programming. We speak of machines in terms of human traits and make programming analogous to human behavior. But there is nothing amusing about GIGO experiments like MIT’s psychotic bot Norman or Microsoft’s fascist Tay.17 Technologists who fall into the trap of believing that AI systems can make decisions are like children playing with dolls, marveling that “their dolly is speaking.”

Machines don’t make decisions. Humans do. They may accept suggestions made by machines, and when they do, they are responsible for the decisions made. People are and must be held accountable, especially those hiding behind machines. The Holocaust taught us that one can never say, “I was just following orders.”

Nothing less than enforceable operational rules is required for any technical activity, including programming. Such rules are especially important for tech companies, since evidence suggests that they take ethical questions to heart only under direct threats to their balance sheets.18

When virtuality offers experiences that humans perceive as real, the outcomes are the responsibility of the creators and distributors, no less than tobacco companies selling cigarettes, or pharmaceutical companies and cartels selling addictive drugs. Individuals do not have the right to risk the well-being of others merely to comply with clichés such as “innovation” and “disruption.”

Nuclear, chemical, biological, gun, aviation, machine, and automobile safety rules do not rely on human nature. They are based on technical rules and procedures. They are enforceable, and moral responsibility is typically carried by the hierarchies of their organizations.19

As we master artificial intelligence, human intelligence must take charge.20 The highest values known to mankind remain human life and the qualities and quantities necessary for the best individual life experience.21 For the transactions and transformations in which technology assists, we need simple operational rules to regulate the actions and manners of individuals. Moving the focus to human interactions empowers individuals and society.

Man-Machine Rules

Man-Machine rules should address any tool or machine ever made or to be made. They would be equally applicable to any technology of any period, from the first flaked stone, to the ultimate predictive “emotion machines.” They would be adjudicated by common law.22

1. All material transformations and human transactions are to be conducted by humans.

2. Humans may directly employ hand/desktop/workstation devices in the above.

3. At all times, an individual human is responsible for the activity of any machine or program.

4. Responsibility for errors, omissions, negligence, mischief, or criminal-like activity is shared by every person in the organizational hierarchical chain, from the lowliest coder or operator, to the CEO of the organization, and its last shareholder.

5. Any person can shut off any machine at any time.

6. All computing is visible to anyone [No Black Box].

7. Personal Data are things. They belong to the individual who owns them, and any use of them by a third-party requires permission and compensation.

8. Technology must age before common use, until an Appropriate Technology is selected.

9. Disputes must be adjudicated according to Common Law.
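
Several of these rules are operational enough to be rendered directly in software. The sketch below illustrates how Rules 3, 5, and 6 might look in code; the class and method names are hypothetical, offered only to show that the rules are implementable, not as a proposed standard.

```python
# Hypothetical sketch: every machine action is attributed to a responsible
# human (Rule 3), any person can shut the machine off (Rule 5), and all
# activity is visibly logged, i.e., no black box (Rule 6).
class AccountableMachine:
    def __init__(self, responsible_human):
        self.responsible_human = responsible_human  # Rule 3: a named person
        self.audit_log = []                         # Rule 6: visible record
        self.running = True

    def act(self, action):
        if not self.running:
            raise RuntimeError("machine has been shut off")
        # Rule 6: record, visibly, who is accountable for which action.
        self.audit_log.append((self.responsible_human, action))
        return action

    def shut_off(self, person):
        # Rule 5: any person can shut off any machine at any time.
        self.running = False
        self.audit_log.append((person, "SHUTDOWN"))

# Toy usage:
machine = AccountableMachine(responsible_human="J. Doe, operator")
machine.act("nominate target for human review")
machine.shut_off(person="any bystander")
```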

Machines are here to help and advise humans, not replace them, and humans may exhibit a spectrum of responses to them. Some may ignore a robot’s advice and put others at risk. Some may follow recommendations to the point of becoming a zombie. But either way, Man-Machine Rules are based on and meant to support free, individual human choices.

Man-Machine Rules can help organize dialog around questions such as how to secure personal data. Do we need hardcopy and analog formats? How ethical are chips embedded in people and in their belongings? What degrees of personal freedom and personal risk are acceptable, and what controls are conceivable? Will consumer rights and government organizations audit algorithms?23 Should equipment sabbaticals be enacted to preserve societal and economic balance?

The idea that we can fix the tech world through a voluntary ethical code emergent from itself paradoxically expects that the people who created the problems will fix them.24 The question is not whether the focus should shift to human interactions, leaving more humans in touch with their destiny; it should. The questions are: at what cost? If not now, when? If not by us, by whom?

If you enjoyed reading this post, please also see:

Prediction Machines: The Simple Economics of Artificial Intelligence

Artificial Intelligence (AI) Trends

Making the Future More Personal: The Oft-Forgotten Human Driver in Future’s Analysis

Nir Buras is a PhD architect and planner with over 30 years of in-depth experience in strategic planning, architecture, and transportation design, as well as teaching and lecturing. His planning, design, and construction experience includes East Side Access at Grand Central Terminal, New York; International Terminal D, Dallas-Fort Worth; the Washington, DC Dulles Metro line; and work on the US Capitol and the Senate and House Office Buildings in Washington. Projects he has worked on have been published in the New York Times, the Washington Post, local newspapers, and trade magazines. Buras, whose original degree was in architecture and town planning, learned his first lesson in urbanism while planning military bases in the Negev Desert in Israel. Engaged in numerous projects since then, Buras has watched first-hand how urban planning impacts architecture. After a decade of applying in practice the classical method he learned in post-doctoral studies, he wrote The Art of Classic Planning (Harvard University Press, 2019), which presents the urban design and planning method of Classic Planning as a path forward for homeostatic, durable urbanism.


1 Lovelace, Ada Augusta, Countess, Sketch of The Analytical Engine Invented by Charles Babbage by L. F. Menabrea of Turin, Officer of the Military Engineers, With notes upon the Memoir by the Translator, Bibliothèque Universelle de Genève, October, 1842, No. 82.

2 Oliveira, Arlindo, in Pereira, Vitor, Hippocratic Oath for Algorithms and Artificial Intelligence, Medium.com (website), 23 August 2018, https://medium.com/predict/hippocratic-oath-for-algorithms-and-artificial-intelligence-5836e14fb540; Middleton, Chris, Make AI developers sign Hippocratic Oath, urges ethics report: Industry backs RSA/YouGov report urging the development of ethical robotics and AI, computing.co.uk (website), 22 September 2017, https://www.computing.co.uk/ctg/news/3017891/make-ai-developers-sign-a-hippocratic-oath-urges-ethics-report; N.A., Do AI programmers need a Hippocratic oath?, Techhq.com (website), 15 August 2018, https://techhq.com/2018/08/do-ai-programmers-need-a-hippocratic-oath/

3 Oliveira, 2018; Dellot, Benedict, A Hippocratic Oath for AI Developers? It May Only Be a Matter of Time, Thersa.org (website), 13 February 2017, https://www.thersa.org/discover/publications-and-articles/rsa-blogs/2017/02/a-hippocratic-oath-for-ai-developers-it-may-only-be-a-matter-of-time; See also: Clifford, Catherine, Expert says graduates in A.I. should take oath: ‘I must not play at God nor let my technology do so’, Cnbc.com (website), 14 March 2018, https://www.cnbc.com/2018/03/14/allen-institute-ceo-says-a-i-graduates-should-take-oath.html; Johnson, Khari, AI Weekly: For the sake of us all, AI practitioners need a Hippocratic oath, Venturebeat.com (website), 23 March 2018, https://venturebeat.com/2018/03/23/ai-weekly-for-the-sake-of-us-all-ai-practitioners-need-a-hippocratic-oath/; Work, Robert O., former deputy secretary of defense, in Metz, Cade, Pentagon Wants Silicon Valley’s Help on A.I., New York Times, 15 March 2018.

4 Schotz, Mai, Should Data Scientists Adhere To A Hippocratic Oath?, Wired.com (website), 8 February 2018, https://www.wired.com/story/should-data-scientists-adhere-to-a-hippocratic-oath/; du Preez, Derek, MPs debate ‘hippocratic oath’ for those working with AI, Government.diginomica.com (website), 19 January 2018, https://government.diginomica.com/2018/01/19/mps-debate-hippocratic-oath-working-ai/

5 1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. Asimov, Isaac, Runaround, in I, Robot, The Isaac Asimov Collection ed., Doubleday, New York City, p. 40.

6 Middleton, 2017.

7 Etzioni, Oren, A Hippocratic Oath for artificial intelligence practitioners, Techcrunch.com (website), 14 March 2018. https://techcrunch.com/2018/03/14/a-hippocratic-oath-for-artificial-intelligence-practitioners/?platform=hootsuite

8 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

9 Goodsmith, Dave, quoted in Schotz, 2018.

10 Schotz, 2018.

11 Do AI programmers need a Hippocratic oath?, Techhq, 2018. Wheeler, Schaun, in Schotz, 2018.

12 Gnambs, T., What makes a computer wiz? Linking personality traits and programming aptitude, Journal of Research in Personality, 58, 2015, pp. 31-34.

13 Oliveira, 2018.

14 Jarrett, Christian, The surprising truth about which personality traits do and don’t correlate with computer programming skills, Digest.bps.org.uk (website), British Psychological Society, 26 October 2015, https://digest.bps.org.uk/2015/10/26/the-surprising-truth-about-which-personality-traits-do-and-dont-correlate-with-computer-programming-skills/; Johnson, 2018.

15 Do AI programmers need a Hippocratic oath?, Techhq, 2018.

16 Harari, Yuval N. Homo Deus: A Brief History of Tomorrow. London: Harvill Secker, 2015.

17 That Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms, is not an excuse. The AI Twitter bot Tay had to be deleted after it started making sexual references and declarations such as “Hitler did nothing wrong.”

18 Schotz, 2018.

19 See the example of Dr. Kerstin Dautenhahn, Research Professor of Artificial Intelligence in the School of Computer Science at the University of Hertfordshire, who claims no responsibility in determining the application of the work she creates. She might as well be feeding children shards of glass saying, “It is their choice to eat it or not.” In Middleton, 2017. The principle is that the risk of an unfavorable outcome lies with an individual as well as the entire chain of command, direction, and or ownership of their organization, including shareholders of public companies and citizens of states. Everybody has responsibility the moment they engage in anything that could affect others. Regulatory “sandboxes” for AI developer experiments – equivalent to pathogen or nuclear labs – should have the same types of controls and restrictions. Dellot, 2017.

20 Oliveira, 2018.

21 Sentience and sensibilities of other beings is recognized here, but not addressed.

22 The proposed rules may be appended to the International Covenant on Economic, Social and Cultural Rights (ICESCR, 1976), part of the International Bill of Human Rights, which include the Universal Declaration of Human Rights (UDHR) and the International Covenant on Civil and Political Rights (ICCPR). International Covenant on Economic, Social and Cultural Rights, www.refworld.org.; EISIL International Covenant on Economic, Social and Cultural Rights, www.eisil.org; UN Treaty Collection: International Covenant on Economic, Social and Cultural Rights, UN. 3 January 1976; Fact Sheet No.2 (Rev.1), The International Bill of Human Rights, UN OHCHR. June 1996.

23 Dellot, 2017.

24 Schotz, 2018.

105. Emerging Technologies as Threats in Non-Kinetic Engagements

[Editor’s Note:  Mad Scientist Laboratory is pleased to present today’s post by returning guest blogger and proclaimed Mad Scientist Dr. James Giordano and CAPT (USN – Ret.) L. R. Bremseth, identifying the national security challenges presented by emerging technologies, specifically when employed by our strategic competitors and non-state actors alike in non-kinetic engagements.

Dr. Giordano’s and CAPT Bremseth’s post is especially relevant, given the publication earlier this month of TRADOC Pamphlet 525-3-1, U.S. Army in Multi-Domain Operations 2028, and its solution to the “problem of layered standoff,” namely “the rapid and continuous integration of all domains of warfare to deter and prevail as we compete short of armed conflict; penetrate and dis-integrate enemy anti-access and area denial systems; exploit the resulting freedom of maneuver to defeat enemy systems, formations and objectives and to achieve our own strategic objectives; and consolidate gains to force a return to competition on terms more favorable to the U.S., our allies and partners.”]

“Victorious warriors seek to win first then go to war, while defeated warriors go to war first then seek to win.” — Sun Tzu

Non-kinetic Engagements

Political and military actions directed at adversely impacting or defeating an opponent often entail clandestine operations, which can be articulated across a spectrum that ranges from overt warfare to subtle “engagements.” Routinely, the United States, along with its allies (and adversaries), has employed clandestine tactics and operations across the kinetic and non-kinetic domains of warfare. Arguably, clandestine kinetic operations are employed more readily, as these collective activities often occur after the initiation of conflict (i.e., “Right of Bang”), and their effects may be observed (to various degrees) and/or measured. Because clandestine non-kinetic activities are less visible and more insidious, they may be particularly (or more) effective: often they go unrecognized and occur “Left of Bang.” Other nations, especially adversaries, understand the relative economy of force that non-kinetic engagements enable and increasingly are focused upon developing and articulating advanced methods for such operations.

Much has been written about the fog of war. Non-kinetic engagements can create unique uncertainties prior to and/or outside of traditional warfare, precisely because their boundaries as acts of war are qualitatively and quantitatively “fuzzy.” The “intentionally induced ambiguity” of non-kinetic engagements can establish plus-sum advantages for the executor(s) and zero-sum dilemmas for the target(s). For example, a limited-scale non-kinetic action, which exerts demonstrably significant effects but does not meet defined criteria for an act of war, places the targeted recipient(s) at a disadvantage:  first, the criteria for response (and proportionality) are vague, and therefore any response could be seen as questionable; and second, if the targeted recipient(s) respond with bellicose action(s), there is considerable likelihood that they may be viewed as (or provoked to be) the aggressor(s) (and therefore susceptible to some form of retribution that may be regarded as sanctionable).

Nominally, non-kinetic engagements often utilize non-military means to expand the effect-space beyond the conventional battlefield. The Department of Defense and Joint Staff do not have an agreed-upon lexicon to define and to express the full spectrum of current and potential activities that constitute non-kinetic engagements. It is unfamiliar – and can be politically uncomfortable – to use non-military terms and means to describe non-kinetic engagements. As previously noted, it can be politically difficult – if not precarious – to militarily define and respond to non-kinetic activities.

Non-kinetic engagements are best employed to incur disruptive effects in and across various dimensions (e.g., biological, psychological, social) that can lead to intermediate- to long-term destructive manifestations (in a number of possible domains, ranging from the economic to the geo-political). The latent disruptive and destructive effects should be framed and regarded as “Grand Strategy” approaches that evoke outcomes in a “long engagement/long war” context rather than merely in more short-term tactical situations.1

Thus, non-kinetic operations must be seen and regarded as “tools of mass disruption,” incurring “rippling results” that can evoke both direct and indirect de-stabilizing effects. These effects can occur and spread:  1) from the cellular (e.g., affecting physiological function of a targeted individual) to the socio-political scales (e.g., to manifest effects in response to threats, burdens and harms incurred by individual and/or groups); and 2) from the personal (e.g., affecting a specific individual or particular group of individuals) to the public dimensions in effect and outcome (e.g., by incurring broad scale reactions and responses to key non-kinetic events).2

Given the increasing global stature, capabilities, and postures of Asian nations, it becomes increasingly important to pay attention to aspects of classical Eastern thought (e.g., Sun Tzu) relevant to bellicose engagement. Of equal importance is the recognition of various nations’ dedicated enterprises in developing methods of non-kinetic operations (e.g., China; Russia), and to understand that such endeavors may not comport with the ethical systems, principles, and restrictions adhered to by the United States and its allies.3, 4 These differing ethical standards and practices, if and when coupled to states’ highly centralized abilities to coordinate and to synchronize activity of the so-called “triple helix” of government, academia, and the commercial sector, can create synergistic force-multiplying effects to mobilize resources and services that can be non-kinetically engaged.5 Thus, these states can target and exploit the seams and vulnerabilities in other nations that do not have similarly aligned, multi-domain, coordinating capabilities.

Emerging Technologies – as Threats

Increasingly, emerging technologies are being leveraged as threats for such non-kinetic engagements. While the threat of radiological, nuclear, and (high-yield) explosive technologies has been and remains generally well surveilled and controlled to date, new and convergent innovations in the chemical, biological, and cyber sciences and in engineering are yielding tools and methods that currently are not completely or effectively addressed. An overview of these emerging technologies is provided in Table 1 below.

Table 1

Of key interest are the present viability and potential value of the brain sciences to be engaged in these ways.6, 7, 8 The brain sciences entail and obtain new technologies that can be applied to affect chemical and biological systems in both kinetic ways (e.g., chemical and biological ‘warfare’ conducted so as to sidestep definition – and governance – by existing treaties and conventions such as the Biological and Toxin Weapons Convention (BTWC) and the Chemical Weapons Convention (CWC)) and non-kinetic ways (which fall outside of, and therefore are not explicitly constrained by, the scope and auspices of the BTWC or CWC).9, 10

As recent incidents (e.g., “Havana Syndrome”; the use of Novichok; the infiltration of foreign-produced synthetic opioids into US markets) have demonstrated, the brain sciences and technologies have utility to affect “minds and hearts” in (kinetic and non-kinetic) ways that elicit biological, psychological, socio-economic, and political effects which can be clandestine, covert, or attributional, and which evoke multi-dimensional ripple effects in particular contexts (as previously discussed). Moreover, apropos current events, the use of gene editing technologies and techniques to modify existing microorganisms,11 and/or selectively alter human susceptibility to disease,12 reveals the ongoing and iterative multi-national interest in, and considered weaponizable use(s) of, emerging biotechnologies as instruments to incur “precision pathologies” and “immaculate destruction” of selected targets.

Toward Address, Mitigation, and Prevention

Without philosophical understanding of and technical insight into the ways that non-kinetic engagements entail and affect civilian, political, and military domains, the coordinated assessment and response to any such engagement(s) becomes procedurally complicated and politically difficult. Therefore, we advocate and propose increasingly dedicated efforts to enable sustained, successful surveillance, assessment, mitigation, and prevention of the development and use of Emerging Technologies as Threats (ETT) to national security. We posit that implementing these goals will require coordinated focal activities to:  1) increase awareness of emerging technologies that can be utilized as non-kinetic threats; 2) quantify the likelihood and extent of threat(s) posed; 3) counter identified threats; and 4) prevent or delay adversarial development of future threats.

Further, we opine that a coordinated enterprise of this magnitude will necessitate a Whole of Nations approach so as to mobilize the organizations, resources, and personnel required to meet other nations’ synergistic triple helix capabilities to develop and non-kinetically engage ETT.

Utilizing this approach will necessitate establishment of:

1. An office (or network of offices) to coordinate academic and governmental research centers to study and to evaluate current and near-future non-kinetic threats.

2. Methods to qualitatively and quantitatively identify threats and the potential timeline and extent of their development.

3. A variety of means for protecting the United States and allied interests from these emerging threats.

4. Computational approaches to create and to support analytic assessments of threats across a wide range of emerging technologies that are leverageable and afford purchase in non-kinetic engagements.

In light of other nations’ activities in this domain, we view the non-kinetic deployment of emerging technologies as a clear, present, and viable future threat. Therefore, as we have stated in the past,13, 14, 15 and unapologetically re-iterate here, it is not a question of if such methods will be utilized, but rather questions of when, to what extent, and by which group(s), and, most importantly, whether the United States and its allies will be prepared for these threats when they are rendered.

If you enjoyed reading this post, please also see Dr. Giordano’s presentations addressing:

War and the Human Brain podcast, posted by our colleagues at Modern War Institute on 24 July 2018.

Neurotechnology in National Security and Defense from the Mad Scientist Visioning Multi-Domain Battle in 2030-2050 Conference, co-hosted by Georgetown University in Washington, D.C., on 25-26 July 2017.

Brain Science from Bench to Battlefield: The Realities – and Risks – of Neuroweapons from Lawrence Livermore National Laboratory’s Center for Global Security Research (CGSR), on 12 June 2017.

Mad Scientist James Giordano, PhD, is Professor of Neurology and Biochemistry, Chief of the Neuroethics Studies Program, and Co-Director of the O’Neill-Pellegrino Program in Brain Science and Global Law and Policy at Georgetown University Medical Center. He also currently serves as Senior Biosciences and Biotechnology Advisor for CSCI, Springfield, VA, and has served as Senior Science Advisory Fellow of the Strategic Multilayer Assessment Group of the Joint Staff of the Pentagon.

R. Bremseth, CAPT, USN SEAL (Ret.), is Senior Special Operations Forces Advisor for CSCI, Springfield, VA. A 29+ year veteran of the US Navy, he commanded SEAL Team EIGHT, Naval Special Warfare GROUP THREE, and completed numerous overseas assignments. He also served as Deputy Director, Operations Integration Group, for the Department of the Navy.

This blog is adapted with permission from a whitepaper by the authors submitted to the Strategic Multilayer Assessment Group/Joint Staff Pentagon, and from a manuscript currently in review at HDIAC Journal. The opinions expressed in this piece are those of the authors, and do not necessarily reflect those of the United States Department of Defense, and/or the organizations with which the authors are involved. 


1 Davis Z, Nacht M. (Eds.) Strategic Latency- Red, White and Blue: Managing the National and international Security Consequences of Disruptive Technologies. Livermore CA: Lawrence Livermore Press, 2018.

2 Giordano J. Battlescape brain: Engaging neuroscience in defense operations. HDIAC Journal 3:4: 13-16 (2017).

3 Chen C, Andriola J, Giordano J. Biotechnology, commercial veiling, and implications for strategic latency: The exemplar of neuroscience and neurotechnology research and development in China. In: Davis Z, Nacht M. (Eds.) Strategic Latency- Red, White and Blue: Managing the National and international Security Consequences of Disruptive Technologies. Livermore CA: Lawrence Livermore Press, 2018.

4 Palchik G, Chen C, Giordano J. Monkey business? Development, influence and ethics of potentially dual-use brain science on the world stage. Neuroethics, 10:1-4 (2017).

5 Etzkowitz H, Leydesdorff L. The dynamics of innovation: From national systems and “Mode 2” to a Triple Helix of university-industry-government relations. Research Policy, 29: 109-123 (2000).

6 Forsythe C, Giordano J. On the need for neurotechnology in the national intelligence and defense agenda: Scope and trajectory. Synesis: A Journal of Science, Technology, Ethics and Policy 2(1): T5-8 (2011).

7 Giordano J. (Ed.) Neurotechnology in National Security and Defense: Technical Considerations, Neuroethical Concerns. Boca Raton: CRC Press (2015).

8 Giordano J. Weaponizing the brain: Neuroscience advancements spark debate. National Defense, 6: 17-19 (2017).

9 DiEuliis D, Giordano J. Why gene editors like CRISPR/Cas may be a game-changer for neuroweapons. Health Security 15(3): 296-302 (2017).

10 Gerstein D, Giordano J. Re-thinking the Biological and Toxin Weapons Convention? Health Security 15(6): 1-4 (2017).

11 DiEuliis D, Giordano J. Gene editing using CRISPR/Cas9: implications for dual-use and biosecurity. Protein and Cell 15: 1-2 (2017).

12 See, for example: https://www.vox.com/science-and-health/2018/11/30/18119589/crispr-technology-he-jiankui (accessed 2 December 2018).

13 Giordano J, Wurzman R. Neurotechnology as weapons in national intelligence and defense. Synesis: A Journal of Science, Technology, Ethics and Policy 2: 138-151 (2011).

14 Giordano J, Forsythe C, Olds J. Neuroscience, neurotechnology and national security: The need for preparedness and an ethics of responsible action. AJOB-Neuroscience 1(2): 1-3 (2010).

15 Giordano J. The neuroweapons threat. Bulletin of the Atomic Scientists 72(3): 1-4 (2016).

102. The Human Targeting Solution: An AI Story

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger CW3 Jesse R. Crifasi, envisioning a combat scenario in the not too distant future, teeing up the twin challenges facing the U.S Army in incorporating Artificial Intelligence (AI) across the force — “human-in-the-loop” versus “human-out-of-the-loop” and trust.  In it, CW3 Crifasi describes the inherent tension between human critical thinking and the benefits of Augmented Intelligence facilitating warfare at machine speed.  Enjoy!]

“CAITT, let’s re-run the targeting solution for tomorrow’s engagement… again,” asked Chief Warrant Officer Five Robert Menendez, in a not altogether annoyed tone of voice. Considering this was the fifth time he had asked, the tone of control Bob was exercising was nothing short of heroic to those who knew him well. Fortunately, CAITT, short for Commander’s Artificially Intelligent Targeting Tool, did not seem to notice. Bob quietly thanked the nameless software engineer who had not programmed it to recognize the sarcasm and vitriol that he felt when he made the request.

“Chief, do you really think she is going to come up with anything different this time? You know that old saying about the definition of insanity, right?” asked DeMarcus Austin.  Bob shot the 28-year-old Captain a glare, clearly indicating that he knew exactly what the young man was implying. It was 0400 hours, and the entire Brigade Combat Team (BCT) was preparing to defend along its forward boundary. Coming after an exhausting three-day rapid deployment from their forward staging bases in Germany, the situation had everyone on edge. In short, nothing had gone as expected or as planned for in the Operations Plan (OPLAN).

The UBRA’s (Unified Belorussian-Russian Alliance’s) 323rd Tank Division was a mere 68 kilometers from the BCT’s Forward Line of Troops, or FLOT. It would be in the BCT’s primary engagement area in six hours. Despite the efforts of 1EU DIV and the EU’s Expeditionary Air Force, nothing was slowing UBRA’s advance towards the critical seaport city of Gdansk, Poland.

All the assumptions about air supremacy and cyber domination went out the window after the first UBRA tactical Electromagnetic Pulse (EMP) weapon detonated over Vilnius, Lithuania, 48 hours prior. A brilliant strategic move, the EMP fried every unshielded networked computer system the Allied Forces possessed. The Coalition AI Partner Network, so heavily relied upon to execute the OPLAN, was inaccessible, as was every weapon system linked to it. Right about now, Bob wished that CAITT had been one of those systems.

Luckily for him and his boss, Colonel Steph “Duke” Ducalis, CAITT was designed with an internal Faraday shield preventing it and most of the U.S. Army’s other AI systems from suffering the same catastrophic damage. Unfortunately, the EU Armed Forces did not heed the same warnings and indicators. They were essentially crippled as they fervently worked to repair the damage. With the majority of U.S. military might committed to the Pacific Theatre, Colonel Ducalis’ BCT, a holdover from the old NATO alliance, was the lone American combat unit forward deployed in Western Europe. Alone and unafraid, as they say.

“Sir…” said CAITT, snapping Bob out of his fatigue-induced musings, “all data still indicates that engaging with our M56 Long-Range High-Velocity Missiles against the 323rd’s logistical assembly areas in Elblag will compel their defeat. I estimate their advance will cease approximately 18 hours after the direct fire battle commences. Given all of the variables, this is the optimal targeting solution.” Bob really hated how CAITT dispassionately stated her “optimal targeting solution” in that sultry female tone. Clearly, that same software engineer who had ensured CAITT was durable also had a soft spot for British accents.

“CAITT, that makes no sense!” Bob stated exasperatedly. “The 323rd has approximately 250 T-90 MBTs — even if they expend all their fuel and munitions in that 18 hours, they will still overrun our defensive positions in less than six. We only have a single armored battalion with 35 FMC LAV3s. Even if they meet 3-1 K-kill ratios, we will not be able to hold our position. If they dislodge the LAVs, the dismounted infantrymen won’t stand a chance. We need to target the C2 nodes of their lead tank regiment now with the M56s. If we can neutralize their centralized command and control and delay their rate of march, it may give the EUAF enough time to get us those CAS and AI sorties they promised,” replied Bob. “That’s the right play, space for time.”

“I am sorry, Mr. Menendez. I have no connection to the coalition network and cannot get a status update for the next Air Tasking Order. There is no confirmation that our Air Support Requests were received. I am issuing the target nominations to 2-142 HIMARS; they are moving towards their Position Areas Artillery now, airspace coordination is proceeding, and Colonel Ducalis is receiving his Commander’s Intervention Brief now. Pending his override, there is nothing you can do.” CAITT’s response almost sounded condescending to Bob; but then again, he remembered a time when human staff officers made recommendations to the boss, not smart-ass video game consoles.

“Chief, shouldn’t we just go with CAITT’s solution? I mean, she has all the raw data from the S2’s threat template and the weaponeering guidance that you built. CAITT is the joint program of record; we have to use it, don’t we?” asked Captain Austin. Bob did not blame the young man for saying that. After all, this is what the Army wanted: staff officers who were more technicians and data managers than tacticians. The young man was simply not trained to question the AI’s conclusions.

“No sir, we should not, and by the way, I really hate how you call it a she,” answered Bob as he pondered his dilemma. Dammit! I’m the freaking Targeting Officer; I own this process, not this stupid thing… he thought for about five seconds before his instincts reasserted control of his senses.

Quickly jumping out of his chair, Bob left Captain Austin to oversee the data refinement and went outside to seek out the Commander’s Joint Lightweight Tactical Vehicle (JLTV). It took him a moment to locate it under the winter camouflage shielding, since Polish winters were just as brutal as advertised.

I must be getting old, Bob mused to himself, the cold air biting into his face. After twenty-five years of service, despite countless combat deployments in the Middle East, he was starting to get complacent. It was easy to think like young Captain Austin. He never should have trusted CAITT in the first place. It was so easy to let it make the decisions for you that many just stopped thinking altogether. The CIB would be Bob’s last chance to convince the boss that CAITT’s solution was wrong and he was right.

Bob entered the camo shield behind the JLTV, constructing his argument to the boss in his mind. Colonel Ducalis had no time to entertain lengthy debate, this Bob knew. The fight was moving just too fast. Information is the currency of decision-making, and he would get, at best, about twenty seconds to make his case before something else grabbed the boss’s attention. CAITT would already be running the targeting solution straight to the boss via his Commander’s Oculatory Device, jokingly called “COD,” referencing the old bawdy medieval term. Colonel Ducalis, already wearing the COD when Bob came in, was oblivious to everything else around him. Designed to construct a virtual and interactive battlefield environment, the COD worked almost too well. Even now, CAITT was constructing the virtual battlefield, displaying missile aimpoints, HIMARS firing positions, airspace coordination measures, and detailed damage predictions for the target areas.

Bob could not understand how one person could absorb all that visual information in one sitting, but Colonel Ducalis was an exceptional commander. Standing nearby was the boss’s ever-present guardian, Major Lawrence Atlee, BCT XO, acting as always like a consigliere to his boss. Atlee’s annoyance at Bob’s presence was evident from the scowl Bob received as he entered unannounced and, more egregiously, unrequested.

“Chief, what do you need?” asked Atlee, in his typically hurried tone, indicating that the boss should not be disturbed for all but the most serious reasons.

“Sir, it’s imperative I talk to the boss right now,” Bob demanded, somewhat out of breath — again, old age catching up. Without providing a reason to the XO, Bob moved directly to Colonel Ducalis and gently touched his arm. One did not shake a Brigade Commander, especially a former West Point Rugby player the size of Duke. The XO was not pleased.

“Bob, what’s up? I was just reviewing CAITT’s targeting solution,” said Duke as he lifted the COD off his face and saw his very distraught looking Targeting Officer. That’s hopeful, thought Bob, most Commanders would not even have bothered, simply letting the AI execute its solution.

Bob took a moment to compose himself and as he was about to pitch his case Atlee stepped in, “Sir, I’m very sorry. Chief here was just trying to let you know that he was ready to proceed.” Then turning to Bob he said in a manner that would not be confused as optional, “He was just leaving.”

Bob seized his chance as Duke looked right at him. They had served together for a long time. Bob remembered when Duke had asked him to come down from the 1EU Division Staff to fill his targeting officer billet. Undoubtedly, Duke trusted him and genuinely wanted to know what his concern was when he removed the COD in the first place. Bob owed it to him to give it to him straight.

“Sir, that is not correct,” Bob said, speaking hurriedly. “We have a serious problem. CAITT’s targeting solution is completely wrong. The variables and assumptions were all predicated on the EUAF having air and cyber superiority. Those plans went out the window the second that EMP detonated. With all those aircraft down for CPU hardware replacement and software re-installs, those data points are now irrelevant. CAITT doesn’t know how long that will take because it is delinked from the Coalition’s AI Partner Network. I managed to get a low-frequency transmission established with Colonel Collins in Warsaw, and he thinks they can get us some sorties in the next six hours. CAITT’s solution is ignoring the time-versus-space dynamic and going with a simple comparison-of-forces mathematical model. I’m betting it thinks that our casualties will be within acceptable limits after the 323rd expends all of its pre-staged consumable fuel and ammo. It thinks that we can hold our position if we cut off their re-supply. It may be right, but our losses will render us combat ineffective and unable to hold while 1EU DIV reconsolidates behind us.

“We need to implement this High Payoff Target List and Attack Guidance immediately, disrupting and attriting their lead maneuver formations. Sir, we need to play for time and space,” Bob explained, hoping the sports analogy resonated, while simultaneously accessing his Fires Forearm Display, or FFaD, and transmitting the data to Duke’s COD with a wave of his hand.

“Sir, I am not sure we should be deviating from the AI solution,” Atlee started to interject. “To be candid, and no offense to Mr. Menendez, the Army is eliminating their billets anyway since CAITT was fielded last year, same as it did for all the BCT S3s and FSOs. Their type of thinking is just not needed anymore, now that we have CAITT to do it for us.” Bob was amazed at how dispassionately Major Atlee stated this.

Bob, realizing where this was going, took a knee next to Duke.  Duke was clearly as tired as everyone else. Bob leaned in to speak while Duke started to review the new battlespace geometries and combat projections in his COD. “Duke,” Bob said in a low tone of voice so Major Atlee could not easily overhear him, “We’ve been friends a long time, and I’ve never given you a bad recommendation. Please, override CAITT. LTC Givens can reposition his HIMARS battalion, but he has to start doing it now. This is our only chance; once those missiles are gone, we won’t get them back.”

He then stood up and patiently waited. Bob understood that he had pushed things as far as he could. Duke was a good man, a fine commander, and would make the right decision, Bob was certain of it.

Taking off his COD and rubbing his eyes, Duke leaned back and sighed heavily, the weight of command taking its full effect.

“CAITT,” stated Colonel Ducalis. “I am initiating Falcon 06’s override prerogative. Issue Chief Menendez’s targeting solution to LTC Givens immediately. Larry, get a hold of 1EU DIV and tell them we can hold our positions for 24 hours. After that, we may have to withdraw, but we will live to fight another day. Right now, trading time for space may not be the optimal strategy, but it is the human one. Let’s Go!”

If you enjoyed reading this post, please also see the following blog posts:

An Appropriate Level of Trust…

A Primer on Humanity: Iron Man versus Terminator

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

CW3 Jesse R. Crifasi is an active duty Field Artillery Warrant Officer. He has over 24 years in service and is currently serving as the Field Artillery Intelligence Officer (FAIO) for the 82nd Airborne Division.

The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of the Army, DoD, or the U.S. Government.

101. TRADOC 2028

[Editor’s Note:  The U.S. Army Training and Doctrine Command (TRADOC) mission is to recruit, train, and educate the Army, driving constant improvement and change to ensure the Total Army can deter, fight, and win on any battlefield now and into the future. Today’s post addresses how TRADOC will need to transform to ensure that it continues to accomplish this mission with the next generation of Soldiers.]

Per The Army Vision:

“The Army of 2028 will be ready to deploy, fight, and win decisively against any adversary, anytime and anywhere, in a joint, multi-domain, high-intensity conflict, while simultaneously deterring others and maintaining its ability to conduct irregular warfare. The Army will do this through the employment of modern manned and unmanned ground combat vehicles, aircraft, sustainment systems, and weapons, coupled with robust combined arms formations and tactics based on a modern warfighting doctrine and centered on exceptional Leaders and Soldiers of unmatched lethality.” GEN Mark A. Milley, Chief of Staff of the Army, and Dr. Mark T. Esper, Secretary of the Army, June 7, 2018.

In order to achieve this vision, the Army of 2028 needs a TRADOC 2028 that will recruit, organize, and train future Soldiers and Leaders to deploy, fight, and win decisively on any future battlefield. This TRADOC 2028 must account for: 1) the generational differences in learning styles; 2) emerging learning support technologies; and 3) how the Army will need to train and learn to maintain cognitive overmatch on the future battlefield. The Future Operational Environment, characterized by the speeding up of warfare and learning, will challenge the artificial boundaries between institutional and organizational learning and training (e.g., Brigade mobile training teams [MTTs] as a Standard Operating Procedure [SOP]).

Soldiers will be “New Humans” – beyond digital natives, they will embrace embedded and integrated sensors, Artificial Intelligence (AI), mixed reality, and ubiquitous communications. “Old Humans” adapted their learning style to accommodate new technologies (e.g., Classroom XXI). New Humans’ learning style will be a result of these technologies, as they will have been born into a world where they code, hack, rely on intelligent tutors and expert avatars (think the nextgen of Alexa / Siri), and learn increasingly via immersive Augmented / Virtual Reality (AR/VR), gaming, simulations, and YouTube-like tutorials, rather than the desiccated lectures and interminable PowerPoint presentations of yore. TRADOC must ensure that our cadre of instructors know how to use (and more importantly, embrace and effectively incorporate) these new learning technologies into their programs of instruction, until their ranks are filled with “New Humans.”

Delivering training for new, as yet undefined MOSs and skillsets. The Army will have to compete with Industry to recruit the requisite talent for Army 2028. These recruits may enter service with fundamental technical skills and knowledge (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer) that may result in a flattening of the initial learning curve and facilitate more time for training “Green” tradecraft. Cyber recruiting will remain critical, as TRADOC will face an increasingly difficult recruiting environment as the Army competes to recruit new skillsets, from training deep learning tools to robotic repair. Initiatives to appeal to gamers (e.g., the Army’s eSports team) will have to be reflected in new approaches to all TRADOC Lines of Effort. AI may assist in identifying potential recruits with the requisite aptitudes.

“TRADOC in your ruck.” Personal AI assistants bring Commanders and their staffs all of the collected expertise of today’s institutional force. Conducting machine speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. AI’s ability to quickly sift through and analyze the plethora of input received from across the battlefield, fused with the lessons learned data from thousands of previous engagements, will lessen the commander’s dependence on having had direct personal combat experience with conditions similar to his current fight when making command decisions.

Learning in the future will be personalized and individualized, with targeted learning at the point of need. Training must be customizable and temporally optimized in a style that matches the individual learner, versus a one-size-fits-all approach. These learning environments will need to bring gaming and micro-simulations to individual learners for them to experiment with. Similar tools could improve tactical war-gaming and support Commanders’ decision making.  This will disrupt the traditional career maps that have defined success in the current generation of Army Leaders.  In the future, courses will be much less defined by the rank/grade of the Soldiers attending them.

Geolocation of Training will lose importance. We must stop building and start connecting. Emerging technologies – many accounted for in the Synthetic Training Environment (STE) – will connect experts and Soldiers, creating a seamless training continuum from the training base to home station to the foxhole. Investment should focus on technologies connecting and delivering expertise to the Soldier rather than on brick-and-mortar infrastructure.  This vision of TRADOC 2028 will require “Big Data” to effectively deliver this personalized, immersive training to our Soldiers and Leaders at the point of need, and comes with associated privacy issues that will have to be addressed.

In conclusion, TRADOC 2028 sets the conditions to win warfare at machine speed. This speeding up of warfare and learning will challenge the artificial boundaries between institutional and organizational learning and training.

If you enjoyed this post, please also see:

– Mr. Elliott Masie’s presentation on Dynamic Readiness from the Learning in 2050 Conference, co-hosted with Georgetown University’s Center for Security Studies in Washington, DC, on 8-9 August 2018.

– “Top Ten” Takeaways from the Learning in 2050 Conference.

99. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our October edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. Table of Disruptive Technologies, by Tech Foresight, Imperial College London, www.imperialtechforesight.com, January 2018.

This innovative Table of Disruptive Technologies, derived from Chemistry’s familiar Periodic Table, lists 100 technological innovations organized into a two-dimensional table, with the x-axis representing Time (Sooner to Later) and the y-axis representing the Potential for Socio-Economic Disruption (Low to High). These technologies are organized into three time horizons, with Current (Horizon 1 – Green) happening now, Near Future (Horizon 2 – Yellow) occurring in 10-20 years, and Distant Future (Horizon 3 – Fuchsia) occurring 20+ years out. The outermost band of Ghost Technologies (Grey) represents fringe science and technologies that, while highly improbable, still remain within the realm of the possible and thus are “worth watching.” In addition to the time horizons, each of these technologies has been assigned a number corresponding to an example listed to the right of the Table; and a two letter code corresponding to five broad themes: DE – Data Ecosystems, SP – Smart Planet, EA – Extreme Automation, HA – Human Augmentation, and MI – Human Machine Interactions. Regular readers of the Mad Scientist Laboratory will find many of these Potential Game Changers familiar, albeit assigned to far more conservative time horizons (e.g., our community of action believes Swarm Robotics [Sr, number 38], Quantum Safe Cryptography [Qs, number 77], and Battlefield Robots [Br, number 84] will all be upon us well before 2038). That said, we find this Table to be a useful tool in exploring future possibilities and will add it to our “basic load” of disruptive technology references, joining the annual Gartner Hype Cycle of Emerging Technologies.
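
For readers who want to work with the Table rather than merely read it, its organizing scheme reduces to a small data structure. The sketch below encodes a few entries for filtering; the horizon and theme assignments shown are illustrative guesses, not the published values.

```python
# Illustrative encoding of the Table's scheme (entries and assignments are
# examples, not an official dataset from Imperial Tech Foresight).
from dataclasses import dataclass

@dataclass
class Technology:
    number: int   # index into the Table's list of examples
    symbol: str   # periodic-table-style abbreviation
    name: str
    horizon: int  # 1 = Current, 2 = Near Future, 3 = Distant Future
    theme: str    # DE, SP, EA, HA, or MI

TECHS = [
    Technology(38, "Sr", "Swarm Robotics", 2, "EA"),
    Technology(77, "Qs", "Quantum Safe Cryptography", 3, "DE"),
    Technology(84, "Br", "Battlefield Robots", 3, "EA"),
]

# e.g., list every Extreme Automation entry beyond the current horizon:
watchlist = [t.name for t in TECHS if t.theme == "EA" and t.horizon > 1]
print(watchlist)  # ['Swarm Robotics', 'Battlefield Robots']
```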

2. The inventor of the web says the internet is broken — but he has a plan to fix it, by Elizabeth Schulze, Cnbc.com, 5 November 2018.

Tim Berners-Lee, who created the World Wide Web in 1989, has said recently that he thinks his original vision is being distorted due to concerns about privacy, access, and fake news. Berners-Lee envisioned the web as a place that is free, open, and constructive, and for most of his invention’s life, he believed that to be true. However, he now feels that the web has undergone a change for the worse. He believes access to the World Wide Web should be a protected basic human right. In order to accomplish this, he has created the “Contract for the Web,” which contains his principles to protect web access and privacy. Berners-Lee’s World Wide Web Foundation “estimates that 1.5 billion… people live in a country with no comprehensive law on personal data protection. The contract requires governments to treat privacy as a fundamental human right, an idea increasingly backed by big tech leaders like Apple CEO Tim Cook and Microsoft CEO Satya Nadella.” This idea for a free and open web stands in contrast to recent news about China and Russia potentially branching off from the main internet and forming their own filtered and censored Alternative Internet, or Alternet, with tightly controlled access. Berners-Lee’s contract aims at unifying all users under one over-arching rule of law, but without China and Russia, we will likely have a splintered and non-uniform Web that sees only an increase in fake news, manipulation, privacy concerns, and lack of access.

3. Chinese ‘gait recognition’ tech IDs people by how they walk, Associated Press News, 6 November 2018.


The Future Operational Environment's "Era of Contested Equality" (i.e., 2035 through 2050) will be marked by significant breakthroughs in technology and convergences, resulting in revolutionary changes. Under President Xi Jinping's leadership, China is becoming a major engine of global innovation, second only to the United States. China's national strategy of "innovation-driven development" places innovation at the forefront of economic and military development.

Early innovation successes in artificial intelligence, sensors, robotics, and biometrics are being fielded to better control the Chinese population. Many of these capabilities will be tech-inserted into Chinese command and control functions and intelligence, surveillance, and reconnaissance networks, redefining the timeless competition of finders vs. hiders. These breakthroughs represent homegrown Chinese innovation, and they are taking place now.

A recent example is the employment of "gait recognition" software capable of identifying people by how they walk. Watrix, a Chinese technology startup, is selling the software to police services in Beijing and Shanghai as part of a broader push to build an artificial intelligence- and data-driven surveillance network. Watrix reports the capability can identify people up to 165 feet away without a view of their faces, filling the sensor gap left by facial recognition software, which requires high-resolution imagery.
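Watrix's pipeline is proprietary, but the general shape of gait matching can be sketched simply: reduce a tracked walking sequence to a compact signature, then compare that signature against a gallery of enrolled identities. Below is a minimal, illustrative sketch in Python (NumPy only) using a classic "gait energy image" style signature; the array shapes, threshold, and names are assumptions for illustration, not Watrix's method.

```python
import numpy as np

def gait_signature(silhouettes: np.ndarray) -> np.ndarray:
    """silhouettes: (frames, height, width) binary masks over one walking cycle."""
    gei = silhouettes.mean(axis=0)            # "gait energy image": averaged silhouette
    v = gei.ravel()
    return v / (np.linalg.norm(v) + 1e-9)     # unit-normalize for cosine similarity

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.9):
    """Return the best-matching enrolled identity, or None if below the threshold."""
    scores = {name: float(probe @ sig) for name, sig in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Toy usage: random masks merely exercise the interface; real
# silhouettes would come from a person tracker on ordinary video.
rng = np.random.default_rng(0)
gallery = {"subject_a": gait_signature(rng.random((30, 64, 44)) > 0.5)}
probe = gait_signature(rng.random((30, 64, 44)) > 0.5)
print(identify(probe, gallery))
```

The appeal for a surveillance network is evident in the interface: identification needs only body silhouettes from ordinary video, not a high-resolution view of the face.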

4. VR Boosts Workouts by Unexpectedly Reducing Pain During Exercise, by Emma Betuel, Inverse.com, 4 October 2018.

Tricking the brain can be fairly low tech, according to Dr. Alexis Mauger, senior lecturer at the University of Kent's School of Sport and Exercise Sciences. His research showed that students who participated in a Virtual Reality-based exercise were able to withstand pain a full minute longer, on average, than their control-group counterparts. Dr. Mauger hypothesized that this may be due to the absence of the visual cues normally associated with strenuous exercise. In the study, participants were asked to hold a dumbbell out in front of them for as long as they could. The VR group didn't see their forearms shake with exhaustion or their hands flush with color as blood rushed to their aching biceps; that is, they didn't see the stimuli that could be perceived as signals of pain and exertion. These results could have a significant and direct impact on Army training. While experiencing pain and learning through negative outcomes is essential in certain training scenarios, VR could be used to train Soldiers past the point where they would normally be physically able to continue. This could not only save the Army time and money but also boost training effectiveness by capturing the gains normally left at the margins.

5. How Teaching AI to be Curious Helps Machines Learn for Themselves, by James Vincent, The Verge, 1 November 2018, Reviewed by Ms. Marie Murphy.

Presently, there are two predominant techniques for machine learning: supervised learning, in which machines analyze large sets of data, extrapolate patterns, and apply them to analogous scenarios; and reinforcement learning, in which the machine is given a dynamic environment and rewarded for positive outcomes and penalized for negative ones, facilitating learning through trial and error.

With programmed curiosity, the machine is innately motivated to "explore for exploration's sake." The example used to illustrate learning through curiosity is a project by the research lab OpenAI, in which an agent learns to win a video game when the reward comes not only from staying alive but also from exploring all areas of the level. This method has yielded better results than the data-heavy and time-consuming traditional methods. Applying this methodology to machine learning in military training scenarios would reduce the human labor required to identify and program every possible outcome, because the computer finds new ones on its own, shortening the time between development and implementation of a program. This approach is also more "humanistic," as it gives the computer leeway to explore its virtual surroundings and discover new avenues, as people do. By training AI in this way, the military can more realistically model various scenarios for training and strategic purposes.
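To make the idea concrete, below is a minimal sketch in Python (NumPy only) of curiosity as an intrinsic reward: a Q-learning agent in a toy grid world receives, on top of the task's payoff, a novelty bonus that shrinks as states become familiar. The count-based bonus is a simple stand-in for the prediction-error curiosity the article describes; the grid world and every parameter here are illustrative assumptions.

```python
import numpy as np

SIZE = 8                                       # 8x8 grid; goal in the far corner
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

q = np.zeros((SIZE, SIZE, len(ACTIONS)))       # action-value table
visits = np.zeros((SIZE, SIZE))                # state visit counts

alpha, gamma, epsilon, beta = 0.1, 0.95, 0.1, 0.5  # assumed hyperparameters

def step(state, action):
    """Apply an action, clipping at the walls; reaching the goal pays +1."""
    r = min(max(state[0] + action[0], 0), SIZE - 1)
    c = min(max(state[1] + action[1], 0), SIZE - 1)
    extrinsic = 1.0 if (r, c) == (SIZE - 1, SIZE - 1) else 0.0
    return (r, c), extrinsic

rng = np.random.default_rng(0)
for episode in range(200):
    state = (0, 0)
    for t in range(100):
        # Epsilon-greedy action selection.
        if rng.random() < epsilon:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(q[state]))
        nxt, r_ext = step(state, ACTIONS[a])
        visits[nxt] += 1
        # Curiosity: a novelty bonus that decays as a state becomes familiar.
        r_int = beta / np.sqrt(visits[nxt])
        target = r_ext + r_int + gamma * np.max(q[nxt])
        q[state][a] += alpha * (target - q[state][a])
        state = nxt
        if r_ext > 0:
            break

print("States visited at least once:", int((visits > 0).sum()), "of", SIZE * SIZE)
```

Because the bonus decays, exploration dominates early and the task reward dominates later, which is the broad trade-off curiosity-driven methods aim to automate.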

6. EU digital tax plan flounders as states ready national moves, by Francesco Guarascio, Reuters.com, 6 November 2018.

A European Union plan to tax internet firms like Google and Facebook on their turnover is on the verge of collapse. Because the plan must be agreed to by all 28 EU countries (a tall order, given that a number of them oppose it), individual states are readying national initiatives instead. The proposal calls for EU states to charge a 3 percent levy on the digital revenues of large firms, with the aim of changing tax rules that have let some of the world's biggest companies pay unusually low rates of corporate tax on their earnings. These firms, mostly from the U.S., are accused of avoiding tax by routing their profits to the bloc's low-tax states.

This is not just about taxation; it is about the issue of citizenship itself. What does it mean for virtual nations – cyber communities which have gained power, influence, or capital comparable to that of a nation-state – to fall outside the traditional rule of law? The legal framework of virtual citizenship turns the logic of the special economic zone – a geographical space of exception, where the usual rules of state and finance do not apply – upside down and globalizes it. How will these entities be taxed, or declare revenue?

Currently, geography and physical infrastructure remain crucial to the control and management of the online world: Google and Facebook still build data centers in Scandinavia and the Pacific Northwest, close to cheap hydroelectric power and natural cooling. What happens when that world is democratized and virtualized, and control and management change with it? Looked at in terms of who the citizen is, population movement, and stateless populations, what will the "new normal" be?

7. Designer babies aren’t futuristic. They’re already here, by Laura Hercher, MIT Technology Review, 22 October 2018.

In this article, subtitled "Are we designing inequality into our genes?", Ms. Hercher echoes what proclaimed Mad Scientist Hank Greely briefed at the Bio Convergence and Soldier 2050 Conference last March: advances in human genetics will be applied initially to have healthier babies, via genetic sequencing and the testing of embryos. Embryo editing will enable us to tailor embryos for desired traits, initially to treat diseases, but it will also provide us with the tools to enhance humans genetically. Ms. Hercher warns us that "If the use of pre-implantation testing grows and we don't address the disparities in who can access these treatments, we risk creating a society where some groups, because of culture or geography or poverty, bear a greater burden of genetic disease." A valid concern, to be sure – but who will ensure fair access to these treatments? A new Government agency? And if so, how long after ceding this authority to the Government would we see politically expedient changes enacted, justified as being for the betterment of society yet potentially perverting the original intent? The possibilities need not be as horrific as Aldous Huxley's Brave New World, populated with castes of Deltas and Epsilon-minus semi-morons. It is not inconceivable that enhanced combat performance via genetic manipulation could follow, resulting in a permanent caste of warfighters, genetically distinct from their fellow citizens, with all the associated societal implications.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!