183. Ethics, Morals, and Legal Implications

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report and addresses how the speed of technological innovation and convergence continues to outpace human governance. The U.S. Army must not only consider how best to employ these advances in modernizing the force, but also the concomitant ethical, moral, and legal implications their use may present in the Operational Environment (see links to the newly published TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, and the complete Mad Scientist Disruption and the Operational Environment Conference Final Report at the bottom of this post).]

Technological advancement and its subsequent employment often outpace moral, ethical, and legal standards. Governmental and regulatory bodies are then caught between technological progress and the evolution of social thinking. The Disruption and the Operational Environment Conference uncovered and explored several tension points that may challenge the Army in the future.

Space

Cubesats in LEO / Source: NASA

Space is one of the least explored domains in which the Army will operate; as such, we may encounter a host of associated ethical and legal dilemmas. In the course of warfare, if the Army or an adversary intentionally or inadvertently destroys commercial communications or navigation infrastructure – e.g., GPS satellites – the ramifications for the economy, transportation, and emergency services would be dire and deadly. The Army will be challenged to consider how and where National Defense measures in space affect non-combatants and American civilians on the ground.

Per proclaimed Mad Scientists Dr. Moriba Jah and Dr. Diane Howard, there are ~500,000 objects orbiting the Earth that pose potential hazards to our space-based services. We can currently track less than one percent of them — those the size of a smartphone / softball or larger. / Source: NASA Orbital Debris Office

International governing bodies may have to consider what responsibility space-faring entities – countries, universities, private companies – will have for mitigating the orbital congestion caused by excessive launching and the aggressive exploitation of space. If the Army is judicious with its own footprint in space, it could reduce the risk of accidental collisions and unnecessary clutter and congestion. Cleaning up space debris is extremely expensive, and deconflicting active operations is essential. With each entity acting in its own self-interest, with limited binding law or governance and no enforcement, overuse of space could lead to a “tragedy of the commons” effect.1 The Army has the opportunity to align itself more closely with international partners to develop guidelines and protocols for space operations, both to avoid potential conflicts and to influence and shape future policy. Without this early intervention, the Army may face ethical and moral challenges regarding its addition of orbital objects to an already dangerously cluttered Low Earth Orbit. What will the Army be responsible for in democratized space? Will there be a moral or ethical limit on space launches?

Autonomy in Robotics

AFC’s Future Force Modernization Enterprise of Cross-Functional Teams, Acquisition Programs of Record, and Research and Development centers executed a radio rodeo with Industry throughout June 2019 to inform the Army of the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments. / Source: Army.mil

Robotics has become pervasive and normalized in military operations in the post-9/11 Operational Environment. However, the burgeoning field of autonomy in robotics, with its potential to supplant humans in time-critical decision-making, will bring about significant ethical, moral, and legal challenges that the Army, and the larger DoD, are currently facing. This issue will be exacerbated in the Operational Environment by increased utilization of, and reliance on, autonomy.

The increasing prevalence of autonomy will raise a number of important questions. At what point is it more ethical to allow a machine to make a decision that may save the lives of either combatants or civilians? Where does fault, responsibility, or attribution lie when an autonomous system takes lives? Will defensive autonomous operations – air defense systems, active protection systems – be more ethically acceptable than offensive ones – airstrikes, fire missions? Can Artificial Intelligence/Machine Learning (AI/ML) make decisions in line with Army core values?

Deepfakes and AI-Generated Identities, Personas, and Content

Source: U.S. Air Force

A new era of Information Operations (IO) is emerging due to disruptive technologies such as deepfakes – videos constructed to make a person appear to say or do something that they never said or did – and AI Generative Adversarial Networks (GANs) that produce fully original faces, bodies, personas, and robust identities.2 Deepfakes and GANs alarm national security experts because they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. The concern is amplified as content such as news, sports reporting, and creative writing is similarly generated by AI/ML applications.
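For readers unfamiliar with the mechanics, a GAN pits two neural networks against each other: a generator fabricates samples while a discriminator tries to tell fabrications from real data, and each improves by competing with the other. The sketch below is a minimal, illustrative version of that adversarial loop on toy one-dimensional data (assuming PyTorch; the data, network sizes, and hyperparameters are ours for illustration, not drawn from any fielded system):

```python
# Minimal GAN sketch: a generator learns to mimic a "real" data
# distribution by fooling a discriminator. Illustrative only.
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0  # toy "real" samples ~ N(2, 0.5)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator (logit)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5000):
    # 1) Train the discriminator to separate real from generated samples.
    real = real_data(64)
    fake = G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to make the discriminator call its output "real".
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # should drift toward 2.0
```

Deepfake and face generators are vastly larger and condition on images or source footage, but the underlying adversarial dynamic is the same.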

This new era of IO has many ethical and moral implications for the Army. In the past, the Army has utilized industrial-age and early information-age IO tools such as leaflets, open-air messaging, and cyber influence mechanisms to shape perceptions around the world. Today and moving forward in the Operational Environment, advances in technology create ethical questions such as: is it ethical or legal to use cyber or digital manipulations against populations of both U.S. allies and strategic competitors? Under what title or authority does the use of deepfakes and AI-generated images fall? How will the Army need to supplement existing policy to cover technologies that did not exist when that policy was written?

AI in Formations

With the introduction of decision-making AI, the Army will be faced with questions about trust, man-machine relationships, and transparency. Does AI in cyber require the same moral benchmark as lethal decision-making? Does transparency equal ethical AI? What allowance for error in AI is acceptable compared to humans? Where does the Army allow AI to make decisions – only in non-combat or non-lethal situations?

Commanders, stakeholders, and decision-makers will need to gain a level of comfort and trust with AI entities, exemplifying a true man-machine relationship. The full integration of AI into training and combat exercises provides an opportunity to build that trust early, before decision-making becomes critical and life-threatening. AI often includes unintentional or implicit bias in its programming. Is bias-free AI possible? How can bias be checked within the programming? How can bias be managed once it is discovered, and how much will be allowed? Finally, does the bias-checking software itself contain bias? Bias can also be used in a positive way: through ML – using data from previous exercises, missions, doctrine, and the law of war – the Army could inculcate core values, ethos, and historically successful decision-making into AI.
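Checking for bias need not be mysterious; one common starting point is a statistical audit of a model's decisions across groups. The sketch below is a minimal, hypothetical illustration of one such audit metric (a demographic-parity gap); a real review would combine many metrics with human judgement and domain context:

```python
# Illustrative bias audit: compare a model's positive-decision rate
# across groups (demographic parity). Data and threshold are hypothetical.
from collections import defaultdict

def parity_gap(decisions, groups):
    """decisions: 0/1 model outputs; groups: parallel list of group labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = parity_gap(
    decisions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
print(rates, gap)  # flag the model for review if the gap exceeds a chosen threshold
```

The same audit code is itself software, which is precisely the post's point: the bias-checking layer needs its own scrutiny.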

If existential threats to the United States increase, so will the pressure to use artificial intelligence and autonomous systems to gain or maintain overmatch and domain superiority. As the Army explores shifting additional authority to AI and autonomous systems, how will it address the second- and third-order ethical and legal ramifications? How does the Army reconcile its traditional values and ethical norms with disruptive technology that rapidly evolves?

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.
    • Ethics and the Future of War panel, facilitated by LTG Dubik (USA-Ret.) at the Mad Scientist Visualizing Multi Domain Battle 2030-2050 Conference, hosted at Georgetown University on 25-26 July 2017.

Just Published! TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, 7 October 2019, describes the conditions Army forces will face and establishes two distinct timeframes characterizing near-term advantages adversaries may have, as well as breakthroughs in technology and convergences in capabilities in the far term that will change the character of warfare. This pamphlet describes both timeframes in detail, accounting for all aspects across the Diplomatic, Information, Military, and Economic (DIME) spheres to allow Army forces to train to an accurate and realistic Operational Environment.


1 Munoz-Patchen, Chelsea, “Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty,” Chicago Journal of International Law, Vol. 19, No. 1, Art. 7, 1 Aug. 2018. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1741&context=cjil

2 Robitzski, Dan, “Amazing AI Generates Entire Bodies of People Who Don’t Exist,” Futurism.com, 30 Apr. 2019. https://futurism.com/ai-generates-entire-bodies-people-dont-exist

111. AI Enhancing EI in War

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s guest blog post by MAJ Vincent Dueñas, addressing how AI can mitigate a human commander’s cognitive biases and enhance his/her (and their staff’s)  decision-making, freeing them to do what they do best — command, fight, and win on future battlefields!]

Humans are susceptible to cognitive biases, and these biases sometimes result in catastrophic outcomes, particularly in the high-stress environment of wartime decision-making. Artificial Intelligence (AI) offers the possibility of mitigating this susceptibility to negative outcomes in the commander’s decision-making process by enhancing the collective Emotional Intelligence (EI) of the commander and his/her staff. AI will continue to become more prevalent in combat and, as such, should be integrated in a way that advances the EI capacity of our commanders. An interactive AI that feels like a staff officer to communicate with, and that is built on human-compatible principles, can support decision-making in high-stakes, time-critical situations with ambiguous or incomplete information.

Mission Command in the Army is the exercise of authority and direction by the commander using mission orders to enable disciplined initiative within the commander’s intent.i It requires an environment of mutual trust and shared understanding between the commander and his/her subordinates in order to understand, visualize, describe, and direct throughout the decision-making Operations Process and mass the effects of combat power.ii

The mission command philosophy necessitates improved EI. EI is defined as the capacity to be aware of, control, and express one’s emotions, and to handle interpersonal relationships judiciously and empathetically,iii exercised at much quicker speeds in order to seize the initiative in war. The more effective our commanders are at EI, the better they lead, fight, and win using all the tools available.

AI Staff Officer

To conceptualize how AI can enhance decision-making on the battlefields of the future, we must understand that AI today is advancing more quickly in narrow problem-solving domains than in those that require broad understanding.iv This means that, for now, humans retain the advantage in broad information assimilation. The advent of machine-learning algorithms that could be applied to autonomous lethal weapons systems has so far resulted in a general predilection towards ensuring humans remain in the decision-making loop with respect to all aspects of warfare.v, vi Near-term AI will continue to advance rapidly in narrow domains, becoming a more useful interactive assistant capable of analyzing not only the systems it manages, but the very users themselves. AI could be used to provide detailed analysis and aggregated assessments for the commander at the key decision points that require a human-in-the-loop interface.

The battalion is a good example organization with which to visualize this framework. A machine-learning software system could be connected to the different staff systems to analyze the data produced by each section as it executes its warfighting functions. This system would also assess the human-in-the-loop decisions against statistical outcomes and aggregate important data to support the commander’s assessments. Over time, this EI-based machine-learning system could rank the quality of the staff officers’ judgements. The commander could then weigh the staff officers’ assessments against those officers’ track records of reliability and the raw data provided by the staff sections’ systems. The Bridgewater financial firm employs this very type of human decision-making assessment algorithm to gauge the “believability” of its employees’ judgements before making high-stakes, and sometimes time-critical, international financial decisions.vii A multi-layered machine-learning system applied to the battalion would also include an assessment of the commander’s own reliability, to maximize objectivity.
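A toy sketch of such believability weighting follows (our illustration of the general idea only; Bridgewater's actual algorithm is proprietary, and the staff estimates and track records below are invented). Each officer's numeric assessment is weighted by his or her historical accuracy on similar judgements:

```python
# Illustrative believability-weighted pooling of staff assessments:
# weight each officer's estimate by their historical track record.
def believability(track_record):
    """track_record: list of 1 (judgement proved correct) / 0 (incorrect)."""
    # Laplace smoothing so officers with short histories start near 0.5.
    return (sum(track_record) + 1) / (len(track_record) + 2)

def pooled_estimate(assessments):
    """assessments: {officer: (estimate, track_record)} -> weighted estimate."""
    weights = {name: believability(tr) for name, (_, tr) in assessments.items()}
    total = sum(weights.values())
    return sum(w * assessments[name][0] for name, w in weights.items()) / total

staff = {
    "S2": (0.70, [1, 1, 1, 0, 1]),  # intelligence officer, strong record
    "S3": (0.40, [1, 0, 0, 1]),     # operations officer, mixed record
    "S4": (0.60, [0, 0, 1]),        # logistics officer, weaker record
}
print(pooled_estimate(staff))  # pooled probability offered to the commander
```

The same weights, applied to the commander's own past calls, would supply the objectivity check on the commander that the author describes.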

Observations by the AI of multiple iterations of human behavioral patterns during simulations and real-world operations would improve its accuracy and enhance the trust between this type of AI system and its users. Commanders’ EI skills would be put front and center for scrutiny, and could improve drastically under the weight of that responsibility: commanders would consciously know, with quantifiable evidence, the cognitive-bias shortcomings of their staffs at any given time. This assisted decision-making AI framework would also, consequently, reinforce the commander’s intuition and decisions, as it elevates the level of objectivity in decision-making.

Human-Compatibility

The capacity to understand information broadly and to conduct unsupervised learning remains a distinctly human strength for the foreseeable future.viii The integration of AI into the battlefield should work towards enhancing the EI of the commander, since doing so supports mission command and complements the human advantage in decision-making. Giving the AI the feel of a staff officer also implies providing it with a framework for how it might begin to understand the information it is receiving and the decisions being made by the commander.

Stuart Russell offers a construct of limitations that should be coded into AI in order to make it most useful to humanity and to prevent conclusions that result in an AI turning on humanity. These three concepts are: 1) altruism – the machine’s only objective is to maximize the realization of human values, not its own; 2) humility – the machine follows only human objectives but is initially uncertain about what those are; and 3) learning – the machine refines its understanding of human values by observing all types of humans.ix

Russell’s principles offer a human-compatible guide for AI to be useful within the human decision-making process, protecting humans from unintended consequences of the AI making decisions on its own. The integration of these principles in battlefield AI systems would provide the best chance of ensuring the AI serves as an assistant to the commander, enhancing his/her EI to make better decisions.

Making AI Work

The potential opportunities and pitfalls of employing AI in decision-making are abundant. Apart from the obvious danger of this type of system being hacked, the possibility of the AI’s machine-learning algorithms harboring biased coding inconsistent with the values of the unit employing it is real.

The commander’s primary goal is to achieve the mission. The future includes AI, and commanders will need to trust AI assessments, integrate them into their natural decision-making process, and make them part of their intuitive calculus. In this way, commanders will have ready access to objective analyses of their units’ potential biases, enhancing their own EI, and will be able to overcome those biases to accomplish their mission.

If you enjoyed this post, please also read:

An Appropriate Level of Trust…

Takeaways Learned about the Future of the AI Battlefield

Bias and Machine Learning

Man-Machine Rules

MAJ Vincent Dueñas is an Army Foreign Area Officer and has deployed as a cavalry and communications officer. His writing on national security issues, decision-making, and international affairs has been featured in Divergent Options, Small Wars Journal, and The Strategy Bridge. MAJ Dueñas is a member of the Military Writers Guild and a Term Member with the Council on Foreign Relations. The views reflected are his own and do not represent the opinion of the United States Government or any of its agencies.


i United States Army. ADRP 5-0: The Operations Process. Headquarters, Department of the Army, 2012, pp. 1-1.

ii Ibid. pp. 1-1 – 1-3.

iii “Emotional Intelligence | Definition of Emotional Intelligence in English by Oxford Dictionaries.” Oxford Dictionaries | English, Oxford Dictionaries, 2018, en.oxforddictionaries.com/definition/emotional_intelligence.

iv Trent, Stoney, and Scott Lathrop. “A Primer on Artificial Intelligence for Military Leaders.” Small Wars Journal, 2018, smallwarsjournal.com/index.php/jrnl/art/primer-artificial-intelligence-military-leaders.

v Scharre, Paul. Army of None: Autonomous Weapons and the Future of War. W. W. Norton, 2019.

vi Evans, Hayley. “Lethal Autonomous Weapons Systems at the First and Second U.N. GGE Meetings.” Lawfare, 2018, https://www.lawfareblog.com/lethal-autonomous-weapons-systems-first-and-second-un-gge-meetings.

vii Dalio, Ray. Principles. Simon and Schuster, 2017.

viii Trent and Lathrop.

ix Russell, Stuart, director. Three Principles for Creating Safer AI. TED: Ideas Worth Spreading, 2017, www.ted.com/talks/stuart_russell_3_principles_for_creating_safer_ai.

100. Prediction Machines: The Simple Economics of Artificial Intelligence

[Editor’s Note: Mad Scientist Laboratory is pleased to review Prediction Machines: The Simple Economics of Artificial Intelligence by Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Harvard Business Review Press, 17 April 2018.  While economics is not a perfect analog to warfare, this book will enhance our readers’ understanding of narrow Artificial Intelligence (AI) and its tremendous potential to change the character of future warfare by disrupting human-centered battlefield rhythms and facilitating combat at machine speed.]

This insightful book by economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb penetrates the hype often associated with AI by describing its base functions and roles and by providing an economic framework for its future applications. Of particular interest is their perspective on AI entities as prediction machines. In simplifying and demystifying our understanding of AI and Machine Learning (ML) as prediction tools, akin to computers being nothing more than extremely powerful mathematics machines, the authors effectively describe the economic impacts that these prediction machines will have in the future.

The book addresses the three categories of data underpinning AI / ML:

Training: This is the Big Data that trains the underlying AI algorithms in the first place. Generally, the bigger and more robust the data set is, the more effective the AI’s predictive capability will be. Activities such as driving (with millions of iterations every day) and online commerce (with similarly large numbers of transactions) in defined environments lend themselves to efficient AI applications.

Input: This is the data that the AI will be taking in, either from purposeful, active injects or passively from the environment around it. Again, defined environments are far easier to cope with in this regard.

Feedback: This data comes either from manual inputs by users and developers or from the AI observing the effects of its previous applications. While often overlooked, this data is critical to iteratively enhancing and refining the AI’s performance and to identifying biases and skewed decision-making. AI is not a static, one-off product; much like software, it must be continually updated, either through injects or learning, as the sketch below illustrates.
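To make the three categories concrete, here is a minimal workflow sketch (assuming scikit-learn; the toy data is ours) showing where training, input, and feedback data each enter a prediction machine's life cycle:

```python
# Sketch of the three data categories behind a "prediction machine".
from sklearn.linear_model import LogisticRegression

# 1) TRAINING data: the historical set the algorithm learns from.
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 0, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# 2) INPUT data: a new observation, injected actively or sensed passively.
x_new = [1, 1]
prediction = model.predict([x_new])[0]

# 3) FEEDBACK data: the observed outcome, folded back in to refine the model.
observed_outcome = 1
X_train.append(x_new)
y_train.append(observed_outcome)
model = LogisticRegression().fit(X_train, y_train)  # not a static, one-off product
```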

The authors explore narrow AI rather than a general, super, or “strong” AI.  Proclaimed Mad Scientist Paul Scharre and Michael Horowitz define narrow AI as follows:

“Their expertise is confined to a single domain, as opposed to hypothetical future ‘general’ AI systems that could apply expertise more broadly. Machines – at least for now – lack the general-purpose reasoning that humans use to flexibly perform a range of tasks: making coffee one minute, then taking a phone call from work, then putting on a toddler’s shoes and putting her in the car for school.” – from Artificial Intelligence: What Every Policymaker Needs to Know, Center for a New American Security, 19 June 2018

These narrow AI applications could have significant implications for U.S. Armed Forces personnel, force structure, operations, and processes. While economics is not a direct analogy to warfare, there are a number of aspects that can be distilled into the following ramifications:

Internet of Battle Things (IOBT) / Source: Alexander Kott, ARL

1. The battlefield is dynamic and has innumerable variables; limited, purposely subverted, or “dirty” input data thus has great potential to mischaracterize the ground truth. Additionally, the relatively short duration of battles and battlefield activities means that AI would not receive the consistent, plentiful, and well-defined data it would receive in civilian transportation and economic applications.

2. The U.S. military will not be able to just “throw AI on it” and achieve effective results. The effective application of AI will require a disciplined and comprehensive review of all warfighting functions to determine where AI can best augment and enhance our current Soldier-centric capabilities (i.e., identify those workflows and processes – Intelligence and Targeting Cycles – that can be enhanced with the application of AI). Leaders will also have to assess where AI can replace Soldiers in workflows and organizational architecture, and whether AI necessitates the discarding or major restructuring of either. Note that Goldman Sachs is conducting this type of self-evaluation right now.

3. Due to its incredible “thirst” for Big Data, AI/ML will necessitate tradeoffs between security and privacy (the former likely being more important to the military) and quantity and quality of data.

4. In the near- to mid-term future, AI/ML will not replace Leaders, Soldiers, and Analysts, but will allow them to focus on the big issues (i.e., “the fight”) by freeing them from resource-intensive (i.e., time and manpower), mundane, and rote data-crunching tasks, possibly facilitating the reallocation of manpower to growing need areas in data management, machine training, and AI translation.

This book is a must-read for those interested in obtaining a down-to-earth assessment on the state of narrow AI and its potential applications to both economics and warfare.

If you enjoyed this review, please also read the following Mad Scientist Laboratory blog posts:

Takeaways Learned about the Future of the AI Battlefield

Leveraging Artificial Intelligence and Machine Learning to Meet Warfighter Needs

… and watch the following presentations from the Mad Scientist Robotics, AI, and Autonomy – Visioning Multi-Domain Battle in 2030-2050 Conference, 7-8 March 2017, co-sponsored by Georgia Tech Research Institute:

“Artificial Intelligence and Machine Learning: Potential Application in Defense Today and Tomorrow,” presented by Mr. Louis Maziotta, Armament Research, Development, and Engineering Center (ARDEC).

“Unmanned and Autonomous Systems,” presented by Paul Scharre, CNAS.

99. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our October edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. Table of Disruptive Technologies, by Tech Foresight, Imperial College London, www.imperialtechforesight.com, January 2018.

This innovative Table of Disruptive Technologies, modeled on chemistry’s familiar Periodic Table, lists 100 technological innovations organized into a two-dimensional table, with the x-axis representing Time (Sooner to Later) and the y-axis representing the Potential for Socio-Economic Disruption (Low to High). These technologies are organized into three time horizons, with Current (Horizon 1 – Green) happening now, Near Future (Horizon 2 – Yellow) occurring in 10-20 years, and Distant Future (Horizon 3 – Fuchsia) occurring 20+ years out. The outermost band of Ghost Technologies (Grey) represents fringe science and technologies that, while highly improbable, still remain within the realm of the possible and thus are “worth watching.” In addition to the time horizons, each of these technologies has been assigned a number corresponding to an example listed to the right of the Table, and a two-letter code corresponding to five broad themes: DE – Data Ecosystems, SP – Smart Planet, EA – Extreme Automation, HA – Human Augmentation, and MI – Human Machine Interactions. Regular readers of the Mad Scientist Laboratory will find many of these Potential Game Changers familiar, albeit assigned to far more conservative time horizons (e.g., our community of action believes Swarm Robotics [Sr, number 38], Quantum Safe Cryptography [Qs, number 77], and Battlefield Robots [Br, number 84] will all be upon us well before 2038). That said, we find this Table to be a useful tool in exploring future possibilities and will add it to our “basic load” of disruptive technology references, joining the annual Gartner Hype Cycle of Emerging Technologies.

2. The inventor of the web says the internet is broken — but he has a plan to fix it, by Elizabeth Schulze, CNBC.com, 5 November 2018.

Tim Berners-Lee, who created the World Wide Web in 1989, has said recently that he thinks his original vision is being distorted due to concerns about privacy, access, and fake news. Berners-Lee envisioned the web as a place that is free, open, and constructive, and for most of his invention’s life, he believed that to be true. However, he now feels that the web has undergone a change for the worse. He believes the World Wide Web should be a protected basic human right. To accomplish this, he has created the “Contract for the Web,” which contains his principles to protect web access and privacy. Berners-Lee’s World Wide Web Foundation “estimates that 1.5 billion… people live in a country with no comprehensive law on personal data protection. The contract requires governments to treat privacy as a fundamental human right, an idea increasingly backed by big tech leaders like Apple CEO Tim Cook and Microsoft CEO Satya Nadella.” This idea for a free and open web stands in contrast to recent news about China and Russia potentially branching off from the main internet and forming their own filtered and censored Alternative Internet, or Alternet, with tightly controlled access. Berners-Lee’s contract aims at unifying all users under one over-arching rule of law; without China and Russia, we will likely have a splintered and non-uniform Web that sees only an increase in fake news, manipulation, privacy concerns, and lack of access.

3. Chinese ‘gait recognition’ tech IDs people by how they walk, Associated Press News, 6 November 2018.

Source: AP

The Future Operational Environment’s “Era of Contested Equality” (i.e., 2035 through 2050) will be marked by significant breakthroughs in technology and convergences, resulting in revolutionary changes. Under President Xi Jinping‘s leadership, China is becoming a major engine of global innovation, second only to the United States. China’s national strategy of “innovation-driven development” places innovation at the forefront of economic and military development.

Early innovation successes in artificial intelligence, sensors, robotics, and biometrics are being fielded to better control the Chinese population. Many of these capabilities will be tech-inserted into Chinese command and control functions and intelligence, security, and reconnaissance networks, redefining the timeless competition of finders vs. hiders. These breakthroughs represent homegrown Chinese innovation and are taking place now.

A recent example is the employment of ‘gait recognition’ software capable of identifying people by how they walk. Watrix, a Chinese technology startup, is selling the software to police services in Beijing and Shanghai as part of a further push to develop an artificial intelligence and data-driven surveillance network. Watrix reports the capability can identify people up to 165 feet away without a view of their faces. This capability also fills the sensor gap where the high-resolution imagery required for facial recognition software is unavailable.
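In general terms, systems of this kind reduce a walking sequence to a numeric “gait signature” and match it against a gallery of enrolled signatures. The sketch below illustrates only that final matching step (our simplified illustration with invented numbers; Watrix’s actual pipeline is proprietary, and the hard part – extracting the signature from video – is abstracted away):

```python
# Illustrative gait matching: compare a probe gait signature against
# an enrolled gallery by cosine similarity. Signatures are invented.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

gallery = {                            # enrolled identities -> gait signatures
    "person_a": np.array([0.9, 0.1, 0.4]),
    "person_b": np.array([0.2, 0.8, 0.5]),
}
probe = np.array([0.85, 0.15, 0.45])   # signature extracted from new footage

best = max(gallery, key=lambda name: cosine(gallery[name], probe))
score = cosine(gallery[best], probe)
print(best if score > 0.95 else "no match", score)  # threshold is a tunable
```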

4. VR Boosts Workouts by Unexpectedly Reducing Pain During Exercise, by Emma Betuel, Inverse.com, 4 October 2018.

Tricking the brain can be fairly low tech, according to Dr. Alexis Mauger, senior lecturer at the University of Kent’s School of Sport and Exercise Sciences. Research has shown that students who participated in a Virtual Reality-based exercise were able to withstand pain a full minute longer, on average, than their control-group counterparts. Dr. Mauger hypothesized that this may be due to a lack of the visual cues normally associated with strenuous exercise. In the study, participants were asked to hold a dumbbell out in front of them for as long as they could. The VR group didn’t see their forearms shake with exhaustion or their hands flush with color as blood rushed to their aching biceps; that is, they didn’t see the stimuli that could be perceived as signals of pain and exertion. These results could have a significant and direct impact on Army training. While experiencing pain and learning through negative outcomes is essential in certain training scenarios, VR could be used to train Soldiers past the point where they would normally be physically able to train. This could not only save the Army time and money but also boost the effectiveness of exercises, as every bit of performance normally left at the margins could now be captured.

5. How Teaching AI to be Curious Helps Machines Learn for Themselves, by James Vincent, The Verge, 1 November 2018, Reviewed by Ms. Marie Murphy.

Presently, there are two predominant techniques for machine learning: having machines analyze large sets of data, from which they extrapolate patterns and apply them to analogous scenarios; and giving the machine a dynamic environment in which it is rewarded for positive outcomes and penalized for negative ones, facilitating learning through trial and error.

In programmed curiosity, the machine is innately motivated to “explore for exploration’s sake.” The example used to illustrate the concept details a project by the AI research lab OpenAI, in which an agent learns to win a video game where the reward comes not only from staying alive but also from exploring all areas of the level. This method has yielded better results than the data-heavy and time-consuming traditional methods. Applying this methodology to machine learning in military training scenarios would reduce the human labor required to identify and program every possible outcome, because the computer finds new ones on its own, reducing the time between development and implementation of a program. This approach is also more “humanistic,” as it allows the computer leeway to explore its virtual surroundings and discover new avenues as people do. By training AI in this way, the military can more realistically model various scenarios for training and strategic purposes.
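Mechanically, “curiosity” is usually implemented as an intrinsic reward: the agent maintains a forward model that predicts the next state, and it is rewarded wherever that prediction fails, i.e., wherever the world still surprises it. The sketch below is a toy, tabular illustration of that reward shaping (our simplification of the general recipe, not OpenAI’s code; the environment and coefficients are invented):

```python
# Illustrative curiosity bonus: intrinsic reward equals the error of a
# learned forward model predicting the next state. Toy 1-D environment.
import random

forward_model = {}  # (state, action) -> predicted next state

def curiosity_reward(state, action, next_state, lr=0.5):
    guess = forward_model.get((state, action), 0.0)
    surprise = abs(next_state - guess)              # high in unexplored territory
    forward_model[(state, action)] = guess + lr * (next_state - guess)
    return surprise

state = 0
for step in range(100):
    action = random.choice([-1, 1])
    next_state = state + action                     # stand-in for env.step(action)
    extrinsic = 1.0 if next_state == 10 else 0.0    # sparse "win" reward
    reward = extrinsic + 0.1 * curiosity_reward(state, action, next_state)
    state = next_state                              # a real agent would learn from reward
```

As the forward model improves, familiar territory stops paying out, which is exactly what pushes the agent to keep exploring new areas of the level.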

6. EU digital tax plan flounders as states ready national moves, by Francesco Guarascio, Reuters.com, 6 November 2018.

A European Union plan to tax internet firms like Google and Facebook on their turnover is on the verge of collapsing. As the plan must be agreed to by all 28 EU countries (a tall order, given that a number of them oppose it), member states are announcing national initiatives instead. The proposal calls for EU states to charge a 3 percent levy on the digital revenues of large firms. The plan aims at changing tax rules that have let some of the world’s biggest companies pay unusually low rates of corporate tax on their earnings. These firms, mostly from the U.S., are accused of avoiding tax by routing their profits to the bloc’s low-tax states.

This is not just about taxation. This is about the issue of citizenship itself. What does it mean for virtual nations – cyber communities which have gained power, influence, or capital comparable to that of a nation-state – to fall outside the traditional rule of law? The legal framework of virtual citizenship turns upside down and globalizes the logic of the special economic zone — a geographical space of exception, where the usual rules of state and finance do not apply. How will these entities be taxed or declare revenue?

Currently, for the online world, geography and physical infrastructure remain crucial to control and management. What happens when the online world is democratized and virtualized, and control and management change? Google and Facebook still build data centers in Scandinavia and the Pacific Northwest, close to cheap hydroelectric power and natural cooling. When looked at in terms of who the citizen is, population movement, and stateless populations, what will the “new normal” be?

7. Designer babies aren’t futuristic. They’re already here, by Laura Hercher, MIT Technology Review, 22 October 2018.

In this article, subtitled “Are we designing inequality into our genes?”, Ms. Hercher echoes what proclaimed Mad Scientist Hank Greely briefed at the Bio Convergence and Soldier 2050 Conference last March: advances in human genetics will be applied initially to have healthier babies, via genetic sequencing and the testing of embryos. Embryo editing will enable us to tailor / modify embryos for desired traits, initially to treat diseases, but it will also provide us with the tools to enhance humans genetically. Ms. Hercher warns us that “If the use of pre-implantation testing grows and we don’t address the disparities in who can access these treatments, we risk creating a society where some groups, because of culture or geography or poverty, bear a greater burden of genetic disease.” A valid concern, to be sure — but who will ensure fair access to these treatments? A new Government agency? And if so, how long after ceding this authority to the Government would we see politically expedient changes enacted, justified as being for the betterment of society and potentially perverting the original intent? The possibilities need not be as horrific as Aldous Huxley’s Brave New World, populated with castes of Deltas and Epsilon-minus semi-morons. It is not inconceivable that enhanced combat performance via genetic manipulation could follow, resulting in a permanent caste of warfighters, genetically distinct from their fellow citizens, with the associated societal implications.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!