119. The Queue

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our next edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. How Satellites the Size of a Grilled Cheese Sandwich Could Change the World, by Aaron Pressman, Fortune (via Yahoo! Finance), 24 January 2019.

One of Swarm Technologies’ miniaturized satellites / Source:  Swarm Technologies

Space is rapidly democratizing, and tactical and operational surprise may be the casualty. Sara Spangelo and her startup, Swarm Technologies, are on a quest to deliver global communications at the lowest possible cost. They share this objective with ventures like Elon Musk’s Starlink, but Starlink’s solution involves thousands of satellites requiring many successful rocket launches. Swarm Technologies takes the commercial decrease in launch costs and the miniaturization of satellites to their logical extreme: its satellites will be the size of a grilled cheese sandwich and will harness the electrical currents coursing through space to maneuver. This approach should reduce the cost and time required to create a worldwide network for texting and collecting Internet of Things (IoT) data to approximately $25 million and eighteen months.

The work at Starlink and Swarm Technologies represents only a small part of a new space race led by companies rather than the governments that built and manage much of today’s space capability. In the recent Mad Sci blog post “War Laid Bare,” Matthew Ader described this explosion and how access to global communications and sensing might tip the scales of warfare in favor of the finder, providing an overwhelming advantage over competitors that require stealth or need to hide their signatures to be effective in 21st Century warfare.

Eliminating dead zones in global coverage / Source: Swarm Technologies

The impact of this level of global transparency weighs not only on governments and their militaries; businesses, too, will find it more difficult to hide from competitors and regulators. Cade Metz writes about the impact this will have on global competition in the New York Times article “Businesses Will Not Be Able to Hide: Spy Satellites May Give Edge from Above.” It is a brave new world unless you have something to hide!

 

2. New Rules Take the Guesswork out of Human Gene Editing, by Kristin Houser, Futurism, 14 December 2018.

Subtitled, “This will fundamentally change the way we use CRISPR,” the subject article was published following Dr. He Jiankui’s announcement in November 2018 that he had successfully gene-edited two human babies. Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) associated protein 9, or CRISPR/Cas9, has become the “go to” tool for genomic engineering. When Dr. He announced that he had altered the genes of the twin girls Lulu and Nana as embryos in order to make them HIV-resistant, there was a global outcry from scientists, bio-ethicists, and politicians alike for a variety of reasons. One was the potential imprecision of the genetic editing performed, with the associated risk of unintended genomic damage leading to future health issues for the twins.

With the publication of “Target-Specific Precision of CRISPR-Mediated Genome Editing” in the scientific journal Molecular Cell by research scientists at The Francis Crick Institute in London, however, this particular concern appears to have been mitigated with a set of simple rules that determine the precision of CRISPR/Cas9 editing in human cells.

“The effects of CRISPR were thought to be unpredictable and seemingly random,” Crick researcher and group leader Paola Scaffidi said in their news release, “but by analysing hundreds of edits we were shocked to find that there are actually simple, predictable patterns behind it all.”

Per Scaffidi, “Until now, editing genes with CRISPR has involved a lot of guesswork, frustration and trial and error…. The effects of CRISPR were thought to be unpredictable and seemingly random, but by analysing hundreds of edits we were shocked to find that there are actually simple, predictable patterns behind it all. This will fundamentally change the way we use CRISPR, allowing us to study gene function with greater precision and significantly accelerating our science.”

As predicted by Stanford’s bio-ethicist Hank Greely at last March’s Mad Scientist Bio Convergence and Soldier 2050 Conference in Menlo Park, CA, “killer apps” like healthier babies will help overcome the initial resistance to human enhancement via genetic engineering. The Crick Institute’s discovery, with its associated enhanced precision and reliability, may pave the way for market-based designer eugenics. Ramifications for the Future Operational Environment include further societal polarization between the privileged few who will have access to the genomic protocols providing such enhancements and the majority who will not (as in the 2013 film Elysium); the potential for unscrupulous regimes, non-state actors, and super-empowered individuals to breed and employ cadres of genetically enhanced thugs, “button men,” and super soldiers; and the relative policing / combat disadvantage experienced by those powers that outlaw such human genetic enhancements.

 

3. Radical Speaker Series: Countering Weaponized Information, SOFWERX and USSOCOM / J5 Donovan Group, 14 December 2018.

SOFWERX, in collaboration with USSOCOM / J5 Donovan Group, hosted a Radical Speaker Series on weaponized information. Mass influence operations, deep fakes, and social media metrics have been used by state and non-state actors in attempts to influence everything from public sentiment on policy issues to election results. The type and extent of these influence operations have laid bare policy and technology gaps. This represents an emerging threat vector for global competition.

As discussed in the TRADOC G-2’s The Operational Environment and the Changing Character of Future Warfare, Social Media and the Internet of Things have connected “all aspects of human engagement where cognition, ideas, and perceptions, are almost instantaneously available.” While this connectivity has been a global change agent, some are suggesting starting over and abandoning the internet as we know it in favor of alternative internet or “Alternet” solutions. LikeWar authors Singer and Brooking provide examples of how our adversaries are weaponizing Social Media to augment their operations in the physical domain. One example is the defeat of ISIS and re-capture of Mosul: “… Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle — or eliminate the need for battle entirely — what, exactly, could be considered ‘war’ at all?”

Taken to the next level in the battle for the brain, novel neuroweapons could grant adversaries (and perhaps the United States) the ability to disrupt, degrade, damage, kill, and even “hack” human brains to influence populations. The resulting confusion and panic could disrupt government and society, without mass casualties. These attacks against the human brain facilitate personalized warfare. Neuroweapons are “Weapons of Mass Disruption” that may characterize segments of warfare in the future. These capabilities come with a host of ethical and moral considerations — does affecting someone’s brain purposely, even temporarily, violate ethical codes, treaties, conventions, and international norms followed by the U.S. military? As posed by Singer and Brookings — “what, exactly, could be considered ‘war’ at all?”

 

4. Nano, short film directed by Mike Manning, 2017.

Nano / Source: IMDb

This short film noir focuses on invasive technology and explores themes of liberty, control, and what citizens are willing to trade for safety and security. In a future America, technology has progressed to the point where devices embedded in humans are not only possible and popular, but the norm. These devices, known as Nano, can sync with one’s physiology, alter genomes, change hair and eye color, and, most importantly to law enforcement and government entities, control motor functions. Nano has resulted in a safer society, with tremendous reductions in gun violence. In the film, a new law has passed mandating that all citizens be upgraded to Nano 2.0 – a controversial move that gives the Government access to everyone’s location, the ability to monitor them in real time, and control over their physiology. The Government could, were it so inclined, change someone’s hair color remotely without permission or, perhaps more frighteningly, induce indefinite paralysis.

Nano explores and, in some cases, answers questions about future technologies and their potential impact on society. It illustrates how, with many of the advantages and services we gain through new technologies, we sometimes have to give up things just as valuable. Technology no longer operates in a vacuum, and neither do we: when we use a cellphone, when we access a website, when we, in Nano, change the color of our hair, our actions are being monitored, logged, and tracked by something. With cellphone use, we accept that we give off a signature traceable by a number of agencies, including our service providers, judging it a net positive that outweighs the associated negatives. But where does that line fall? How far would the average citizen go if they could have an embedded device installed that would heal minor wounds and lacerations? What becomes of privacy, and what would we be willing to give up? Nano shows the negative consequences of this progression and the dystopian nature of technological slavery. It poses questions of trust, both in the state and in individuals, and how blurred the lines can become, both in terms of freedoms and physical appearance.

 

5. “Artificial Intelligence and the Future of Humans,” by Janna Anderson, Lee Rainie, and Alex Luchsinger, The Pew Research Center, 10 December 2018 (reviewed by Ms. Marie Murphy).

Source: Flickr

The Pew Research Center canvassed a host of technology innovators and business and policy leaders on whether artificial intelligence (AI) and related technology will enhance human capabilities and improve human life, or lessen human autonomy and agency to a detrimental level. A majority of the experts who responded agreed that AI will better the lives of most people, but qualified this by noting that significant negative outcomes will likely accompany the proliferation and integration of AI systems.

Most agree that AI will greatly benefit humanity and increase the quality of life for many, helping to eliminate poverty and disease while conveniently supplementing human intelligence to solve crucial problems. However, there are concerns that AI will conflict with and eventually overpower human autonomy, intelligence, decision-making, analysis, and many other uniquely “human” characteristics. Professionals in the field expressed concerns over the potential for data abuse and cybercrime, job loss, and a dependence on AI that erodes the ability to think independently.

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, posits that the integration of AI will play out over the next 50 years, until every industry is reliant on AI systems and workers must possess hybrid skills to compete for jobs that do not yet exist. Simon Briggs, professor of interdisciplinary arts at the University of Edinburgh, predicts that the potential negative outcomes of AI will be the result of a failure of humanity, and that “in 2030 AI will be in routine use to fight wars and kill people, far more effectively than humans can currently kill,” and, “we cannot expect our AI systems to be ethical on our behalf.”

As the U.S. Army continues to explore and experiment with how best to employ AI on the battlefield, it faces the great challenge of ensuring that these systems are used in the most effective and beneficial capacity, without reducing the efficiency and relevance of the humans working alongside the machines. Warfare will become increasingly integrated with this technology, so monitoring the transition carefully is important for applying AI successfully to military strategy and operations while mitigating its potential negative effects.

 

6. “Automation Will Replace Production, Food, and Transportation Jobs First,” by James Dennin, INVERSE, 28 January 2019.

A newly released paper from the Brookings Institution indicates that the advent of autonomy and advanced automation will have unevenly distributed positive and negative effects across job and career sectors. According to the report, the three fields most vulnerable to reduction through automation will be production, food service, and transportation jobs. Additionally, certain geographic categories (especially rural, less populated areas) will suffer graver effects of this continuous push towards autonomy.

Though automation is expected to displace labor in 72% of businesses in 2019, the prospects of future workers are not all doom and gloom. As the report notes, automation in a general sense replaces tasks, not entire jobs, although AI and autonomy make the specter of total job replacement more likely. The tasks that remain make humans even more critical, though fewer of them may be needed. While a wide variety of workers are at risk, young people (16-24 year olds) face higher risks of labor displacement, partially because a large share of their jobs fall in the aforementioned sectors.

All of these automation impacts have significant implications for the Future Operational Environment, U.S. Army, and the Future of Warfare. An increase in automation and autonomy in production, food service, and transportation may mean that Soldiers can focus more exclusively on warfighting – moving, shooting, communicating – and in many cases will be complemented and made more lethal through automation. The dynamic nature of work due to these shifts could cause significant unrest requiring military attention in unexpected places. Additionally, the labor displacement of so much youth could be both a boon and a hindrance to the Army. On one hand, there could be a glut of new recruits due to poor employment outlook in the private sector; contrariwise, many of the freshly available recruits may not inherently have the required skills or even aptitude for becoming Warfighters.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future OE, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

107. “The Queue”

[Editor’s Note: Mad Scientist Laboratory is pleased to present our November edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

1. Is China a global leader in research and development? China Power Project, Center for Strategic and International Studies (CSIS), 2018. 

The United States Army’s concept of Multi-Domain Operations 2028 describes Russia and China as strategic competitors working to synthesize emerging technologies, such as artificial intelligence, hypersonics, machine learning, nanotechnology, and robotics, with their analysis of military doctrine and operations. The Future OE’s Era of Contested Equality (i.e., 2035 through 2050) describes China’s ascent to a peer competitor and our primary pacing threat. The fuel for these innovations is research and development funding from the Chinese Government and businesses.

CSIS’s China Power Project recently published an assessment of the rise in China’s research and development funding. There are three key facts that demonstrate the remarkable increase in funding and planning that will continue to drive Chinese innovation. First, “China’s R&D expenditure witnessed an almost 30-fold increase from 1991 to 2015 – from $13 billion to $376 billion. Presently, China spends more on R&D than Japan, Germany, and South Korea combined, and only trails the United States in terms of gross expenditure. According to some estimates, China will overtake the US as the top R&D spender by 2020.”
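
The quoted figures also pass a quick arithmetic sanity check. A minimal sketch of the average annual growth rate implied by the numbers quoted above (only those figures are used; the variable names are illustrative):

```python
# Figures as quoted from the CSIS China Power Project assessment
spend_1991 = 13e9    # USD
spend_2015 = 376e9   # USD
years = 2015 - 1991  # 24 years

fold_increase = spend_2015 / spend_1991
cagr = fold_increase ** (1 / years) - 1  # compound annual growth rate

print(f"{fold_increase:.0f}-fold increase")   # ~29-fold, i.e., "almost 30-fold"
print(f"{cagr:.1%} average annual growth")    # roughly 15% per year
```

At roughly 15 percent compounded annually, Chinese R&D spending doubles about every five years, which is what makes the projected 2020 overtaking of the United States plausible.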

Second, businesses globally now fund the majority of research and development activity, and China follows this trend, with its “businesses financing 74.7 percent ($282 billion) of the country’s gross expenditure on R&D in 2015.” Tracking the origin of this funding is difficult, however, as the Chinese government also operates a number of State-Owned Enterprises. This could prove to be a strength for the Chinese Army’s access to commercial innovation.

China’s Micius quantum satellite, part of their Quantum Experiments at Space Scale (QUESS) program

Third, the Chinese government is funding cutting-edge technologies in which it seeks to be a global leader. “Expenditures by the Chinese government stood at 16.2 percent of total R&D usage in 2015. This ratio is similar to that of advanced economies, such as the United States (11.2 percent). Government-driven expenditure has contributed to the development of the China National Space Administration. The Tiangong-2 space station and the ‘Micius’ quantum satellite – the first of its kind – are just two such examples.”

2. Microsoft will give the U.S. military access to ‘all the technology we create’, by Samantha Masunaga, Los Angeles Times (on-line), 1 December 2018.

Success in the future OE relies on many key assumptions. One such assumption is that the innovation cycle has flipped. Where the DoD used to drive technological innovation in this country, we now see private industry (namely Silicon Valley) as the driving force with the Army consuming products and transitioning technology for military use. If this system is to work, as the assumption implies, the Army must be able to work easily with the country’s leading technology companies.  Microsoft’s President Brad Smith stated recently that his company will “provide the U.S. military with access to the best technology … all the technology we create. Full stop.”

This is significant to the DoD for two reasons: It gives the DoD, and thus the Army, access to one of the leading technology developers in the world (with cloud computing and AI solutions), and it highlights that the assumptions we operate under are never guaranteed. Most recently, Google made the decision not to renew its contract with the DoD to provide AI support to Project Maven – a decision motivated, in part, by employee backlash.

Our near-peer competitors do not appear to be experiencing similar tensions or friction between their respective governments and private industry.  China’s President Xi is leveraging private sector advances for military applications via a “whole of nation” strategy, leading China’s Central Military-Civil Fusion Development Commission to address priorities including intelligent unmanned systems, biology and cross-disciplinary technologies, and quantum technologies.  Russia seeks to generate innovation by harnessing its defense industries with the nation’s military, civilian, and academic expertise at their Era Military Innovation Technopark to concentrate on advances in “information and telecommunication systems, artificial intelligence, robotic complexes, supercomputers, technical vision and pattern recognition, information security, nanotechnology and nanomaterials, energy tech and technology life support cycle, as well as bioengineering, biosynthetic, and biosensor technologies.”

Microsoft openly declaring its willingness to work seamlessly with the DoD is a substantial step forward toward success in the new innovation cycle and success in the future OE.

3. The Truth About Killer Robots, directed by Maxim Pozdorovkin, Third Party Films, premiered on HBO on 26 November 2018.

This documentary film could have been a highly informative piece on the disruptive potential posed by robotics and autonomous systems in future warfare. While it presents a jumble of interesting anecdotes addressing the societal changes wrought by the increased prevalence of autonomous systems, it fails to deliver on its title. Indeed, robot lethality is only tangentially addressed in a few of the documentary’s storylines: the accidental death of a Volkswagen factory worker crushed by autonomous machinery; the first vehicular death of a driver, reportedly engrossed in a Harry Potter movie while sitting behind the wheel of an autonomous-driving Tesla in Florida; and the use of a tele-operated device by the Dallas police to neutralize a mass shooter barricaded inside a building.

Russian unmanned, tele-operated BMP-3 shooting its 30mm cannon on a test range / Zvezda Broadcasting via YouTube

Given his choice of title, Mr. Pozdorovkin would have been better served by interviewing activists from the Campaign to Stop Killer Robots and participants at the Convention on Certain Conventional Weapons (CCW) who are negotiating in good faith to restrict the proliferation of lethal autonomy. A casual search of the Internet reveals a number of relevant video topics, ranging from the latest Russian advances in unmanned Ground Combat Vehicles (GCV) to a truly dystopian vision of swarming killer robots.

Instead, Mr. Pozdorovkin misleads his viewers by presenting a number of creepy autonomy outliers (including a sad Chinese engineer who designed and then married his sexbot because of his inability to attract a living female mate given China’s disproportionately male population due to their former One-Child Policy); employing a sinister soundtrack and facial recognition special effects; and using a number of vapid androids (e.g., Japan’s Kodomoroid) to deliver contrived narration hyping a future where the distinction between humanity and machines is blurred. Where are Siskel and Ebert when you need ’em?

4. “Walmart will soon use hundreds of AI robot janitors to scrub the floors of U.S. stores,” by Tom Huddleston Jr., CNBC, 5 December 2018.

The retail superpower Walmart is employing hundreds of robots in stores across the country, starting next month. These floor-scrubbing janitor robots will keep the stores’ floors immaculate using autonomous navigation that will be able to sense both people and obstacles.

The introduction of these autonomous cleaners will not be wholly disruptive to Walmart’s workforce operations, as they are only supplanting a task that is onerous for humans. But is this just the beginning? As humans’ comfort levels grow with the robots, will there then be an introduction of robot stocking, not unlike what is happening with Amazon? Will robots soon handle routine exchanges? And what of the displaced or under-employed workers resulting from this proliferation of autonomy, the widening economic gap between the haves and the have-nots, and the potential for social instability from neo-luddite movements in the Future OE?   Additionally, as these robots become increasingly conspicuous throughout our everyday lives in retail, food service, and many other areas, nefarious actors could hijack them or subvert them for terroristic, criminal, or generally malevolent uses.

The introduction of floor-cleaning robots at Walmart has larger implications than one might think. Robots are being considered for all the dull, dirty, and dangerous tasks assigned to the Army and the larger Department of Defense. The autonomous technology behind robots in Walmart today could have implications for our Soldiers at their home stations or on the battlefield of the future, conducting refueling and resupply runs, battlefield recovery, medevac, and other logistical and sustainment tasks.
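
The core navigation task these machines perform, covering every reachable patch of open floor while routing around people and obstacles, can be illustrated with a toy grid-based coverage planner. This is only a conceptual sketch (real floor scrubbers use far richer sensing, mapping, and path optimization); the grid and the greedy nearest-uncleaned-cell strategy are illustrative assumptions, not Walmart’s actual algorithm:

```python
from collections import deque

def coverage_path(grid, start):
    """Greedy coverage: repeatedly BFS to the nearest uncleaned reachable
    cell. Grid cells: 0 = open floor, 1 = obstacle (person or shelf)."""
    rows, cols = len(grid), len(grid[0])
    cleaned = {start}
    path = [start]
    pos = start
    while True:
        # BFS from the current position to the nearest uncleaned open cell
        prev = {pos: None}
        queue = deque([pos])
        target = None
        while queue:
            cell = queue.popleft()
            if cell not in cleaned:
                target = cell
                break
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in prev:
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        if target is None:
            return path  # every reachable open cell has been cleaned
        # Walk the BFS parent chain back to build this leg, then drive it,
        # cleaning each cell passed along the way
        leg = []
        while target is not None:
            leg.append(target)
            target = prev[target]
        for step in reversed(leg[:-1]):
            path.append(step)
            cleaned.add(step)
        pos = path[-1]

grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
route = coverage_path(grid, (0, 0))
open_cells = {(r, c) for r in range(3) for c in range(3) if grid[r][c] == 0}
assert open_cells <= set(route)   # every open cell gets cleaned
assert (1, 1) not in route        # the obstacle is never entered
```

The same plan-to-the-nearest-unfinished-task loop generalizes directly to the military logistics tasks mentioned below, which is why commercial autonomy stacks are of such interest to the Army.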

5. What our science fiction says about us, by Tom Cassauwers, BBC News, 3 December 2018.

“Right now the most interesting science fiction is produced in all sorts of non-traditional places,” says Anindita Banerjee, Associate Professor at Cornell University, whose research focuses on global sci-fi. Sci-fi and storytelling enable us to break out of our contemporary, mainstream echo chamber of parochialism to depict future technological possibilities and imagined worlds, political situations, and conflict. Unsurprisingly, different visions of the future imagining alternative realities are being written around the world – in China, Russia, and Africa. This rise of global science fiction challenges how we think about the evolution of the genre. Historically, our occidental bias led us to believe that sci-fi was spreading from Western centers out to the rest of the world, blinding us to the fact that other regions also have rich histories of sci-fi depicting future possibilities from their cultural perspectives. Chinese science fiction has boomed in recent years, with standout books like Cixin Liu’s The Three-Body Problem. Afrofuturism is also on the rise since the release of the blockbuster Black Panther.

The Mad Scientist Initiative uses Crowdsourcing and Storytelling as two innovative tools to help us envision future possibilities and inform the OE through 2050. Strategic lessons drawn from looking at the Future OE show us that the world of tomorrow will be far more challenging and dynamic. In our FY17 Science Fiction Writing Contest, we asked our community of action to describe Warfare in 2030-2050. The stories submitted showed virtually every new technology connected to and intersecting with other new technologies and advances. The future OE presents us with a combination of new technologies and societal changes that will intensify long-standing international rivalries, create new security dynamics, and foster instability as well as opportunities. Nor is sci-fi merely a global reflection on resistance; non-Western science fiction also taps into a worldwide consciousness, helping it conquer audiences beyond its home markets.

6. NVIDIA Invents AI Interactive Graphics, Nvidia.com, 3 December 2018.

A significant barrier to the modeling and simulation of dense urban environments has been the complexity of these areas in terms of building, vehicle, pedestrian, and foliage density. Megacities and their surrounding environments have such a massive concentration of entities that it has been a daunting task to re-create them digitally. Nvidia has recently developed a first-step solution to this ongoing problem. Using neural networks and generative models, the developers are able to train AI to create realistic urban environments based on real-world video.

As Nvidia admits, “One of the main obstacles developers face when creating virtual worlds, whether for game development, telepresence, or other applications is that creating the content is expensive. This method allows artists and developers to create at a much lower cost, by using AI that learns from the real world.” This process could significantly compress the development timeline, and while it wouldn’t address the other dimensions of urban operations — those entities that are underground or inside buildings (multi-floor and multi-room) — it would allow the Army to divert and focus more resources on those areas. The Chief of Staff of the Army has made readiness his #1 priority and stated, “In the future, I can say with very high degrees of confidence, the American Army is probably going to be fighting in urban areas,” and the Army “need[s] to man, organize, train and equip the force for operations in urban areas, highly dense urban areas.”1 Nvidia’s solution could enable and empower the force to meet that goal.



1 “Commentary: The missing link to preparing for military operations in megacities and dense urban areas,” by Claudia ElDib and John Spencer, Army Times, 20 July 2018, https://www.armytimes.com/opinion/commentary/2018/07/20/commentary-the-missing-link-to-preparing-for-military-operations-in-megacities-and-dense-urban-areas/.

89. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our September edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

 

1. Can you tell a fake video from a real one? and How hard is it to make a believable deepfake? by Tim Leslie, Nathan Hoad, and Ben Spraggon, Australian Broadcasting Corporation (ABC) News, 26 and 27 September 2018, respectively.

and

Deep Video Portraits by Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Nießner, Patrick Perez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt, YouTube, 17 May 2018.

Mad Scientist has previously sounded the alarm regarding the advent and potential impact of “DeepFakes” – deceptive files created using artificial neural networks and graphics processors that yield nearly undetectably fake imagery and videos. When distributed via Social Media, these files have the potential to “go viral” — duping, deceiving, and manipulating whole populations of viewers.

ABC’s first news piece provides several video samples, enabling you to test your skill at detecting which of the videos provided are real and which are fake. ABC then goes on to warn that “We are careening toward an infocalypse,” where we may soon find ourselves living in “A world without truth.”

Source: ABC News

In their second piece, ABC delves into the step-by-step mechanics of how DeepFakes are created, using former Australian PM Malcolm Turnbull as a use case, and posits placing this fabricated imagery into different, possibly compromising, scenes, manipulating reality for a credulous public.

The Deep Video Portraits YouTube video (snippets of which were used to illustrate both of the aforementioned ABC news pieces) was presented at the Generations SIGGRAPH conference, convened in Vancouver, BC, on 12-16 August 2018. In conjunction with the ABC articles, the combined narration and video in Deep Video Portraits provide a comprehensive primer on how photo realistic, yet completely synthetic (i.e., fictional) re-animations can be accomplished using source and target videos.

Source: Deep Video Portraits – SIGGRAPH 2018 via YouTube /
Christian Theobalt

When combined with the ubiquity of Social Media, these public domain AI algorithms (e.g., FakeApp, DerpFakes, DeepFakes) are democratizing an incredibly disruptive capability. The U.S. must develop and implement means (e.g., education) to “inoculate” its citizenry and mitigate this potentially devastating Gray Zone weapon.
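At its core, the reenactment technique behind Deep Video Portraits can be thought of as parameter transfer: the source actor’s head pose and expression are tracked frame by frame and used to re-render the target’s face while preserving the target’s identity. The toy sketch below is purely illustrative (real systems track dense 3D face-model parameters and use neural rendering; the dictionary keys and function names here are our own invented stand-ins):

```python
# Toy illustration of the "parameter transfer" idea behind face
# reenactment. Real pipelines fit a parametric 3D face model and use
# a neural network to render photorealistic frames; here a "frame" is
# just a labeled dict of identity and performance attributes.

def track(frame):
    """Pretend face tracker: split a frame into identity vs. pose/expression."""
    identity = {k: v for k, v in frame.items() if k in ("face_shape", "skin")}
    performance = {k: v for k, v in frame.items() if k in ("head_yaw", "smile")}
    return identity, performance

def reenact(source_frame, target_frame):
    """Drive the target's identity with the source's pose and expression."""
    _, source_performance = track(source_frame)
    target_identity, _ = track(target_frame)
    # The synthetic frame keeps WHO the target is, but moves as the source does.
    return {**target_identity, **source_performance}

source = {"face_shape": "narrow", "skin": "pale", "head_yaw": 30, "smile": 0.9}
target = {"face_shape": "round", "skin": "tan", "head_yaw": 0, "smile": 0.0}

fake = reenact(source, target)
print(fake)  # {'face_shape': 'round', 'skin': 'tan', 'head_yaw': 30, 'smile': 0.9}
```

The unsettling implication is exactly this separation of identity from performance: once the two are decoupled, anyone’s likeness can be driven by anyone else’s behavior.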

“Attacking an adversary’s most important center of gravity — the spirit of its people — no longer requires massive bombing runs or reams of propaganda. All it takes is a smartphone and a few idle seconds. And anyone can do it.” — P.W. Singer and Emerson T. Brooking in LikeWar – The Weaponization of Social Media

 

2. “The first ‘social network’ of brains lets three people transmit thoughts to each other’s heads,” by Emerging Technology from the arXiv, MIT Technology Review, 29 September 2018.

In 2015, scientists at the University of Washington in Seattle connected two people via a brain-to-brain interface; the connected individuals were able to play a game of 20 questions. Now these scientists have announced the first group brain-to-brain network. They call the network “BrainNet,” and its connected individuals were able to play a collaborative Tetris-like game.

Source: BrainNet: A Multi-Person Brain-to-Brain Interface for
Direct Collaboration Between Brains / https://arxiv.org/pdf/1809.08632.pdf

To date, our depictions of the Future Operational Environment have described the exploding Internet of Things and even the emerging concept of an Internet of Battle Things. The common idea here is connecting things – sensors, weapons, and AI – to a human in or on the loop. Adding the brain to this network opens incredible opportunities and vulnerabilities. We should start asking ourselves questions about this idea: 1) Could humans control connected sensors and weapons with thought alone? 2) Could this be a form of secure communications in the future? and 3) Could the brain be hacked, and what vulnerabilities does this add? (Read Battle of the Brain.) There are many more questions, but for now maybe we should broaden our ideas about connectivity to the Internet of Everything and Everyone.

 

3. “Scientists get funding to grow neural networks in petri dishes,” Lehigh University, 14 September 2018.

An overview of running image recognition on living neuron testbed / Source: Xiaochen Guo / Lehigh University

The future of computing may not necessarily be silicon or quantum-based — it may be biological! The National Science Foundation (NSF) recently awarded an interdisciplinary team of biologists and computer engineers $500,000 in support of Understanding the Brain and the BRAIN Initiative, a coordinated research effort that seeks to accelerate the development of new neurotechnologies. The intent is to help computer engineers develop new ways to think about the design of solid state machines, and it may influence other brain-related research using optogenetics, a biological technique that uses light to control cells. The team will encode images as “spike train stimuli” (patterns similar to a two-dimensional bar code). The encoded spike train will then be optically applied to a group of networked in vitro neurons with optogenetic labels. This hybrid project could lead to a better understanding of how organic computers and brains function. It suggests a radically different vision of future computing where, potentially, everything from buildings to computers could be grown in much the same way that we “grow” plants or animals today.
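As a rough illustration of the encoding step described above (the function name and threshold scheme here are our own assumptions; the article does not specify Lehigh’s actual stimulus format), an image can be flattened into a bar-code-like binary spike pattern, where each bright pixel becomes a light pulse delivered to an optogenetically labeled neuron:

```python
# Hedged sketch: turn a tiny grayscale image (0-255 values) into a
# 2D binary "spike train" pattern, one row of spikes per image row.
# A spike (1) means "deliver a light pulse to this neuron at this
# time step"; the threshold and layout are illustrative assumptions.

def image_to_spike_train(image, threshold=128):
    """Map pixel intensities to binary spikes, bar-code style."""
    return [[1 if pixel >= threshold else 0 for pixel in row] for row in image]

image = [
    [255,   0, 200],
    [ 10, 130,  90],
]
spikes = image_to_spike_train(image)
print(spikes)  # [[1, 0, 1], [0, 1, 0]]
```

The interesting research question is the reverse direction: whether the networked neurons’ responses to such patterns can be read back out as a useful computation, such as image recognition.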

 

4. “These ‘Robotic Skins’ Turn Everyday Objects into Robots,” by Rachael Lallensack, Smithsonian.com, 19 September 2018, reviewed by Ms. Marie Murphy.

Source: Yale via Smithsonian.com

A team of roboticists at Yale University published a report announcing the development of OmniSkins, pliable material with embedded sensors that can animate ordinary, inert objects, turning them into robots on contact. These removable sheets can be reused and reconfigured for a variety of functions, from making foam tubes crawl like worms to turning static foam arms into a device that can grab and hold objects. Initially developed for NASA, demonstrations reveal that OmniSkins can make a stuffed animal walk when wrapped around its legs and correct a person’s posture when embedded in their shirt. While these are fun examples, the realistic military applications are vast and varied. OmniSkins could represent a new development in performance-enhancing exoskeletons, enabling them to be lighter and more flexible. These sheets can turn ordinary objects into useful machines in a matter of minutes and can be modified with cameras or other sensors to fit the needs of the mission.

 

5. “Movement-enhancing exoskeletons may impair decision making,” by Jennifer Chu, MIT, 5 October 2018.

PowerWalk / Source: Bionic Power Inc. via MIT

Researchers from MIT have discovered that the use of exoskeletons to enhance speed, power, and endurance could have a negative effect on attention, decision-making, and cognition. The researchers had subjects navigate an obstacle course while performing several cognitive tasks, from responding to visual signals to following their squad leader at a defined distance. They found that 7 out of 12 subjects performed worse on the cognitive tasks while wearing an exoskeleton, with more than half showing a marked decline in reaction time across the various tests. This presents an interesting challenge for technology developers. Does a positive solution in one area negatively affect another, seemingly unrelated, area? Would the subjects have performed better with prolonged exoskeleton training, as opposed to a few days? If so, this presents an additional burden and training demand on Soldiers and the Army. Will trade studies involving not just physical measures, but cognitive ones, now need to be integrated into all new Army technology developments, and what does this do to the development timeline?

 

6. “Researchers Create ‘Spray On’ 2-D Antennas,” by Michael Koziol, IEEE Spectrum, 21 September 2018.

Drexel’s MXene “Antenna Spray Paint” / Source: YouTube via IEEE Spectrum

Researchers from Drexel University have developed a novel solution for reducing the size and weight of traditional antennas. Using MXene, a two-dimensional material in which a metal like titanium or molybdenum is bonded with carbides or nitrides, they were able to produce a spray-on antenna. By dissolving the MXene in water and using a commercial off-the-shelf spray gun, one can rapidly design, customize, and deploy a working antenna. The spray-on antenna is 100 nm thick (versus roughly 3,000 nm for a traditional copper antenna) and has a higher conductivity than carbon nanotubes – a previous solution to the small and thin antenna problem. On a hyperactive battlefield where Soldiers may need on-demand solutions on a compressed timeline, MXene spray-on antennas may be a potential game changer. How much time, materials, and processing can be saved in an operational environment if a Soldier can quickly produce a low-profile antenna to a custom specification? What does this mean for logistics if repair parts for antennas no longer need to be shipped from outside the theater of operations?

 

7. “NASA’s Asteroid-Sampling Spacecraft Begins Its Science Work Today,” by Mike Wall, Space.com, 11 September 2018.

NASA Infographic on the OSIRIS-REx Mission / Source: https://www.space.com/11808-nasa-asteroid-mission-osiris-rex-1999-rq36-infographic.html

NASA’s OSIRIS-REx (short for Origins, Spectral Interpretation, Resource Identification, Security – Regolith Explorer) spacecraft commenced studying near-Earth asteroid Bennu’s dust plumes from afar on 11 September 2018. Once the probe achieves orbit around the ~500m-wide space rock on 31 December 2018, it will further explore that body’s dust, dirt, and gravel. Then, in mid-2020, OSIRIS-REx will swoop down to the surface to collect a sample of material and return it to Earth in a special return capsule. While this piece represents very cool extraterrestrial science, it is also significant for what it bodes for the Future Operational Environment, Multi-Domain Operations in the Space Domain, and our newly established Space Force.

“The $800 million OSIRIS-REx mission will … contribute to planetary-defense efforts. For example, the probe’s observations should help researchers better understand the forces that shape potentially dangerous asteroids’ paths through space… (Bennu itself is potentially hazardous; there’s a very small chance that it could hit Earth in the late 22nd century.)”

OSIRIS-REx is not the only probe sampling asteroids – Japan’s Hayabusa2 spacecraft is preparing to touch down on the asteroid Ryugu this month. NASA has estimated that the total value of resources locked in asteroids is equivalent to $100 billion for every man, woman, and child on Earth.

This century’s new space race to capitalize on and exploit our solar system’s heretofore untapped mineral wealth, while defending critical space assets, will demand that the U.S. budgets for, develops, and maintains future space-based capabilities (initially unmanned, but eventually manned, as required by mission) to protect and defend our national and industrial space interests.

 

8. “Soldiers who obliterate enemy fighters with drones will be guided on the morality of their actions by specially trained army chaplains,” by Roy Tingle, Daily Mail Online, 25 September 2018.

Source: Defense Visual Information Distribution Service (DIVIDS)

In possibly an all-time record for the worst news article title, it has been revealed that the British Army is training ethicists to teach soldiers about the morality of killing with drones. Chaplains will spend one year studying for a Master’s degree in Ethics at Cardiff University so that they can instruct officers on the moral dilemmas involved in killing an enemy from thousands of miles away. Officials have long been concerned about the emotional trauma suffered by drone pilots, as well as the risk that they will be more likely to use deadly force when the confrontation plays out on a computer screen. This is about the speed of future combat and the decisive action that will be needed on the battlefield. War will remain a human endeavor, but our Soldiers will be stressed to exercise judgment and fight at ever-increasing machine speed. The Army must be prepared to enter new ethical territory and make difficult decisions about the creation and employment of cutting-edge technologies. While the Army holds itself to a high ethical standard, new converging technologies may come at an ethical cost. Guidance, policy, and law must keep pace with what is employed on the battlefield. Many of these ethical dilemmas lack definite answers and are considerations that most of our future adversaries are unlikely to weigh.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

80. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Gartner Hype Cycle / Source:  Nicole Saraco Loddo, Gartner

1. “5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies,” by Kasey Panetta, Gartner, 16 August 2018.

Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Ecosystems, 3) Do-It-Yourself Biohacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies are 5–10 years from the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.

[Gartner disclaimer:  Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]

Artificial Intelligence by GLAS-8 / Source: Flickr

2. “Should Evil AI Research Be Published? Five Experts Weigh In,” by Dan Robitzski, Futurism, 27 August 2018.

The following rhetorical (for now) question was posed to the “AI Race and Societal Impacts” panel during last month’s Joint Multi-Conference on Human-Level Artificial Intelligence in Prague, Czech Republic:

“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”

The panel’s responses ranged from controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial — “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”

While no consensus was reached, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game-changing technologies. Unlike nuclear, biological, and chemical weapons, no internationally agreed and implemented control protocols can be applied to AI: there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted-access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operations (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC, with its Military-Civil Fusion Centers, and integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!

Scales Sword Of Justice / Source: https://www.maxpixel.net/

3. “Can Justice be blind when it comes to machine learning? Researchers present findings at ICML 2018,” The Alan Turing Institute, 11 July 2018.

Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. This conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and help practitioners develop fair and privacy-preserving data analyses. Like ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commanders’ decisions) – especially lethal ones. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, write the very programming that enables machines to learn and make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine learning outputs are verified and understood to preclude the inadvertent introduction of human biases. Read the full report here.

Robot PNG / Source: pngimg.com

4. “Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans,” by Katyanna Quach, The Register, 3 August 2018.

In a study published in PLOS ONE, researchers found that a robot’s personality affected humans’ decision-making. In the study, participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant not to be shut down. The participants’ actions were then recorded. Unexpectedly, a large number of participants resisted shutting down the functional robots after they made their plea, as opposed to the sociable ones. This is significant: beyond the unexpected result, it shows that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities, despite knowing they are robotic, if those entities mimic and emulate human behavior. If the Army believes its Soldiers will be accompanied and augmented heavily by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.

IoT / Source: Pixabay

5. “Spending on Internet of Things May More Than Double to Over Half a Trillion Dollars,” by Aaron Pressman, Fortune, 8 August 2018.

While the advent of the Internet brought computing and communication ever deeper into global households, the smartphone revolution brought about constant personal interconnectivity. Today and into the future, not only are humans connected to the global commons via their smart devices, but a multitude of devices, vehicles, and accessories are being integrated into the Internet of Things (IoT). We have previously addressed the IoT as a game-changing technology. The IoT is composed of trillions of internet-linked items, creating both opportunities and vulnerabilities. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).

Large companies are expected to quickly grow their spending on Internet-connected devices (e.g., appliances, home devices such as Google Home and Alexa, and various sensors) to approximately $520 billion. This is a massive investment in what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it will likely expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):

– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data; allowing service members to use personal devices; and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, inciting some critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.

– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.

6. “Battlefield Internet: A Plan for Securing Cyberspace,” by Michèle Flournoy and Michael Sulmeyer, Foreign Affairs, September/October 2018. Review submitted by Ms. Marie Murphy.

With a “cyber Pearl Harbor” looking increasingly plausible, intelligence officials warn of the rising danger of cyber attacks. The effects of such attacks have already been felt around the world. They have the power to break the trust people have in institutions, companies, and governments, as they act in the undefined gray zone between peace and all-out war. The military implications are quite clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence communications and materiel and personnel networks. Beyond the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have felt the effects of cyber attacks, while others are reluctant to invest in cyber protection measures. In this way, civilians are affected by acts of cyber warfare, and attacks on a country may be directed not at the opposing military but at a state’s civilian population, as in the case of the power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to most effectively combat cyber threats.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

65. “The Queue”

[Editor’s Note:  Now that another month has flown by, Mad Scientist Laboratory is pleased to present our June edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Source: KUO CHENG LIAO

1. Collaborative Intelligence: Humans and AI are Joining Forces, by H. James Wilson and Paul R. Daugherty, Harvard Business Review, July – August 2018.

 

Source: OpenAI

A Team of AI Algorithms just crushed Expert Humans in a Complex Computer Game, by Will Knight, MIT Technology Review, June 25, 2018.

I know — I cheated and gave you two articles to read. These “dueling” articles demonstrate the early state of our understanding of the role of humans in decision-making. The Harvard Business Review article describes findings where human – Artificial Intelligence (AI) partnerships combine the leadership, teamwork, creativity, and social skills of humans with the speed, scalability, and quantitative capabilities of AI. This is basically the idea of “centaur” chess, which has been prevalent in discussions of human-AI collaboration. Conversely, the MIT Technology Review article describes ongoing work to build AI algorithms that are incentivized to collaborate with other AI teammates. Could it be that collaboration is not a uniquely human attribute? The ongoing work on integrating AI into the workforce and in support of CEO decision-making could inform the Army’s investment strategy for AI. Julianne Gallina, one of our proclaimed Mad Scientists, described a future where everyone would have an entourage and Commanders would have access to a “Patton in the Pocket.” How the human operates on or in the loop and how Commanders make decisions at machine speed will be informed by this research. In August, the Mad Scientist team will conduct a conference focused on Learning in 2050 to further explore the ideas of human-AI teaming with intelligent tutors and mentors.

Source: Doubleday

2. Origin: A Novel, by Dan Brown, Doubleday, October 3, 2017, reviewed by Ms. Marie Murphy.

Dan Brown’s famous symbologist Robert Langdon returns to avenge the murder of his friend, tech developer and futurist Edmund Kirsch, who is killed in the middle of presenting what he advertised as a life-changing discovery. Langdon teams up with Kirsch’s most faithful companion, his AI assistant Winston, in order to release Edmund’s presentation to the public. Winston is able to access Kirsch’s entire network, give real-time directions, and make decisions based on ambiguous commands — all via Kirsch’s smartphone. However, this AI system doesn’t appear to know Kirsch’s personal password, and can only assist Langdon in his mission to find it. An omnipresent and portable assistant like Winston could greatly aid future warfighters and commanders. Having this scope of knowledge on command is beneficial, but future AI will be able not only to regurgitate data but to present the Soldier with courses-of-action analyses and decision options based on that data. Winston was also able to mimic emotion via machine learning, which can reduce Soldier stress levels and present information in a humanistic manner. Once an AI has been attached to a Soldier for a period of time, it can learn that Soldier’s particular preferences and habits, and make basic or routine decisions and assumptions for that individual, anticipating their needs, as Winston does for Kirsch and Langdon.

Source: Getty Images adapted by CNAS

3. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority, by Richard Danzig, Center for a New American Security, 30 May 2018.

Mad Scientist Laboratory readers are already familiar with the expression, “warfare at machine speed.” As our adversaries close the technology gap and potentially overtake us in select areas, there is clearly a “need for speed.”

“… speed matters — in two distinct dimensions. First, autonomy can increase decision speed, enabling the U.S. to act inside an adversary’s operations cycle. Secondly, ongoing rapid transition of autonomy into warfighting capabilities is vital if the U.S. is to sustain military advantage.” — Defense Science Board (DSB) Report on Autonomy, June 2016 (p. 3).

In his monograph, however, author and former Clinton Administration Secretary of the Navy Richard Danzig contends that “superiority is not synonymous with security;” citing the technological proliferation that almost inevitably follows technological innovations and the associated risks of unintended consequences resulting from the loss of control of military technologies. Contending that speed is a form of technological roulette, former Secretary Danzig proposes a control methodology of five initiatives to help mitigate the associated risks posed by disruptive technologies, and calls for increased multilateral planning with both our allies and opponents. Unfortunately, as with the doomsday scenario played out in Nevil Shute’s novel On the Beach, it is “… the little ones, the Irresponsibles…” that have propagated much of the world’s misery in the decades following the end of the Cold War. It is the specter of these Irresponsible nations, along with non-state actors and Super-Empowered Individuals, experimenting with and potentially unleashing disruptive technologies, who will not be contained by any non-proliferation protocols or controls. Indeed, neither will our near-peer adversaries, if these technologies promise to offer a revolutionary, albeit fleeting, Offset capability.

U.S. Vice Chairman of the Joint Chiefs of Staff Air Force Gen. Paul Selva, Source: Alex Wong/Getty Images

4. The US made the wrong bet on radiofrequency, and now it could pay the price, by Aaron Mehta, C4ISRNET, 21 June 2018.

This article illustrates how the Pentagon’s faith in its own technology led the Department of Defense to trust that it would maintain dominance over the electromagnetic spectrum for years to come. That decision left the United States vulnerable to new leaps in technology made by our near-peers. GEN Paul Selva, Vice Chairman of the Joint Chiefs of Staff, has concluded that the Pentagon must now catch up with near-peer nations and reestablish our dominance in electronic warfare and networking (spoiler alert – we have not yet!). This is an example of a pink flamingo (a known known), as we know our near-peers have surpassed us in some areas of technological dominance. In looking at technological forecasts for the next decade, we must ensure that the U.S. is making the right investments in Science and Technology to keep up with our near-peers. This article demonstrates that timely and decisive policy-making will be paramount in keeping up with our adversaries in the fast-changing and agile Operational Environment.

Source: MIT CSAIL

5. MIT Device Uses WiFi to ‘See’ Through Walls and Track Your Movements, by Kaleigh Rogers, MOTHERBOARD, 13 June 2018.

Researchers at MIT have discovered a way to “see” people through walls by tracking WiFi signals that bounce off of their bodies. Previously, the technology’s fidelity was limited to “blobs” behind a wall, essentially telling you that someone was present but giving no indication of behavior. The breakthrough is using a trained neural network to identify the bouncing signals and compare them with the shape of the human skeleton. This is significant because it could give an added degree of specificity to first responders or fire teams clearing rooms. The ability to determine whether an individual on the other side of the wall is potentially hostile and holding a weapon, or a non-combatant holding a cellphone, could be the difference between life and death. This also raises questions about countermeasures. WiFi signals are seemingly everywhere and, with this technology, could prove to be a large signature emitter. Will future forces need to incorporate uniforms or materials that absorb these waves or scatter them in a way that distorts them?
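The core idea, mapping reflected RF signal features to body-pose labels with a trained model, can be sketched in miniature. The feature vectors, pose labels, and nearest-neighbor rule below are purely illustrative stand-ins for the MIT system’s deep neural network and real radio measurements:

```python
# Hedged toy sketch of RF-based pose sensing: classify simulated
# WiFi reflection "feature vectors" into pose labels with a
# 1-nearest-neighbor rule. The real MIT system uses a deep neural
# network trained on radio heatmaps; everything here (features,
# labels, distances) is an illustrative assumption.
import math

# Pretend training data: (reflection features, pose label)
TRAINING = [
    ((0.9, 0.1, 0.2), "standing"),
    ((0.4, 0.8, 0.3), "arm raised"),
    ((0.1, 0.1, 0.9), "crouching"),
]

def classify_pose(features):
    """Return the pose label of the closest training example."""
    return min(TRAINING, key=lambda ex: math.dist(ex[0], features))[1]

print(classify_pose((0.5, 0.75, 0.25)))  # arm raised
```

Even this toy version makes the countermeasure question concrete: anything that distorts the mapping from body configuration to reflected-signal features (absorption, scattering) degrades the classifier’s training-time assumptions.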

Source: John T. Consoli / University of Maryland

6. People recall information better through virtual reality, says new UMD study, University of Maryland, EurekaAlert, 13 June 2018.

A study performed by the University of Maryland determined that people will recall information better when seeing it first in a 3D virtual environment, as opposed to a 2D desktop or mobile screen. The Virtual Reality (VR) system takes advantage of what’s called “spatial mnemonic encoding” which allows the brain to not only remember something visually, but assign it a place in three-dimensional space which helps with retention and recall. This technique could accelerate learning and enhance retention when we train our Soldiers and Leaders. As the VR hardware becomes smaller, lighter, and more affordable, custom mission sets, or the skills necessary to accomplish them, could be learned on-the-fly, in theater in a compressed timeline. This also allows for education to be distributed and networked globally without the need for a traditional classroom.

Source: Potomac Books

7. Strategy Strikes Back: How Star Wars Explains Modern Military Conflict, edited by Max Brooks, John Amble, ML Cavanaugh, and Jaym Gates; Foreword by GEN Stanley McChrystal, Potomac Books, May 1, 2018.

This book is fascinating for two reasons:  1) It utilizes one of the greatest science fiction series (almost a genre unto itself) in order to brilliantly illustrate some military strategy concepts and 2) It is chock full of Mad Scientists as contributors. One of the editors, John Amble, is a permanent Mad Scientist team member, while another, Max Brooks, author of World War Z, and contributor, August Cole, are officially proclaimed Mad Scientists.

The book takes a number of scenes and key battles in Star Wars and uses historical analogies to help present complex issues like civil-military command structure, counterinsurgency pitfalls, force structuring, and battlefield movement and maneuver.

One of the more interesting portions of the book is the concept of ‘droid armies vs. clone soldiers and the juxtaposition of that with the future testing of manned-unmanned teaming (MUM-T) concepts. There are parallels in how we think about what machines can and can’t do and how they think and learn.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

49. “The Queue”

(Editor’s Note: Beginning today, the Mad Science Laboratory will publish a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we will address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!)

1. Army of None: Autonomous Weapons and the Future of War, by Paul Scharre, Senior Fellow and Director of the Technology and National Security Program, Center for a New American Security.

One of our favorite Mad Scientists, Paul Scharre, has authored a must-read for all military Leaders. This book will help Leaders understand the definitions of robotic and autonomous weapons; how they are proliferating across states, non-states, and super-empowered individuals (his chapter on Garage Bots makes it clear this proliferation is not limited to states); and lastly the ethical considerations that come up at every Mad Scientist Conference. During these Conferences, we have discussed the idea of algorithm-versus-algorithm warfare and what role human judgement plays in this version of future combat. Paul’s chapters on flash war really challenge our ideas of how a human operates in the loop, and his analogies drawn from the financial markets are helpful for developing the questions needed to explore future possibilities and develop policies for dealing with warfare at machine speed.

Source: Rosoboronexport via YouTube
2. “Convergence on retaining human control of weapons systems,” in Campaign to Stop Killer Robots, 13 April 2018.

April 2018 marked the fifth anniversary of the Campaign to Stop Killer Robots. Earlier this month, 82 countries and numerous NGOs also convened at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland, where many stressed the need to retain human control over weapons systems and the use of force. While the majority in attendance proposed moving forward this November to start negotiations towards a legally binding protocol addressing fully autonomous weapons, five key states rejected moving forward in negotiating new international law – France, Israel, Russia, the United Kingdom, and the United States. Mad Scientist notes that the convergence of a number of emerging technologies (synthetic prototyping, additive manufacturing, advanced modeling and simulations, software-defined everything, advanced materials) are advancing both the feasibility and democratization of prototype warfare, enabling and improving the engineering of autonomous weapons by non-state actors and super-empowered individuals alike. The genie is out of the bottle – with the advent of the Hyperactive Battlefield, advanced engagements will collapse the decision-action cycle to mere milliseconds, granting a decisive edge to the side with more autonomous decision-action.

Source: The Stack
3. “China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems,” by Elsa Kania, Adjunct Fellow with the Technology and National Security Program, Center for a New American Security, in Lawfare, 17 Apr 18.

Mad Scientist Elsa Kania addresses the apparent contradiction between the People’s Republic of China’s diplomatic commitment to limit the use of fully autonomous lethal weapons systems and the PLA’s active pursuit of AI dominance on the battlefield. The PRC’s decision on lethal autonomy, and how it defines the role of human judgement in lethal operations, will have tactical, operational, and strategic implications. In TRADOC’s Changing Character of Warfare assessment, we addressed the idea of an asymmetry in ethics, where the differing ethical choices non-state and state adversaries make on the integration of emerging technologies could have real battlefield overmatch implications. This is a clear pink flamingo, where we know the risks but struggle with addressing the threat. It is also an area where technological surprise is likely, as systems could have the ability to move from human-in-the-loop mode to fully autonomous with the flip of a switch.

Source: HBO.com
4. “Maeve’s Dilemma in Westworld: What Does It Mean to be Free?,” by Marco Antonio Azevedo and Ana Azevedo, in Institute of Art and Ideas, 12 Apr 18. [Note: Best viewed on your personal device as access to this site may be limited by Government networks]

While this article focuses primarily on a higher-level philosophical interpretation of human vs. machine (or artificial intelligence, being, etc.), the core arguments and discussion remain relevant to an Army that is looking to increase its reliance on artificial intelligence and robotics. Technological advancements in these areas continue to trend toward modeling humans (both in form and the brain). However, the closer we get to making this a reality, the closer we get to confronting questions about consciousness and artificial humanity. Are we prepared to face these questions earnestly? Do we want an artificial entity that is, essentially, human? What do we do when that breakthrough occurs? Does biological vs. synthetic matter if the being “achieves” personhood? For additional insights on this topic, watch Linda MacDonald Glenn’s Ethics and Law around the Co-Evolution of Humans and AI presentation from the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University, 25-26 Jul 17.

5. Do You Trust This Computer?, directed by Chris Paine, Papercut Films, 2018.

The Army, and society as a whole, continues to offload certain tasks to, and receive information from, artificial intelligence sources. Future Army Leaders will be heavily influenced by AI-processed and AI-distributed information used for decision-making. But how much trust should we put in the information we get? Is it safe to be so reliant? What is the correct ratio of human to machine contribution in decision-making? Army Leaders need to be prepared to make AI one tool of many: to understand its value, know how to interpret its information, know when to question its output, and apply appropriate context. Elon Musk has shown his support for this documentary and tweeted about its importance.

6. Ready Player One, directed by Steven Spielberg, Amblin Entertainment, 2018.

Adapted from the novel of the same name, this film visualizes a future world where much of society is consumed by a massive online virtual reality “game” known as the OASIS. As society transitions from the physical to the virtual (texting, email, Skype, MMORPGs, Amazon, etc.), large groups of people will become less reliant on the physical world’s governmental and economic systems that have been established for centuries. As virtual money begins to have real value, physical money will begin to lose value. If people can get many of their goods and services through a virtual world, they will become less reliant on the physical world. Correspondingly, physical-world social constructs will have less control over the people who still inhabit it but spend increasing amounts of time interacting in the virtual world. This has huge implications for the future geopolitical landscape, as varied and geographically diverse groups of people begin congregating and forming virtual allegiances across pre-established, but increasingly irrelevant, physical borders. This will dilute the effectiveness, necessity, and control of the nation-state and transfer that power to the company (or companies) facilitating the virtual environment.

Source: XO, “SoftEcologies,” suckerPUNCH
7. “US Army could enlist robots inspired by invertebrates,” by Bonnie Burton, in c/net, 22 Apr 18.

As if Boston Dynamics’ SpotMini isn’t creepy enough, the U.S. Army Research Laboratory (ARL) and the University of Minnesota are developing a flexible, soft robot inspired by squid and other invertebrates that Soldiers can create on-demand using 3-D printers on the battlefield. Too often, media visualizations have conditioned us to think of robots in anthropomorphic terms (with corresponding limitations). This and other breakthroughs in “soft,” polymorphic, printable robotics may provide Soldiers in the Future Operational Environment with hitherto unimagined on-demand, tailorable autonomous systems that will assist operations in the tight confines of complex, congested, and non-permissive environments (e.g., dense urban and subterranean). Soft robotics may also prove to be more resilient in arduous conditions. This development changes the paradigm for how robotics are imagined in both design and application.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

For additional insights into the Mad Scientist Initiative and how we continually explore the future through collaborative partnerships and continuous dialogue with academia, industry, and government, check out this Spy Museum’s SPYCAST podcast.