80. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Gartner Hype Cycle / Source:  Nicole Saraco Loddo, Gartner

1. “5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies,” by Kasey Panetta, Gartner, 16 August 2018.

Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Eco-Systems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.

[Gartner disclaimer:  Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]

Artificial Intelligence by GLAS-8 / Source: Flickr

2. “Should Evil AI Research Be Published? Five Experts Weigh In,” by Dan Robitzski, Futurism, 27 August 2018.

The following rhetorical (for now) question was posed to the “AI Race and Societal Impacts” panel during last month’s Joint Multi-Conference on Human-Level Artificial Intelligence in Prague, Czech Republic:

“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”

The panel’s responses ranged from the controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial — “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”

While no consensus was reached, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game-changing technologies. Unlike Nuclear, Biological, and Chemical weapons, no internationally agreed-upon and implemented control protocols can be applied to AI, as there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted-access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operations (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC with its Military-Civil Fusion Centers, integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!

Scales Sword Of Justice / Source: https://www.maxpixel.net/

3. “Can Justice be blind when it comes to machine learning? Researchers present findings at ICML 2018,” The Alan Turing Institute, 11 July 2018.

Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. This conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and practitioners develop fair and privacy-preserving data analyses. Paralleling ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commander’s decisions) – especially lethal decisions. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, write the very programming that will enable machines to learn and make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine-based learning outputs are verified and understood to preclude the inadvertent introduction of human biases. Read the full report here.
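To make “machine learning fairness” concrete, the minimal sketch below computes one common group-fairness measure, the demographic parity gap: the difference in favorable-outcome rates between two subgroups. The data, threshold, and names are hypothetical illustrations, not drawn from the ICML 2018 proceedings.

```python
# Minimal sketch of one group-fairness check: the demographic parity gap.
# All data and thresholds here are invented for illustration and are not
# drawn from the ICML 2018 papers summarized above.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in favorable-outcome rates between two subgroups.

    y_pred: binary decisions from a model (1 = favorable outcome)
    group:  subgroup membership (0 or 1), e.g., a protected attribute
    """
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)        # hypothetical protected attribute
scores = rng.random(1000) + 0.05 * group     # model scores, slightly skewed
y_pred = (scores > 0.5).astype(int)          # thresholded decisions

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
# A gap near 0 indicates the subgroups receive favorable outcomes at similar
# rates; a regulator-style audit would flag a large gap for review.
```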

Robot PNG / Source: pngimg.com

4. “Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans,” by Katyanna Quach, The Register, 3 August 2018.

In a study published in PLOS ONE, researchers found that a robot’s personality affected a human’s decision-making. In the study, participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant to resist shutting it down. The participants’ actions were then recorded. Unexpectedly, a large number of participants resisted shutting down the functional robots after they made their plea, as opposed to the sociable ones. This is significant: beyond the unexpected result, it shows that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities, despite knowing they are robotic, if those entities mimic and emulate human behavior. If the Army believes its Soldiers will be accompanied and augmented heavily by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.
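For readers curious how a result like this is typically tested, here is a sketch of a chi-square test of independence on a switch-off contingency table. The counts are invented for illustration; they are not the PLOS ONE study’s data.

```python
# Hypothetical illustration of how such an experimental result is tested:
# a chi-square test of independence on switch-off counts by robot
# personality. These counts are invented; see the PLOS ONE paper for data.
from scipy.stats import chi2_contingency

#                   [switched off, left on]
sociable_counts   = [30, 12]   # hypothetical
functional_counts = [22, 21]   # hypothetical

chi2, p, dof, expected = chi2_contingency([sociable_counts, functional_counts])
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # small p suggests personality matters
```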

IoT / Source: Pixabay

5. “Spending on Internet of Things May More Than Double to Over Half a Trillion Dollars,” by Aaron Pressman, Fortune, 8 August 2018.

While the advent of the Internet brought computing and communication ever deeper into global households, the smartphone revolution brought about constant personal interconnectivity. Today and into the future, not only are humans being connected to the global commons via their smart devices, but a multitude of devices, vehicles, and various accessories are being integrated into the Internet of Things (IoT). We have previously addressed the IoT as a game-changing technology. The IoT is composed of trillions of internet-linked items, creating opportunities and vulnerabilities alike. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).

Large companies are expected to quickly grow their spending on Internet-connected devices (e.g., appliances, home devices [such as Google Home, Alexa, etc.], and various sensors) to approximately $520 billion. This is a massive investment in what will likely become the Internet of Everything (IoE). While growth is currently focused on known device categories, it will likely expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):

– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data, allowing service members to use personal devices, and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.

– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.

6. “Battlefield Internet: A Plan for Securing Cyberspace,” by Michèle Flournoy and Michael Sulmeyer, Foreign Affairs, September/October 2018. Review submitted by Ms. Marie Murphy.

With the possibility of a “cyber Pearl Harbor” growing ever more likely, intelligence officials warn of the rising danger of cyber attacks. The effects of these attacks have already been felt around the world. They have the power to break the trust people have in institutions, companies, and governments, as they act in the undefined gray zone between peace and all-out war. The military implications are quite clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence communications and materiel and personnel networks. Besides the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have felt the effects of cyber attacks, while others are reluctant to invest in cyber protection measures. In this way, civilians become affected by acts of cyber warfare, and attacks on a country may be directed not at the opposing military, but at the civilian population of a state, as in the case of the power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to most effectively combat cyber threats.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

65. “The Queue”

[Editor’s Note:  Now that another month has flown by, Mad Scientist Laboratory is pleased to present our June edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Source: KUO CHENG LIAO

1. Collaborative Intelligence: Humans and AI are Joining Forces, by H. James Wilson and Paul R. Daugherty, Harvard Business Review, July – August 2018.

Source: OpenAI

A Team of AI Algorithms just crushed Expert Humans in a Complex Computer Game, by Will Knight, MIT Technology Review, June 25, 2018.

I know — I cheated and gave you two articles to read. These “dueling” articles demonstrate the early state of our understanding of the role of humans in decision-making. The Harvard Business Review article describes findings where human – Artificial Intelligence (AI) partnerships combine the leadership, teamwork, creativity, and social skills of humans with the speed, scalability, and quantitative capabilities of AI. This is basically the idea of “centaur” chess, which has been prevalent in discussions of human and AI collaboration. Conversely, the MIT Technology Review article describes ongoing work to build AI algorithms that are incentivized to collaborate with other AI teammates. Could it be that collaboration is not a uniquely human attribute? The ongoing work on the integration of AI into the workforce and in support of CEO decision-making could inform the Army’s investment strategy for AI. Julianne Gallina, one of our proclaimed Mad Scientists, described a future where everyone would have an entourage and Commanders would have access to a “Patton in the Pocket.” How the human operates on or in the loop, and how Commanders make decisions at machine speed, will be informed by this research. In August, the Mad Scientist team will conduct a conference focused on Learning in 2050 to further explore the ideas of human and AI teaming with intelligent tutors and mentors.

Source: Doubleday

2. Origin: A Novel, by Dan Brown, Doubleday, October 3, 2017, reviewed by Ms. Marie Murphy.

Dan Brown’s famous symbologist Robert Langdon returns to avenge the murder of his friend, tech developer and futurist Edmund Kirsch. After Kirsch is killed in the middle of presenting what he advertised as a life-changing discovery, Langdon teams up with Kirsch’s most faithful companion, his AI assistant Winston, in order to release Edmund’s presentation to the public. Winston is able to access Kirsch’s entire network, give real-time directions, and make decisions based on ambiguous commands — all via Kirsch’s smartphone. However, this AI system doesn’t appear to know Kirsch’s personal password, and can only enable Langdon in his mission to find it. An omnipresent and portable assistant like Winston could greatly aid future warfighters and commanders. Having this scope of knowledge on command is beneficial, but future AI will be able not only to regurgitate data, but to present the Soldier with courses of action analyses and decision options based on the data. Winston was also able to mimic emotion via machine learning, which can reduce Soldier stress levels and present information in a humanistic manner. Once an AI has been attached to a Soldier for a period of time, it can learn the particular preferences and habits of that Soldier, and make basic or routine decisions and assumptions for that individual, anticipating their needs, as Winston does for Kirsch and Langdon.

Source: Getty Images adapted by CNAS

3. Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority, by Richard Danzig, Center for a New American Security, 30 May 2018.

Mad Scientist Laboratory readers are already familiar with the expression, “warfare at machine speed.” As our adversaries close the technology gap and potentially overtake us in select areas, there is clearly a “need for speed.”

“… speed matters — in two distinct dimensions. First, autonomy can increase decision speed, enabling the U.S. to act inside an adversary’s operations cycle. Secondly, ongoing rapid transition of autonomy into warfighting capabilities is vital if the U.S. is to sustain military advantage.” — Defense Science Board (DSB) Report on Autonomy, June 2016 (p. 3).

In his monograph, however, author and former Clinton Administration Secretary of the Navy Richard Danzig contends that “superiority is not synonymous with security,” citing the technological proliferation that almost inevitably follows technological innovations and the associated risks of unintended consequences resulting from the loss of control of military technologies. Contending that the pursuit of speed is a form of technological roulette, former Secretary Danzig proposes a control methodology of five initiatives to help mitigate the associated risks posed by disruptive technologies, and calls for increased multilateral planning with both our allies and opponents. Unfortunately, as with the doomsday scenario played out in Nevil Shute’s novel On the Beach, it is “… the little ones, the Irresponsibles…” that have propagated much of the world’s misery in the decades following the end of the Cold War. It is these Irresponsible nations, along with non-state actors and Super-Empowered Individuals, experimenting with and potentially unleashing disruptive technologies, that will not be contained by any non-proliferation protocols or controls. Neither will our near-peer adversaries, if these technologies promise to offer a revolutionary, albeit fleeting, Offset capability.

U.S. Vice Chairman of the Joint Chiefs of Staff Air Force Gen. Paul Selva, Source: Alex Wong/Getty Images

4. The US made the wrong bet on radiofrequency, and now it could pay the price, by Aaron Mehta, C4ISRNET, 21 Jun 2018.

This article illustrates how the Pentagon’s faith in its own technology led the Department of Defense to trust that it would maintain dominance over the electromagnetic spectrum for years to come. That decision left the United States vulnerable to new leaps in technology made by our near-peers. GEN Paul Selva, Vice Chairman of the Joint Chiefs of Staff, has concluded that the Pentagon must now catch up with near-peer nations and reestablish our dominance in electronic warfare and networking (spoiler alert – we currently are not dominant!). This is an example of a pink flamingo (a known known), as we know our near-peers have surpassed us technologically in some areas. In looking at technological forecasts for the next decade, we must ensure that the U.S. is making the right investments in Science and Technology to keep up with our near-peers. This article demonstrates that timely and decisive policy-making will be paramount in keeping up with our adversaries in the fast-changing and agile Operational Environment.

Source: MIT CSAIL

5. MIT Device Uses WiFi to ‘See’ Through Walls and Track Your Movements, by Kaleigh Rogers, MOTHERBOARD, 13 June 2018.

Researchers at MIT have discovered a way to “see” people through walls by tracking WiFi signals that bounce off of their bodies. Previously, the technology’s fidelity was limited to “blobs” behind a wall, essentially telling you that someone was present but giving no indication of their behavior. The breakthrough is using a trained neural network to identify the bouncing signals and compare them with the shape of the human skeleton. This is significant because it could give an added degree of specificity to first responders or fire teams clearing rooms. The ability to determine whether an individual on the other side of the wall is potentially hostile and holding a weapon, or a non-combatant holding a cellphone, could be the difference between life and death. This also brings up questions about countermeasures. WiFi signals are seemingly everywhere and, with this technology, could prove to be a large signature emitter. Will future forces need to incorporate uniforms or materials that absorb these waves or scatter them in a way that distorts them?
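As a rough illustration of the pipeline the article describes (radio reflections in, skeleton keypoints out), here is a minimal sketch of a convolutional network that maps RF heatmaps to per-joint keypoint heatmaps. The architecture, tensor shapes, and names are assumptions for illustration, not MIT CSAIL’s actual implementation.

```python
# Rough sketch of the idea behind RF-based pose estimation: a convolutional
# network maps radio-reflection heatmaps to per-joint keypoint heatmaps.
# Layer sizes, input shapes, and names are illustrative assumptions only,
# not the actual MIT CSAIL implementation.
import torch
import torch.nn as nn

class RFPoseSketch(nn.Module):
    def __init__(self, num_keypoints=14):
        super().__init__()
        self.encoder = nn.Sequential(                 # encode RF reflections
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64, num_keypoints, 1)   # one heatmap per joint

    def forward(self, rf_frames):
        # rf_frames: (batch, 2, H, W), e.g., two antenna-plane heatmaps
        return self.head(self.encoder(rf_frames))

model = RFPoseSketch()
fake_rf = torch.randn(1, 2, 64, 64)   # stand-in for a captured RF heatmap
keypoints = model(fake_rf)            # shape: (1, 14, 64, 64)
print(keypoints.shape)
```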

Source: John T. Consoli / University of Maryland

6. People recall information better through virtual reality, says new UMD study, University of Maryland, EurekAlert!, 13 June 2018.

A study performed by the University of Maryland determined that people recall information better when seeing it first in a 3D virtual environment, as opposed to on a 2D desktop or mobile screen. The Virtual Reality (VR) system takes advantage of what’s called “spatial mnemonic encoding,” which allows the brain not only to remember something visually, but to assign it a place in three-dimensional space, aiding retention and recall. This technique could accelerate learning and enhance retention when we train our Soldiers and Leaders. As VR hardware becomes smaller, lighter, and more affordable, custom mission sets, or the skills necessary to accomplish them, could be learned on the fly, in theater, on a compressed timeline. This also allows education to be distributed and networked globally without the need for a traditional classroom.

Source: Potomac Books

7. Strategy Strikes Back: How Star Wars Explains Modern Military Conflict, edited by Max Brooks, John Amble, ML Cavanaugh, and Jaym Gates; Foreword by GEN Stanley McChrystal, Potomac Books, May 1, 2018.

This book is fascinating for two reasons: 1) It utilizes one of the greatest science fiction series (almost a genre unto itself) to brilliantly illustrate some military strategy concepts, and 2) It is chock full of Mad Scientists as contributors. One of the editors, John Amble, is a permanent Mad Scientist team member, while another, Max Brooks, author of World War Z, and contributor August Cole are officially proclaimed Mad Scientists.

The book takes a number of scenes and key battles in Star Wars and uses historical analogies to help present complex issues like civil-military command structure, counterinsurgency pitfalls, force structuring, and battlefield movement and maneuver.

One of the more interesting portions of the book is the concept of ‘droid armies vs. clone soldiers and the juxtaposition of that with the future testing of manned-unmanned teaming (MUM-T) concepts. There are parallels in how we think about what machines can and can’t do and how they think and learn.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

49. “The Queue”

(Editor’s Note: Beginning today, the Mad Scientist Laboratory will publish a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we will address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!)

1. Army of None: Autonomous Weapons and the Future of War, by Paul Scharre, Senior Fellow and Director of the Technology and National Security Program, Center for a New American Security.

One of our favorite Mad Scientists, Paul Scharre, has authored a must-read for all military Leaders. This book will help Leaders understand the definitions of robotic and autonomous weapons; how they are proliferating across states, non-states, and super-empowered individuals (his chapter on Garage Bots makes it clear this proliferation is not analogous to state-based proliferation); and lastly, the ethical considerations that come up at every Mad Scientist Conference. During these Conferences, we have discussed the idea of algorithm vs. algorithm warfare and what role human judgement plays in this version of future combat. Paul’s chapters on flash war really challenge our ideas of how a human operates in the loop, and his analogies using the financial markets are helpful for developing the questions needed to explore future possibilities and develop policies for dealing with warfare at machine speed.

Source: Rosoboronexport via YouTube
2. “Convergence on retaining human control of weapons systems,” in Campaign to Stop Killer Robots, 13 April 2018.

April 2018 marked the fifth anniversary of the Campaign to Stop Killer Robots. Earlier this month, 82 countries and numerous NGOs convened at the Convention on Certain Conventional Weapons (CCW) in Geneva, Switzerland, where many stressed the need to retain human control over weapons systems and the use of force. While the majority in attendance proposed moving forward this November to start negotiations towards a legally binding protocol addressing fully autonomous weapons, five key states rejected moving forward in negotiating new international law – France, Israel, Russia, the United Kingdom, and the United States. Mad Scientist notes that the convergence of a number of emerging technologies (synthetic prototyping, additive manufacturing, advanced modeling and simulations, software-defined everything, advanced materials) is advancing both the feasibility and democratization of prototype warfare, enabling and improving the engineering of autonomous weapons by non-state actors and super-empowered individuals alike. The genie is out of the bottle – with the advent of the Hyperactive Battlefield, advanced engagements will collapse the decision-action cycle to mere milliseconds, granting a decisive edge to the side with more autonomous decision-action.

Source: The Stack
3. “China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems,” by Elsa Kania, Adjunct Fellow with the Technology and National Security Program, Center for a New American Security, in Lawfare, 17 Apr 18.

Mad Scientist Elsa Kania addresses the People’s Republic of China’s apparent juxtaposition between its diplomatic commitment to limit the use of fully autonomous lethal weapons systems and the PLA’s active pursuit of AI dominance on the battlefield. The PRC’s decision on lethal autonomy, and how it defines the role of human judgement in lethal operations, will have tactical, operational, and strategic implications. In TRADOC’s Changing Character of Warfare assessment, we addressed the idea of an asymmetry in ethics, where the differing ethical choices non-state and state adversaries make on the integration of emerging technologies could have real battlefield overmatch implications. This is a clear pink flamingo, where we know the risks but struggle with addressing the threat. It is also an area where technological surprise is likely, as systems could have the ability to move from human-in-the-loop mode to fully autonomous with the flip of a switch.

Source: HBO.com
4. “Maeve’s Dilemma in Westworld: What Does It Mean to be Free?,” by Marco Antonio Azevedo and Ana Azevedo, in Institute of Art and Ideas, 12 Apr 18. [Note: Best viewed on your personal device as access to this site may be limited by Government networks]

While this article focuses primarily on a higher-level philosophical interpretation of human vs. machine (or artificial intelligence, being, etc.), the core arguments and discussion remain relevant to an Army that is looking to increase its reliance on artificial intelligence and robotics. Technological advancements in these areas continue to trend toward modeling humans (both in physical form and in the brain). However, the closer we get to making this a reality, the closer we get to confronting questions about consciousness and artificial humanity. Are we prepared to face these questions earnestly? Do we want an artificial entity that is, essentially, human? What do we do when that breakthrough occurs? Does biological vs. synthetic matter if the being “achieves” personhood? For additional insights on this topic, watch Linda MacDonald Glenn‘s Ethics and Law around the Co-Evolution of Humans and AI presentation from the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University, 25-26 Jul 17.

5. Do You Trust This Computer?, directed by Chris Paine, Papercut Films, 2018.

The Army, and society as a whole, continues to offload certain tasks to, and receive information from, artificial intelligence sources. Future Army Leaders will be heavily influenced by AI processing and distributing the information used for decision making. But how much trust should we put in the information we get? Is it safe to be so reliant? What is the correct ratio of human/machine contribution to decision-making? Army Leaders need to be prepared to make AI one tool of many, understand its value, know how to interpret its information, know when to question its output, and apply appropriate context. Elon Musk has shown his support for this documentary and tweeted about its importance.

6. Ready Player One, directed by Steven Spielberg, Amblin Entertainment, 2018.

Adapted from the novel of the same name, this film visualizes a future world where most of society is consumed by a massive online virtual reality “game” known as the OASIS. As society transitions from the physical to the virtual (texting, email, Skype, MMORPGs, Amazon, etc.), large groups of people will become less reliant on the physical world’s governmental and economic systems that have been established for centuries. As virtual money begins to have real value, physical money will begin to lose value. If people can get many of their goods and services through a virtual world, they will become less reliant on the physical world. Correspondingly, physical world social constructs will have less control over the people who still inhabit it but spend increasing amounts of time interacting in the virtual world. This has huge implications for the future geo-political landscape, as many varied and geographically diverse groups of people will begin congregating and forming virtual allegiances across all of the pre-established, but increasingly irrelevant, physical world geographic borders. This will dilute the effectiveness, necessity, and control of the nation-state and transfer that power to the company(ies) facilitating the virtual environment.

Source: XO, “SoftEcologies,” suckerPUNCH
7. “US Army could enlist robots inspired by invertebrates,” by Bonnie Burton, in c/net, 22 Apr 18.

As if Boston Dynamics’ SpotMini isn’t creepy enough, the U.S. Army Research Laboratory (ARL) and the University of Minnesota are developing a flexible, soft robot inspired by squid and other invertebrates that Soldiers can create on-demand using 3-D printers on the battlefield. Too often, media visualizations have conditioned us to think of robots in anthropomorphic terms (with corresponding limitations). This and other breakthroughs in “soft,” polymorphic, printable robotics may provide Soldiers in the Future Operational Environment with hitherto unimagined on-demand, tailorable autonomous systems that will assist operations in the tight confines of complex, congested, and non-permissive environments (e.g., dense urban and subterranean). Soft robotics may also prove to be more resilient in arduous conditions. This development changes the paradigm for how robotics are imagined in both design and application.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

For additional insights into the Mad Scientist Initiative and how we continually explore the future through collaborative partnerships and continuous dialogue with academia, industry, and government, check out this Spy Museum’s SPYCAST podcast.