[Editor’s Note: Mad Scientist Laboratory is pleased to present our latest edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Operational Environment (OE). We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
1. “A brain-controlled exoskeleton has let a paralyzed man walk in the lab,” by Charlotte Jee, MIT Technology Review, 4 October 2019.
“Amputees merge with their bionic leg,” in ScienceDaily, 2 October 2019.
“Boston Dynamics’ Atlas can now do an impressive gymnastics routine,” by Jon Porter, The Verge, 24 September 2019.
So what is the OE nexus for a new experimental capability that restores mobility to a quadriplegic man, prosthetics that provide realistic sensory feedback, and a bipedal robot that can split-kick better than David Lee Roth in his Van Halen heyday? Scientists from Swiss universities ETH Zürich and EPFL and the latter’s spin-off SensArs Neuroprosthetics have fitted three amputees with prosthetic legs that provide sensory feedback, enabling subjects to feel the device. “Thanks to detailed sensations from [the] sole of the artificial foot and from the artificial knee, all three patients could maneuver through obstacles without the burden of looking at their artificial limb as they walked. They could stumble over objects yet mitigate falling. Most importantly, brain imaging and psychophysical tests confirmed that the brain is less solicited with the bionic leg, leaving more mental capacity available to successfully complete the various tasks.” Meanwhile, researchers at Grenoble University Hospital and Clinatec in France implanted an epidural wireless brain-machine interface into a paralyzed subject, enabling him to walk again via a four-limb neuroprosthetic exoskeleton in a laboratory proof-of-concept demonstration. Per MIT Technology Review, “researchers need to find a way to get the suit to safely balance itself before it can be used outside the laboratory.” Here’s where Boston Dynamics’ Atlas comes in — its “model predictive controller” allows the robot to anticipate forward momentum to “blend… one maneuver to the next,” without losing balance.
These three weak signals presage a potential dismounted Manned-Unmanned Teaming (MUM-T) capability on a not-so-distant future battlefield, with Soldiers in the rear area (or even the Strategic Support Area!) controlling whole platoons of agile fighting systems via cerebral interfaces at what was once the bleeding edge of combat. This nimble, semi-autonomous close-quarters combat capability could keep future Warfighters out of harm’s way on particularly hazardous, “forlorn hope”-type assaults against heavily defended positions, while simultaneously maintaining humans-in-the-loop in future conflicts.
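The “model predictive controller” idea behind Atlas’ balance can be illustrated in miniature: plan a short trajectory ahead, apply only the first control, then re-plan at the next step. The sketch below is purely our toy assumption (a 1-D point mass steadied toward a set-point by grid search over a few candidate accelerations), not Boston Dynamics’ actual controller:

```python
# Minimal receding-horizon ("model predictive") control sketch.
# Illustrative only: a 1-D double integrator, NOT a humanoid robot model.

DT = 0.1          # timestep in seconds (assumed)
HORIZON = 5       # how many steps ahead the controller plans
CANDIDATES = [-1.0, -0.5, 0.0, 0.5, 1.0]  # candidate accelerations

def step(state, u):
    """Advance the point mass one step: state = (position, velocity)."""
    x, v = state
    return (x + v * DT, v + u * DT)

def cost(state):
    """Penalize deviation from the balanced set-point (x = 0, v = 0)."""
    x, v = state
    return x * x + v * v

def rollout_cost(state, u, depth):
    """Apply u once, then the best follow-up controls, summing cost."""
    state = step(state, u)
    total = cost(state)
    if depth > 1:
        total += min(rollout_cost(state, u2, depth - 1) for u2 in CANDIDATES)
    return total

def mpc_control(state):
    """Pick the first control of the lowest-cost short trajectory."""
    return min(CANDIDATES, key=lambda u: rollout_cost(state, u, HORIZON))

def simulate(state, steps):
    """Run the plan/apply/re-plan loop for a fixed number of steps."""
    for _ in range(steps):
        state = step(state, mpc_control(state))
    return state
```

The key trait, and the reason this family of controllers can “blend” one maneuver into the next, is that only the first planned control is ever executed before the whole plan is thrown away and recomputed from the new state.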
2. “Coming Soon to a Battlefield: Robots That Can Kill,” by Zachary Fryer-Biggs, The Atlantic, 3 September 2019.
This detailed and engrossing piece by defense and technology reporter Zachary Fryer-Biggs not only provides an in-depth look at robotics work being conducted across the Department of Defense, but also helps to visualize and contextualize what a battlefield of the future looks like. The article draws a clear distinction between what is actually being explored, built, and tested by the DoD and the dystopian nightmare often portrayed in movies like The Terminator. It addresses a number of current technologies already on the pathway to autonomy – the U.S. Navy’s Phalanx Close-in Weapon System (CIWS) and Sea Hunter autonomous Unmanned Surface Vehicle (USV), and Israel’s Harpy autonomous anti-radiation loitering munition.
The article quotes several prominent figures in the DoD Artificial Intelligence (AI) innovation space, such as Mr. Bob Work (former Deputy Secretary of Defense) and Dr. Bill Roper (former Director of the Strategic Capabilities Office). Mr. Work observes, “AI will make weapons more discriminant and better, less likely to violate the laws of war, less likely to kill civilians, less likely to cause collateral damage.” In a similar vein, Dr. Roper stated that the country that integrates AI into its arsenal first might have “an advantage forever.”
One of the biggest questions this article raises is: at what point does autonomy become completely self-deciding? Many of the technologies featured still struggle with contextual understanding and complex cognitive processes that humans breeze through. The current challenge is not intelligent machines capable of eliminating humans – it’s that the machines aren’t yet smart enough or capable of complex decision-making. As Dr. Fei-Fei Li, the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute, noted, “human vision is enormously complex and includes capabilities such as reading facial movements and other subtle cues.” But visual, facial, and object recognition in AI is improving rapidly; we could see more autonomous systems on the battlefield sooner than anticipated. Senior military leadership will be challenged to choose an approach: human-in-the-loop, human-out-of-the-loop, or human-starts-the-loop. Additionally, how much must be divested from manned or optionally-manned systems and invested in autonomous and optionally-teleoperated systems?
3. “China’s AI Talent Base Is Growing, and then Leaving,” by Joy Dantong Ma, MacroPolo.org, 30 July 2019.
MacroPolo, the in-house think tank of Chicago’s Paulson Institute, focuses on the U.S.-China economic relationship. The author finds that China’s initiative to become a global leader in Artificial Intelligence is excelling at producing talent, but failing at retaining it. She draws on acceptance data from the NeurIPS conference, one of the premier events gathering researchers who explore artificial neural networks; prospective attendees submit related research papers for consideration to be invited. Analysis of the data shows that 2,800 Chinese scientists were accepted over the past ten years. Of those, more than 2,000 were working outside of China, and 85% of them were working in the U.S. So while China has made great strides and invested heavily in creating a population of AI researchers, scientists, and engineers, it has not been able to insulate itself from the competition for AI talent raging outside its borders. In 2017, China took steps to curb this exodus by offering incentives and better compensation, but it remains to be seen whether that is enough to compete with some of the most innovative and sought-after companies in the world, like Google, IBM, and Microsoft.
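The article’s figures can be sanity-checked with quick arithmetic, treating the “more than 2,000” as a lower bound as the piece does:

```python
# Back-of-the-envelope check of the NeurIPS talent figures cited above:
# 2,800 Chinese scientists accepted over ten years; 2,000+ working outside
# China; 85% of those expatriates working in the U.S.
accepted = 2800
abroad = 2000                      # lower bound from the article
in_us = round(abroad * 0.85)       # expatriates based in the U.S.

share_abroad = abroad / accepted   # fraction of all accepted working abroad
share_in_us = in_us / accepted     # fraction of all accepted working in the U.S.
```

By these figures, at least 1,700 of the accepted scientists, roughly three in five, were based in the U.S., which underscores the scale of the outflow the author describes.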
For the Army, this is both an opportunity and a challenge. As the Army increasingly develops and integrates AI into formations, the need for the highest quality technical personnel will follow. A larger pool of experts available to the Army will allow for quicker and better solutions to problems Soldiers face on the battlefield. However, the Army will still be in competition with the same industry that China is losing to, while simultaneously challenged by the prospect of Chinese intellectual property theft and industrial espionage. How can the Army and the United States at large best take advantage of and retain this resource? What are the potential security implications? Even if the experts are here, can the U.S. compete with the tech industry to recruit the best and brightest AI talent?
4. “Tree Planting Drones are Firing ‘Seed Missiles’ Into the Ground,” by Leo Shvedsky, Good, 17 April 2019.
In an effort to combat climate change, tech company BioCarbon Engineering has outfitted commercial drones with pre-germinated seed pods that they disperse and plant by firing them into the ground. This method is cheaper and dramatically faster than planting by hand. From a dual-use perspective, one can see many nefarious possibilities. Just as a drone can spread seeds waiting to sprout, it could just as easily spread a biological or chemical agent. The light payload of seeds used today could be replaced with a payload of powder or liquid that could infect humans, livestock, crops, or waterways. Further, this method, employed surreptitiously, could be almost undetectable. Generally, when we think of weapons, we think of devices designed to destroy or kill, but with so many commercial products offering advanced capabilities and access, it’s becoming easier to achieve the same effects with none of the footprint.
5. “Netflix’s ‘Unnatural Selection’ Trailer Makes Crispr Personal,” by Megan Molteni, WIRED, 4 October 2019.
Netflix’s new four-part docuseries, which debuted last week (18 October 2019), explores the democratization of genomic engineering — “Using the bacterial quirk that is CRISPR, scientists have essentially given anyone with a micropipette and an internet connection the power to manipulate the genetic code of any living thing.” This scientific revolution raises a host of new political, legal, and moral concerns that public policy, laws, and opinion are only now beginning to address. As research using this game-changing technology becomes an ever-more international enterprise, distinctions in cultural norms, mores, and practices will be challenged. Within the brave new world of genetic engineering facilitated by CRISPR, all of the bright possibilities of revolutionary new treatments and cures for disease are tempered by equally dark prospects when coupled with nefarious intent.
Calls to control open-source research and counter the potential use of gene editing to produce biological weapons and/or affect global health may not be sufficiently encompassing. As Mad Scientist has previously explored, a growing community of individual biohackers and DIYers are pushing the boundaries of DNA editing, implants, embedded technologies, and unapproved chemical and biological injections. Nor is this limited to the DIY community: China has announced its intent to become a global superpower, and gene editing is one area where it seeks to leap ahead of the United States. Its commitment to this objective is evidenced in its gene editing of 86 individuals and the births of “CRISPR babies.” In comparison, the United States is only now approaching human genomic trials (in a well-regulated environment).
Ethical principles are not standardized across cultures. State and non-state actors alike are now able to weaponize biotechnology with relative ease. The decisions we make today are crucial as we articulate, implement, and enforce public policy and international laws governing genetic engineering. These decisions will affect Soldiers and civilians alike, who face the threat of non-kinetic, genetically engineered capabilities both on the battlefield and in our Homeland.
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: firstname.lastname@example.org — we may select it for inclusion in our next edition of “The Queue”!