[Editor’s Note: Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]
“We thought that we had the answers, it was the questions we had wrong” – Bono, U2
As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is exactly that: an ideal. These algorithms carry underlying biases embedded in their code, imparted (consciously or unconsciously) by their human programmers, and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi of Boston University recently described algorithms as incapable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
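The point that "data is the ultimate determining factor" can be shown with a deliberately minimal sketch. The data and the toy majority-rule "model" below are hypothetical illustrations, not any fielded system: a learner that simply mirrors the frequencies in its training data will reproduce whatever skew that data contains, with no notion that the skew is a bias.

```python
from collections import Counter

# Hypothetical training data: (feature, label) pairs with a built-in skew --
# 90% of group "A" examples were historically labeled "deny".
training_data = ([("A", "deny")] * 90 + [("A", "approve")] * 10
                 + [("B", "approve")] * 90 + [("B", "deny")] * 10)

def train_majority_classifier(data):
    """Learn the most frequent label for each feature value."""
    by_feature = {}
    for feature, label in data:
        by_feature.setdefault(feature, Counter())[label] += 1
    return {f: counts.most_common(1)[0][0] for f, counts in by_feature.items()}

model = train_majority_classifier(training_data)
print(model)  # {'A': 'deny', 'B': 'approve'} -- the skew in the data
              # has become the model's decision rule
```

Real machine learning models are far more sophisticated, but the failure mode is the same in kind: the model optimizes fidelity to its training data, so any bias in that data is faithfully learned rather than filtered out.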
Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).
“It is difficult to make predictions, particularly about the future.” This quote has been attributed to anyone ranging from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.
The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at: firstname.lastname@example.org.
1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that discriminates on the basis of gender, ethnicity, race, or even age?
2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?
3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?
4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forgo concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?
For additional information impacting on this important discussion, please see the following:
[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!
Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!
In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six-month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger for the latter half of FY18! In recognition of his achievement, Sam will receive much coveted Mad Scientist swag.
While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped, developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation. Russia is learning how to integrate robotic lethal ground combat systems….
Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]
Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.
Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”
But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine if they can correspond to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create the demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs. The MOD is also running conferences such as the annual “Robotization of the Armed Forces” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.
This brings us to one of the more interesting developments in Russian UGVs. Then Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.
Just as importantly, it was designed to operate at a distance of up to three kilometers (3000 meters or about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled and it was definitely designed to survive battlefield realities….
However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:
• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.
• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.
• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.
• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.
• The OCH-4 optical system did not allow for adequate detection of the adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.
• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which largely defeated its very purpose as a combat vehicle.
• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.
On one hand, these many failures are a sign that this much-discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it gets put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.
However, the MOD has been testing this Uran-9 for a while now, showing videos of this machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria was the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly-released testing video probably foretold some of the Syrian glitches – in this particular one, the Uran-9 is shown firing its machine guns while moving, but its cannon was fired only when the vehicle was stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles around, not the kind of complex terrain and dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions – unless, of course, the Syrian tests effectively limit its use in future combat.
On the other hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological aspects that are ubiquitous worldwide at this point in the global development of similar unmanned systems:
• Low level of current UGV autonomy;
• Low level of automation of command and control processes of UGV management, including repairs and maintenance;
• Low communication range, and;
• Problems associated with “friend or foe” target identification.
Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:
• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.
• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).
• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.
• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.
The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:
• Creating secure communication channels;
• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;
• Developing miniaturized and effective ISR components;
• Integrating automated command and control systems, and;
• Better optics, electronics and data processing systems.
According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operate autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.
Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a sign for developers of other Russian unmanned ground vehicles – like Soratnik, Vihr, and Nerehta — since it displays the full range of deficiencies that take place outside of well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment — this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during Uran-9 tests exposed a serious combat weakness.
It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, if it can perform its mission autonomously in the rapidly-changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards achieving such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in one-time attack mode against fortified adversary positions or firing points track closely with the ways Western military analysts are thinking such weapons could be used in combat.
The Uran-9 is still a test bed, and much has to take place before it could be successfully integrated into the current Russian concept of operations. We can expect more eye-opening “lessons learned” from the potential deployment of this and other UGVs in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.
To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.
Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference. The views expressed here are his own.
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]
No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.
Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]
Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”
Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.
The nature of something is the basic makeup of that thing. It is, at core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl von Clausewitz, what his views are on the matter.
He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.
The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past, and remains unfulfilled. The chaos of war, its fog, friction, and chance will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to age-old practices of denial, deception, and camouflage.
The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.
Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.
Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.
Not to mention the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is what tools they use to execute that mission.
To learn more about the changing character of warfare:
– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.
– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.
LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.
Note: The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916. Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War. (Source: Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)
[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6.
[II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, Machine-Waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/.
[III] Ibid.
[IV] Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85.
[V] Ibid, 87.
[VI] Ibid.
[VII] Ibid, 86.
[VIII] John Allen and Amir Husain, “On Hyper-War,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.
On 19-20 June 2018, the U.S. Army Training and Doctrine Command (TRADOC) Mad Scientist Initiative co-hosted the Installations of the Future Conference with the Office of the Assistant Secretary of the Army for Installations, Energy and Environment (OASA (IE&E)) and Georgia Tech Research Institute (GTRI). Emerging technologies supporting the hyper-connectivity revolution will enable improved training capabilities, security, readiness support (e.g., holistic medical facilities and brain gyms), and quality of life programs at Army installations. Our concepts and emerging doctrine for multi-domain operations recognize this as increasingly important by including Army installations in the Strategic Support Area. Installations of the Future will serve as mission command platforms to project virtual power and expertise as well as Army formations directly to the battlefield.
We have identified the following “Top 10” takeaways related to our future installations:
1. Threats and Tensions. “Army Installations are no longer sanctuaries” — Mr. Richard G. Kidd IV, Deputy Assistant Secretary of the Army, Strategic Integration. There is a tension between openness and security that will need balancing to take advantage of smart technologies at our Army installations. The revolution in connected devices and the ability to virtually project power and expertise will increase the potential for adversaries to target our installations. Hyper-connectivity increases the attack surface for cyber-attacks and the access to publicly available information on our Soldiers and their families, making personalized warfare and the use of psychological attacks and deep fakes likely.
2. Exclusion vs. Inclusion. The role of and access to future Army installations depends on the balance between these two extremes. The connections between local communities and Army installations will increase potential threat vectors, but resilience might depend on expanding inclusion. Additionally, access to specialized expertise in robotics, autonomy, and information technologies will require increased connections with outside-the-gate academic institutions and industry.
3. Infrastructure Sensorization. Increased sensorization of infrastructure runs the risk of driving efficiencies to the point of building in unforeseen risks. In the business world, these efficiencies are profit-driven, with clearer risks and rewards. Use of tabletop exercises can explore hidden risks and help Garrison Commanders to build resilient infrastructure and communities. Automation can cause cascading failures as people begin to fall “out of the loop.”
4. Army Modernization Challenge. Installations of the Future is a microcosm of overarching Army Modernization challenges. We are simultaneously invested in legacy infrastructure that we need to upgrade, and making decisions to build new smart facilities. Striking an effective and efficient balance will start with public-private partnerships to capture the expertise that exists in our universities and in industry. The expertise needed to succeed in this modernization effort does not exist in the Army. There are significant opportunities for Army Installations to participate in ongoing consortiums like the “Middle Georgia” Smart City Community and the Global Cities Challenge to pilot innovations in spaces such as energy resilience.
5. Technology is outpacing regulations and policy. The sensorization and available edge analytics in our public space offers improved security but might be perceived as decreasing personal privacy. While we give up some personal privacy when we live and work on Army installations, this collection of data will require active engagement with our communities. We studied an ongoing Unmanned Aerial System (UAS) support concept to detect gunshot incidents in Louisville, KY, to determine the need to involve legislatures, local political leaders, communities, and multiple layers of law enforcement.
6. Synthetic Training Environment. The Installation of the Future offers the Army significant opportunities to divest itself of large brick and mortar training facilities and stove-piped, contractor support-intensive Training Aids, Devices, Simulations, and Simulators (TADSS). MG Maria Gervais, Deputy Commanding General, Combined Arms Center – Training (DCG, CAC-T), presented the Army’s Synthetic Training Environment (STE), incorporating Virtual Reality (VR), “big box” open-architecture simulations using a One World Terrain database, and reduced infrastructure and contractor-support footprints to improve Learning and Training. The STE, delivering high-fidelity simulations and the opportunity for our Soldiers and Leaders to exercise all Warfighting Functions across the full Operational Environment with greater repetitions at home station, will complement the Live Training Environment and enhance overall Army readiness.
7. Security Technologies. Many of the security-oriented technologies (autonomous drones, camera integration, facial recognition, edge analytics, and Artificial Intelligence) that triage and fuse information will also improve our deployed Intelligence, Surveillance, and Reconnaissance (ISR) capabilities. The Chinese lead the world in these technologies today.
8. Virtual Prototyping. The U.S. Army Engineer Research and Development Center (ERDC) is developing a computational testbed using virtual prototyping to determine the best investments for future Army installations. The four drivers in planning for Future Installations are: 1) Initial Maneuver Platform (Force Projection); 2) Resilient Installations working with their community partners; 3) Warfighter Readiness; and 4) Cost effectiveness in terms of efficiency and sustainability.
9. Standard Approach to Smart Installations. A common suite of tools is needed to integrate smart technologies onto installations. While Garrison Commanders need mission command to take advantage of the specific cultures of their installations and surrounding communities, the Army cannot afford to have installations going in different directions on modernization efforts. A method is needed to rapidly pilot prototypes and then determine whether and how to scale the technologies across Army installations.
10. “Low Hanging Fruit.” There are opportunities for Army Installations to lead their communities in tech integration. Partnerships in energy savings, waste management, and early 5G infrastructure provide the Army with early adopter opportunities for collaboration with local communities, states, and across the nation. We must educate contracting officers and Government consumers to look for and seize upon these opportunities.
Videos from each of the Installations of the Future Conference presentations are posted here. The associated slides will be posted here within the week on the Mad Scientist All Partners Access Network site.
If you enjoyed this post, check out the following:
Mad Scientist Laboratory is pleased to announce that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Mad Scientist Installations of the Future Conference this week (Tuesday and Wednesday, 19-20 June 2018) with the Office of the Assistant Secretary of the Army for Installations, Energy and Environment (OASA (IE&E)) and Georgia Tech Research Institute (GTRI) at GTRI in Atlanta, Georgia.
Plan now to join us virtually as leading scientists, innovators, and scholars from academia, industry, and government gather to discuss:
1) Current and emerging threat vectors facing installations,
2) “Smart city” opportunities born of technology,
3) Logistics and power projection, and
4) Quality of life.
Presentations will be driven by the following research questions:
1) What are the emerging threat vectors capable of targeting installations and what are the implications to the multi-domain fight?
2) How will mission command and the concept of virtual power be enhanced by smart installations?
3) How will other trends such as localized manufacturing, augmented/virtual reality, and artificial intelligence change how Soldiers will train, sustain, and project power from smart installations?
4) What are the big impact quality of life improvements available through smart technologies?
– Review the conference agenda’s list of presentations and the associated world-class speakers’ biographies here.
– Read our Call for Ideas finalists’ submissions here, graciously hosted by our colleagues at Small Wars Journal.
[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following guest blog post by Mr. Lewis Jones. Originally a “Letter Home” submission to the Call for Ideas associated with the Mad Scientist Installations of the Future Conference (see more information about this event at the end of this post), we hope that you will enjoy Mr. Jones’ vision of a mid-Twenty-First Century forward deployed base.]
Hey Dad, guess who got new PCS orders! From March 2042 I’ll be assigned to Joint Base Harris in Japan. You spent your early career in Japan, right? I’ll never forget your stories about Camp Zama, a sprawling installation housing hundreds of soldiers and civilians. I used to love hearing about the 2020s, when enemy sensors, drones, and artificial intelligence first wreaked havoc on operations there.
Remember the Garrison commander whose face was 3D-scanned by a rigged vending machine near the gate? The enemy released that humiliating video right before a major bilateral operation. By the time we proved it was fake, our partners had already withdrawn.
What about the incident at the intel battalion’s favorite TDY hotel with a pool-side storage safe? Soldiers went swimming and tossed their wallets into the safe, unaware that an embedded scanner would clone their SIPR tokens. To make matters worse, the soldiers secured the safe with a four-digit code… using the same numbers as their token PIN.
Oh, and remember the Prankenstein A.I. attack? It scanned social media to identify Army personnel living off-base, then called local law enforcement with fake complaints. The computer-generated voice was very convincing, even giving physical descriptions based on soldiers’ actual photos. You said that one soured host-nation relations for years!
Or the drones that hovered over Camp Zama, broadcasting fake Wi-Fi hotspots. The enemy scooped up so much intelligence and — ah, you get the picture. Overseas bases were so vulnerable back then.
Well, the S1 sent me a virtual tour and the new base is completely different. When U.S. Forces Japan rebuilt its installations, those wide open bases were replaced by miniature, self-contained fortresses. Joint Base Harris, for example, was built inside a refurbished shopping mall: an entire installation, compressed into a single building!
Here’s what I saw on my virtual tour:
• The roof has solar panels and battery banks for independent power. There’s also an enormous greenhouse, launch pads for drones and helos, and a running trail.
• The ground level contains a water plant that extracts and purifies groundwater, along with indoor hydroponic farms. Special filtration units scrub the air; they’re even rated against CBRN threats.
• What was once a multi-floor parking garage is now a motor pool, firing range, and fitness complex. The gym walls are smart-screens, so you can work out in a different environment every day.
• Communications are encrypted and routed through a satellite uplink. The base even has its own cellphone tower. Special mesh in the walls prevents anybody outside from eavesdropping on emissions — the entire base is a SCIF.
• The mall’s shops and food court were replaced by all the features and functions of a normal base: nearly 2,000 Army, Air and Cyber Force troops living, working, and training inside. They even have a kitchen-bot in the chow hall that can produce seven custom meals per minute!
• Supposedly, the base extends several floors underground, but the tour didn’t show that. I guess that’s where the really secret stuff happens.
By the way, don’t worry about me feeling cooped up: Soldiers are assigned top-notch VR specs during in-processing. During the duty day, they’re only for training simulations. Once you’re off, personal use is authorized. I’ll be able to play virtual games, take virtual tours… MWR even lets you link with telepresence robots to “visit” family back home.
The sealed, self-contained footprint of this new base is far easier to defend in today’s high-tech threat environment. Some guys complain about being stuck inside, but you know what I think? If Navy sailors can spend months at sea in self-contained bases, then there’s no reason the Army can’t do the same on land!
If you were intrigued by this vision of a future Army installation, please plan on joining us virtually at the Mad Scientist Installations of the Future Conference, co-sponsored by the Office of the Assistant Secretary of the Army for Installations, Energy and Environment (OASA (IE&E)); Georgia Tech Research Institute (GTRI); and Headquarters, U.S. Army Training and Doctrine Command (TRADOC), at GTRI in Atlanta, Georgia, on 19-20 June 2018. Click here to learn more about the conference and then participate in the live-streamed proceedings, starting at 0830 EDT on 19 June 2018.
Lewis Jones is an Army civilian with nearly 15 years of experience in the Indo-Pacific region. In addition to his Japanese and Chinese language studies, he has earned a Master’s in Diplomacy and International Conflict Management from Norwich University. He has worked as a headhunter for multinational investment banks in Tokyo, as a business intelligence analyst for a DOD contractor, and has supported the Army with cybersecurity program management and contract administration. Lewis writes about geopolitics, international relations, U.S. national security, and the effects of rapid advances in technology.
[Editor’s Note: The Operational Environment (OE) is the start point for Army Readiness – now and in the Future. The OE answers the question, “What is the Army ready for?” Without the OE in training and Leader development, Soldiers and Leaders are “practicing” in a benign condition, without the requisite rigor to forge those things essential for winning in a complex, multi-domain battlefield. Building the Army’s future capabilities, a critical component of future readiness, requires this same start point. The assumptions the Army makes about the Future OE are the sine qua non start point for developing battlefield systems — these assumptions must be at the forefront of decision-making for all future investments.]
There are no facts about the future. Leaders interested in building future ready organizations must develop assumptions about possible futures and these assumptions require constant scrutiny. Leaders must also make decisions based on these assumptions to posture organizations to take advantage of opportunities and to mitigate risks. Making these decisions is fundamental to building future readiness.
1. Contested in all domains (air, land, sea, space, and cyber). Increased lethality, by virtue of ubiquitous sensors, proliferated precision, high kinetic energy weapons and advanced area munitions, further enabled by autonomy, robotics, and Artificial Intelligence (AI) with an increasing potential for overmatch. Adversaries will restrict us to temporary windows of advantage with periods of physical and electronic isolation.
2. Concealment is difficult on the future battlefield. Hiding from advanced sensors — where practicable — will require dramatic reduction of heat, electromagnetic, and optical signatures. Traditional hider techniques such as camouflage, deception, and concealment will have to extend to “cross-domain obscuration” in the cyber domain and the electromagnetic spectrum. Canny competitors will monitor their own emissions in real-time to understand and mitigate their vulnerabilities in the “battle of signatures.” Alternately, “hiding in the open” within complex terrain clutter and near-constant relocation might be feasible, provided such relocation could outpace future recon / strike targeting cycles. Adversaries will operate among populations in complex terrain, including dense urban areas.
3. Trans-regional, gray zone, and hybrid strategies with both regular and irregular forces, criminal elements, and terrorists attacking our weaknesses and mitigating our advantages. The ensuing spectrum of competition will range from peaceful, legal activities through violent, mass upheavals and civil wars to traditional state-on-state, unlimited warfare.
4. Adversaries include states, non-state actors, and super-empowered individuals, with non-state actors and super-empowered individuals now having access to Weapons of Mass Effect (WME), cyber, space, and Nuclear/Biological/Chemical (NBC) capabilities. Their operational reach will range from tactical to global, and the application of their impact from one domain into another will be routine. These advanced engagements will also be interactive across the multiple dimensions of conflict, not only across every domain in the physical dimension, but also the cognitive dimension of information operations, and even the moral dimension of belief and values.
5. Increased speed of human interaction, events, and action with democratized and rapidly proliferating capabilities means constant co-evolution between competitors. Recon / Strike effectiveness is a function of its sensors, shooters, their connections, and the targeting process driving decisions. Therefore, in a contest between peer competitors with comparable capabilities, advantage will fall to the one that is better integrated and makes better and faster decisions.
These assumptions become useful when they translate to potential decision criteria for Leaders to rely on when evaluating systems being developed for the future battlefield. Each of the following questions is fundamental to ensuring the Army is prepared to operate in the future.
1. How will this system operate when disconnected from a network? Units will be disconnected from their networks on future battlefields. Capabilities that require constant timing and precision geo-locational data will be prioritized for disruption by adversaries with capable EW systems.
2. What signature does this system present to an adversary? It is difficult to hide on the future battlefield and temporary windows of advantage will require formations to reduce their battlefield signatures. Capabilities that require constant multi-directional broadcast and units with large mission command centers will quickly be targeted and neutralized.
3. How does this system operate in dense urban areas? The physical terrain in dense urban areas and megacities creates concrete canyons isolating units electronically and physically. Automated capabilities operating in dense population areas might also increase the rate of false signatures, confusing, rather than improving, Commander decision-making. New capabilities must be able to operate disconnected in this terrain. Weapons systems must be able to slew and elevate rapidly to engage vertical targets. Automated systems and sensors will require significant training sets to reduce the rate of false signatures.
4. How does this system take advantage of open and modular architectures? The rapid rate of technological innovation will offer great opportunities to militaries capable of rapidly integrating prototypes into formations. Capabilities developed with open and modular architectures can be upgraded with autonomous and AI enablers as they mature. Early investment in closed-system capabilities will freeze Armies in a period of rapid co-evolution and leave them vulnerable to overmatch.
5. How does this capability help win in competition short of conflict with a near peer competitor? Near peer competitors will seek to achieve limited objectives short of direct conflict with the U.S. Army. Capabilities will need to be effective at operating in the gray zone as well as serving as deterrence. They will need to be capable of strategic employment from CONUS-based installations.
If you enjoyed this post, check out the following items of interest:
Join SciTech Futures’ community of experts, analysts, and creatives on 11-18 June 2018 as they discuss the logistical challenges of urban campaigns, both today and on into 2035. What disruptive technologies and doctrines will blue (and red) forces have available in 2035? Are unconventional forces the future of urban combat? Their next ideation exercise goes live 11 June 2018 — click here to learn more!
[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest iteration of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
There are no facts about the future and the future is not a linear extrapolation from the present. We inherently understand this about the future, but Leaders oftentimes seek to quantify the unquantifiable. Eliot Peper opens his Harvard Business Review article with a story about one of the biggest urban problems in New York City at the end of the 19th century – it stank! Horses were producing 45,000 tons of manure a month. The urban planners of 1898 convened a conference to address this issue, but the experts failed to find a solution. More importantly, they could not envision a future 14 years hence, when cars would outnumber horses. The urban problem of the future was not horse manure, but motor vehicle-generated pollution and road infrastructure. All quantifiable data available to the 1898 urban planners only extrapolated to more humans, horses, and manure. It is likely that any expert sharing an assumption about cars over horses would have been laughed out of the conference hall. Flash forward a century and the number one observation from the 9/11 Commission was that the Leaders and experts responsible for preventing such an attack lacked imagination. Storytelling and the science fiction genre allow Leaders to imagine beyond the numbers and broaden the assumptions needed to envision possible futures. Storytelling also helps Leaders and futurists to envision the human context around emerging technologies. For more on Science Fiction and futuring, watch Dr. David Brin‘s Mad Scientist presentation.
2. “Automated Valor,” by August Cole, Proceedings Magazine, U.S. Naval Institute, May 2018.
Fellow Mad Scientist August Cole’s short story, commissioned by the British Army Concepts Branch, explores the future of urban warfare from a refreshingly new, non-US perspective. Sparking debate about force development and military operations in the 2030s, this story portrays a vivid combat scenario in a world where autonomous weapons have proliferated. Mr. Cole’s story embraces a number of Future Operational Environment themes familiar to Mad Scientists, including combat leadership and team identity (Soldier and machine), human trust of AI decision-making, virtual and earned citizenship, deep fakes, small unit tactical operations, and multi-national Joint operations against an expansionist Chinese super power. Visualizing the future fight from this British Commonwealth perspective provides a new twist in storytelling, describing what it will mean to be a Soldier on the battlefield in 2039, depending on machine teammates in the close fight.
3. Altered Carbon, Netflix series, 2018 (based upon a 2002 novel by Richard K. Morgan) — submitted by Mad Scientist Pat Filbert.
Set 300+ years in a futuristic Earth, the show’s main character, or more to the point, his “cortical stack” (alien technology, reverse-engineered for human use that records the sum total of an individual’s consciousness) has been “imprisoned” for 250 years and is “released” back into the general population to solve a mysterious murder. At this time, AI exists in and fully interacts with both the physical and cyber domains. The show incorporates a number of aspects related to trust in AI and technology. Such aspects enable a future where combat is fought by “stored soldiers” on distant worlds using advanced technological capabilities. Some humans have accepted AI projections as near-peers, so the trust factor comes up repeatedly between the humans who accept and embrace this technology and those who remain skeptical, like Will Smith’s character in I, Robot. The implications of AI becoming sentient and capable of violence are at the core of the morality argument against AI technology. The popular acceptance of AI possessing human-like qualities would definitely be a “leap forward” in more than just technology. For additional insights on this topic, watch Mad Scientist Linda MacDonald Glenn‘s presentation.
4. “SOCOM’s Top 10 Technologies” Podcast, National Defense Magazine, National Defense Industry Association, 3 May 2018 — submitted by Marie Murphy.
This podcast provides a summary of some of the primary emerging technologies that the United States Special Operations Command (SOCOM) and the Department of Defense are developing for military application. Highlighted in the list are exoskeletons and commercial drone use in the immediate future, and quantum computing and China‘s rise to dominate the microelectronics market by 2030 in the deep future. Stew Magnuson, Editor-in-Chief of National Defense Magazine, states that technology is nearing the end of the applicability of Moore’s Law. As a result, private, profit-driven industry becomes a major consideration in the development of new scientific and technical advancements, and it will certainly be responsible for future cutting-edge technologies. Given that many innovations the military uses or seeks to apply now stem from private sector innovation, what happens when Moore’s Law expires and technology moves too quickly for military research and adaptation?
Researchers analyzed the decision-making habits of gamers who play League of Legends in order to identify and build mental models. Identifying these models will help understand how they are built and, more importantly, how they change over time as players gain proficiency from novice to expert. The researchers analyzed survey responses based on the game and compared the differences between novices, journeymen, and experts. There were clear differences in the way the mental models were organized based on experience, with experts making abstract connections and even showing signs of subnetworks. The researchers plan to use this information for better game design and the development / tailoring of training programs. The Army could leverage the potential of these mental models with neural feedback to accelerate Soldier learning, breaking the tyranny of the 10,000 hour rule of expertise. That said, this information could also prove to be a weapon in the hands of an adversary. What happens to game theory if the adversary knows how your mind works, what your proclivities are, and what courses of action you are likely to favor? What happens if the adversary can identify, based on your actions, who in your unit is a novice and who is an expert, and targets them accordingly (i.e., focusing on defeating the experts first, while leaving the less experienced)? Accessing this information could provide an adversary with an advantage that may prove the difference between success and defeat. Learn more about cognitive enhancement in fellow Mad Scientist Dr. Amy Kruse’s podcast, Human 2.0, hosted by our colleagues at Modern War Institute.
Researchers at the University of California, Berkeley, have exploited mainstream commercial Artificial Intelligence (AI) assistants (e.g., Siri, Alexa, Google Assistant) in order to secretly send commands. The researchers embedded secret messages, undetectable to the human ear, in an existing audio track. The track could be played and the AI could be told to do any number of things, from transferring money, to adding an item to a shopping list, or opening a malicious website. The adversarial applications of this are immense and abundant. A nefarious actor could surreptitiously activate a device, mute it, and then send and receive information stored on it or even use it to unlock doors, start cars, or call other devices. As the Army becomes more reliant on AI and automation, its vulnerability to Personalized Warfare attacks via these axes will increase. Will the Army ever be able to use voice-activated devices that can be so easily compromised by an undetectable source?
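The Berkeley attack itself relies on adversarial optimization against a specific speech-recognition model, but the core intuition — a machine-detectable signal riding far below the level a listener notices inside an ordinary audio track — can be sketched in a few lines. The toy example below (a simple low-amplitude mix; the tone frequencies, gain, and function names are illustrative choices, not the researchers’ actual method) hides a quiet “payload” tone inside a louder “carrier,” then shows that a receiver correlating against the known payload still recovers it:

```python
import math

SAMPLE_RATE = 16_000  # samples per second

def tone(freq_hz, seconds, amplitude=1.0):
    """Generate a pure sine tone as a list of float samples."""
    n = int(SAMPLE_RATE * seconds)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def embed(carrier, payload, gain=0.01):
    """Mix a low-amplitude payload into a much louder carrier track."""
    return [c + gain * p for c, p in zip(carrier, payload)]

def correlate(signal, template):
    """Normalized correlation of a signal against a known template."""
    dot = sum(s * t for s, t in zip(signal, template))
    norm = math.sqrt(sum(s * s for s in signal) *
                     sum(t * t for t in template))
    return dot / norm if norm else 0.0

# A loud 440 Hz "music" carrier and a quiet 3 kHz "command" payload.
carrier = tone(440.0, 0.5)
payload = tone(3000.0, 0.5)
mixed = embed(carrier, payload, gain=0.01)

# The perturbation stays 40 dB below the carrier, so a listener
# effectively hears only the music...
max_delta = max(abs(m - c) for m, c in zip(mixed, carrier))

# ...yet a receiver correlating against the known payload sees a
# clear spike relative to the clean carrier.
hidden_score = correlate(mixed, payload)
clean_score = correlate(carrier, payload)
```

Real attacks are subtler — they perturb the audio so that the *recognizer’s* transcription changes while human perception does not — but the asymmetry illustrated here (inaudible to people, unambiguous to a machine) is exactly what makes voice-activated systems hard to defend.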
At a recent workshop, the Mad Scientist community was informed of the constraints associated with neural embedded man-machine interfaces – namely, conventional electrode materials will degrade relatively quickly via corrosion brought on by the human brain’s inflammatory immune system response. This challenge may have been overcome by researchers at Carnegie Mellon University, funded by the Defense Advanced Research Projects Agency (DARPA), who have developed a “flexible, squishy silicon-based hydrogel that sticks to neural tissue, bringing non-invasive electrodes to the brain’s surface.” As a tissue analog, this hydrogel is less likely to trigger the brain’s natural defensive response, thus potentially revolutionizing the integration of prosthetics and medical devices with patients’ brains. As with most disruptive technologies, preliminary niche applications (in this case, medical) may jump, initially to the edge, then possibly ripple throughout society. The advent of hydrogel-based electrodes has the potential to accelerate the current transhumanism movement and facilitate direct brain-machine interfaces, as envisioned in Mr. Howard Simkin’s Sine Pari post. Projected forward, the possibility of an Internet of Everything and Everyone may prove to be a two-edged sword, facilitating both the direct upload of knowledge on demand, and the direct hacking of individuals.
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: email@example.com — we may select it for inclusion in our next edition of “The Queue”!
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by returning guest blogger Mr. Frank Prautzsch, addressing the Black Swans and Pink Flamingos associated with our last terrestrial frontier — the Arctic. Mr. Prautzsch has previously posted on how the future of vaccines is in nano-biology and convergence with the immune system of the body.]
By now, all Mad Scientists and understudies of history are familiar with the conditions for the unknown, unknowns (i.e., Black Swans) and the known, knowns (i.e., Pink Flamingos). Perhaps no place has a greater collection of these attributes than the Arctic. The Arctic Region remains arguably our last international frontier. Over the last 20 years, climate change, unexplored energy reserves, short transpolar navigation, eco-tourism, and commerce (with and without indigenous populations) are taking center stage among stakeholders. This focused international interest in the Arctic has pronounced security implications.
To start our Arctic flamingo category, potential energy reserves take center stage. The US Geological Survey estimates that the Arctic holds 18-23% of the untapped oil reserves remaining on the planet. Alaska and West Siberia are estimated to hold 30% of the world’s remaining gas reserves. Russia has attained strategic deals with Exxon Mobil, Eni, and Statoil for securing up to $500B in investment over the next 30 years. Shell paid $2.1B for 275 blocks of off-shore drilling plots northwest of Alaska, but has encountered difficulties in the harsh climate. The United States and Norway are building stronger partnerships on Arctic drilling.
Perhaps the largest flamingo, and also the leader of the flock (or flamboyance), is the Russian military. Most of the Arctic Ocean littoral states are modernizing their military forces in the Arctic. With countries rebuilding their Arctic military capabilities, in concert with vague territorial zones, rich natural resource options, and no real enforcement of maritime law, some concern should be paid to any Russian attempt to have prime sectors of the Arctic become a “new Crimea”. This is a particularly acute topic should Russia model its behaviors after China and the lack of a concerted international community response to China’s sovereignty claims in the South China Sea.
In August of 2007, two Russian mini-subs planted a Russian flag on a titanium mast 14,000 feet below the North Pole. This was tied to their interpretation of the 1982 UN Convention on the Law of the Sea allowing nations to claim sub terrain beyond 200 nautical miles if they prove that such a location is part of their continental shelf.
In the summer of 2014, “Putin broke away from talking about the Ukraine, and indicated that Russia’s future really didn’t lie to its west, but instead in the north. ‘Our interests are concentrated in the Arctic…. And of course, we should pay more attention to issues of development of the Arctic and the strengthening of our position [there].’”
For the past 4 years, Norway, Finland, and Sweden have joined much of the international community to overtly criticize Russian representatives and share their disappointment over Russian violation of international and maritime law by invading Crimea and Ukraine. Finland and Sweden are exploring NATO membership. The volume of Russian Tu-95 Bear C4ISR overflights of Finland’s and Sweden’s waters has gone up exponentially. The bombers are flying C4ISR missions but could easily be armed to follow through with their primary mission.
The day after sanctions were placed on Russia for the invasion of Crimea and their activities in Ukraine, President Putin moved an expeditionary naval fleet into the Arctic. The ships were dispatched to deliver personnel, equipment, and supplies to the New Siberian Islands where a permanent base was constructed. Central to this operation are revitalized military bases at Kotelny and Wrangel Islands, which were abandoned in 1993. Kotelny now has an airbase and is permanent home of the 99th Arctic Tactical Group. Another new air base was commissioned at Cape Schmidt. Additionally, an expanded airstrip at Novaya Zemlya can now accommodate fighters supporting the Northern Fleet. These moves have prompted serious criticism from Canada.
In addition to Kotelny, the Russian Northern Fleet has expanded operations from the Russian town of Alakurtti, Murmansk, which is 50km from the Finnish border. Large portions of the rest of the Northern Fleet are now there with a full complement of 39 ships and 45 submarines. Alakurtti is the new garrison for the Russian Arctic Command with 6,000+ Arctic-trained ground troops. Russia maintains a fleet of 54 icebreakers and 78 icebreaker-hulled oil/gas supertankers, while the US maintains one heavy breaker and has hopes of acquiring three more. Russia feels that they have to move to the Arctic Ocean to secure their energy future, and to protect the economic interests of their country. The Russians intend to “homestead”, realizing that global energy supplies will again favor their geographical posture someday. From this strategic position, they maintain multiple black swan options, for which the West will have little to challenge.
Reinforcing these energy efforts in the Arctic, Russia deployed its first floating nuclear power plant in April 2018. Moving slowly from St. Petersburg, its final destination is Pevek on the Arctic coast, where it will replace a land-based nuclear power source. This position is 86km from Alaska and the installation uses similar reactor technology to that of Russian icebreaker ships. The primary concern of the international community is the potential for a nuclear accident, but this new power plant could serve a broader Russian purpose in sequestering and dominating the Arctic and its resources.
One of the baby black swans in the Arctic is our unknown ability to generate a clear national will and investment for heavy icebreakers. The US Coast Guard seeks to build three heavy and three medium icebreakers of US lineage. The entire world (including Russia) seeks the help of Finland in icebreaker building and development. With a lack of port construction facilities and a Request for Proposal for three heavy icebreakers from untested US designs, this cygnet will fight for survival against cost, schedule, performance, and risk. While it all looks good on paper, the operational risks to Arctic operations between now and 2025 are pronounced. At the end of the day, the US should consider modular multi-role heavy icebreaker oil/gas tanker hulls that could be tailored to support multiple US missions and interests, including ship escort duty, refueling, C4ISR, vertical lift support, contingency maneuver asset delivery, oil recovery, medical, SAR, and scientific missions…not just icebreaking.
Another indigo black swan is the US Army’s and its Sister Services’ cold weather climate equipment and training. We must improve our survival techniques, mobility and transport, and combat capabilities in cold weather. In a recent Arctic exercise, the US Marine Corps borrowed arctic tentage and soldier wearables from Sweden and Norway. US Arctic tentage has not changed since the mid-1950s. A recent Request for Information introduces the USMC to its first change in cold weather hats and gloves in decades. It is also important to understand the lack of available C4ISR, the performance of commercial off-the-shelf C4ISR systems subjected to heating and condensation cycles, and the effects of cyber on C4ISR and transportation. Failure in any of these mission areas will render a highly capable force almost useless.
Finally, the most enormous black swan on the planet resides in the Arctic tied to climate change. The introduction of pronounced Arctic sheet ice melting over the past eight summers has opened up the potential for at least seasonal trans-polar shipping and also selected air routes. While we are in love with pretty pictures of polar bears on ice floes and satellite imagery of an ever-shrinking mass, the true story is 3D. We soon will not have Arctic ice of any accumulated age. This is due to warm subsurface ocean thermoclines and high densities of chlorophyll not before seen. Certain NOAA models predict “Ice Free” Arctic Summers by 2047. The impact on climate, fish/wildlife, weather severity, sea levels, and of course human beings is far from being understood. The ice shelves of Greenland are losing one cubic mile of fresh water per week. This is the equivalent of all of the drinking water consumed by Los Angeles in a year.
So, what do we know going into the unknown? (National Geographic, April 2017)
1. The earth’s temperature goes up and down, but it’s gone up 1.69 deg. F consistently since the end of WWII.
2. CO2 warms the planet and we have increased the amount of CO2 by almost half since 1960.
3. 97% of scientists and 98% of authors fault humans for global warming.
4. Arctic sea ice is shrinking and glaciers are receding worldwide.
5. The number of climate-related disasters has tripled since 1980.
6. Retreat and extinction of various plants and animals is starting to occur.
7. Albeit noble, the switch to renewable energy does not offset the world appetite for energy.
While the green-think world worries, commerce is casting an eye on how the Northwest Passage can cut shipping distances between Asia and Europe by 3,500 to 4,500 miles. A French cruise line is preparing for trans-polar cruises during optimal weather and navigation windows. Russia will seek transit and escort fees over its sovereign territories. Reykjavik, Iceland has been labeled the Singapore of 2050. The truth is that we will all have to confront the unknowns of this great swan over time, and we are ill prepared for that confrontation. While Russia looks like a flamingo, its Arctic behaviors can be totally swan-like. If we are looking into the future, we must fear our drift toward fair-weather Clausewitzian warfare while the rest of the planet sees otherwise. Enjoy the birds!
In his current role as President of Velocity Technology Partners LLC, Mr. Frank Prautzsch (LTC, Ret., Signal Corps) is recognized as a technology and business leader supporting the government and is known for exposing or crafting innovative technology solutions for the DoD, SOF, DHS, and the Intelligence Community. He also consults for the MEDSTAR Institute for Innovation. His focus is upon innovation, not invention. Mr. Prautzsch holds a Bachelor of Science in Engineering from the United States Military Academy at West Point and is a distinguished graduate of the Marine Corps Signal Advanced Course, Army Airborne School, Ranger School, and the Command and General Staff College. He also holds a Master of Science in Systems Technology (C3) and Space from the Naval Postgraduate School in Monterey, California.
The Mad Scientist team participates in many thought exercises, tabletop exercises, and wargames associated with how we will live, work, and fight in the future. A consistent theme in these events is the idea that a major barrier to the integration of robotic systems into Army formations is a lack of trust between humans and machines. This assumption rings true as the media and opinion polls describe how society doesn’t trust some disruptive technologies, like driverless cars or the robots coming for our jobs.
In his recent book, Army of None, Paul Scharre describes an event that nearly led to a nuclear confrontation between the Soviet Union and the United States. On September 26, 1983, LTC Stanislav Petrov, a Soviet Officer serving in a bunker outside Moscow, was alerted to a U.S. missile launch by a recently deployed space-based early warning system. Petrov trusted his “gut” – or experientially informed intuition – that this was a false alarm. His gut was right, and the world was saved from an inadvertent nuclear exchange because this officer did not overtrust the system. But is this the rule or an exception to how humans interact with technology?
The subject of trust between Soldiers, between Soldiers and Leaders, and between the Army and society is central to the idea of the Army as a profession. At the most tactical level, trust is seen as essential to combat readiness, as Soldiers must trust each other in dangerous situations. Humans naturally learn to trust their peers and subordinates once they have worked with them for a period of time: you learn someone’s strengths and weaknesses, what they can handle, and under what conditions they will struggle. This human dynamic does not translate directly to human-machine interaction, and the tendency to anthropomorphize machines could be a huge barrier.
We recommend that the Army explore the possibility that Soldiers and Leaders could overtrust AI and robotic systems. Overtrust of these systems could blunt the human expertise, judgement, and intuition thought to be critical to winning in complex operational environments. Overtrust might also introduce additional adversarial vulnerabilities, such as deception and spoofing.
In 2016, a research team at the Georgia Institute of Technology revealed the results of a study entitled “Overtrust of Robots in Emergency Evacuation Scenarios.” The research team put 42 test participants into a simulated fire emergency with a robot responsible for escorting them to an emergency exit. As the robot passed obvious exits and got lost, 37 participants continued to follow the robot, and an additional 2 stood with the robot and didn’t move toward either exit. The study’s takeaway was that roboticists must think about programs that will help humans establish an “appropriate level of trust” with robot teammates.
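The scale of that result is worth pausing on. A quick back-of-the-envelope tally (a minimal sketch based only on the counts reported above, not part of the study’s own analysis) shows just how lopsided the behavior was:

```python
# Tally of the Georgia Tech "Overtrust of Robots" results as
# summarized above: 42 participants, a lost robot, obvious exits.
total = 42
followed_lost_robot = 37   # kept following the robot past visible exits
froze_with_robot = 2       # stayed put with the robot, used no exit

overtrusting = followed_lost_robot + froze_with_robot
rate = overtrusting / total
print(f"{overtrusting} of {total} participants overtrusted the robot ({rate:.0%})")
```

In other words, roughly nine out of ten participants deferred to a machine that was visibly failing at its task.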
In Future Crimes, Marc Goodman writes of the idea of “In Screen We Trust” and the vulnerabilities this trust builds into our interaction with automation. His example of the cyber-attack against the Iranian uranium enrichment centrifuges highlights the vulnerability of experts believing, or trusting, their screens against mounting evidence that something else might be contributing to the failure of the centrifuges. These experts overtrusted their technology, or simply did not have an “appropriate level of trust.” What does this have to do with Soldiers on the future battlefield? Increasingly, we depend on our screens and, in the future, our heads-up displays to translate the world around us. This translation will only become more demanding on the future battlefield, with war at machine speed.
So what should our assumptions be about trust and our robotic teammates on the future battlefield?
1) Soldiers and Leaders will react differently to technology integration.
2) Capability developers must account for trust building factors in physical design, natural language processing, and voice communication.
3) Intuition and judgement remain critical components of human-machine teaming and operating on the future battlefield. Speed becomes a major challenge as humans become the weak link.
4) Building an “appropriate level of trust” will need to be part of Leader Development and training. Mere expertise in a field does not prevent overtrust when interacting with our robotic teammates.
5) Lastly, lack of trust is not a barrier to AI and robotic integration on the future battlefield. These capabilities will exist in our formations as well as those of our adversaries. The formation that develops the best concepts for effective human-machine teaming, with trust being a major component, will have the advantage.