[Editor’s Note: Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]
“We thought that we had the answers, it was the questions we had wrong” – Bono, U2
As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is exactly that: an ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi, Boston University, recently described algorithms as incapable of distinguishing right from wrong; unlike humans, they cannot judge their actions against ethical norms. For algorithms, data is the ultimate determining factor.
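Word embeddings, the subject of much of Bolukbasi’s published research, offer a concrete illustration of how bias arrives through data rather than malice. The sketch below is illustrative only: the four-dimensional vectors are invented for the example (real embeddings such as word2vec carry hundreds of dimensions learned from large text corpora), but the mechanics of projecting occupation words onto a he/she direction mirror the kind of measurement his team described.

```python
# Illustrative sketch: measuring gender bias learned by word embeddings.
# The vectors below are invented toy data; real embeddings are trained on
# large text corpora and inherit whatever associations that text contains.
import numpy as np

def unit(v):
    """Scale a vector to unit length so dot products become cosine similarities."""
    return v / np.linalg.norm(v)

# Hypothetical 4-dimensional embeddings (real ones have 300+ dimensions).
vectors = {
    "he":       np.array([ 1.0, 0.1, 0.0, 0.2]),
    "she":      np.array([-1.0, 0.1, 0.0, 0.2]),
    "engineer": np.array([ 0.6, 0.8, 0.1, 0.0]),
    "nurse":    np.array([-0.7, 0.7, 0.2, 0.0]),
}

# The gender direction is the difference between the "he" and "she" vectors.
gender_direction = unit(vectors["he"] - vectors["she"])

# Projecting an occupation onto that direction scores its learned association:
# positive leans "he", negative leans "she". A bias-neutral embedding would
# score occupations near zero.
for word in ("engineer", "nurse"):
    score = float(np.dot(unit(vectors[word]), gender_direction))
    print(f"{word:>8s}: {score:+.2f}")
```

Nothing in the arithmetic is malicious; the skew is inherited entirely from the training data, which is precisely the point made above.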
Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).
“It is difficult to make predictions, particularly about the future.” This quote has been attributed to everyone from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.
The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at: email@example.com.
1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that discriminates against specific genders, ethnicities, races, or even ages?
2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?
3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?
4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?
For additional information impacting on this important discussion, please see the following:
[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!
Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!
In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger for the latter half of FY18! In recognition of his achievement, Sam will receive much coveted Mad Scientist swag.
While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped, developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation. Russia is learning how to integrate robotic lethal ground combat systems….
Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]
Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.
Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”
But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing to determine whether they can stand up to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create demand for their newest achievement before there is proof that such technology can survive harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs. The MOD also runs conferences, such as the annual “Robotization of the Armed Forces,” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.
This brings us to one of the more interesting developments in Russian UGVs. Then Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, marking the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62mm machine guns, along with a variety of other weapons.
Just as importantly, it was designed to operate at a distance of up to three kilometers (about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled, and it was definitely designed to survive battlefield realities….
However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:
• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.
• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.
• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.
• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.
• The OCH-4 optical system did not allow for adequate detection of the adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.
• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which largely defeated its purpose as a combat vehicle.
• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.
On the one hand, these many failures are a sign that this much-discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it is put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.
However, the MOD has been testing the Uran-9 for a while now, showing videos of this machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria was the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly-released testing video probably foretold some of the Syrian glitches – in this particular one, the Uran-9 is shown firing its machine guns while moving, but its cannon is fired only when the vehicle is stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles around, not the kind of complex terrain and dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions. Unless, of course, the Syrian tests effectively limit its use in future combat.
On the other hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological shortfalls that are ubiquitous worldwide at this point in the global development of similar unmanned systems:
• Low level of current UGV autonomy;
• Low level of automation of command and control processes of UGV management, including repairs and maintenance;
• Low communication range, and;
• Problems associated with “friend or foe” target identification.
Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions, which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:
• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.
• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).
• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.
• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.
The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:
• Creating secure communication channels;
• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;
• Developing miniaturized and effective ISR components;
• Integrating automated command and control systems, and;
• Better optics, electronics and data processing systems.
According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operating autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.
Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a warning sign for developers of other Russian unmanned ground vehicles – like Soratnik, Vihr, and Nerehta – since it displays the full range of deficiencies that surface outside of the well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment – this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during Uran-9 tests exposed a serious combat weakness.
It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, if it can perform its mission autonomously in the rapidly-changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards achieving such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in one-time attack mode against fortified adversary positions or firing points track closely with the ways that Western military analysts are thinking such weapons could be used in combat.
The Uran-9 is still a test bed, and much has to take place before it can be successfully integrated into the current Russian concept of operations. We can expect more eye-opening “lessons learned” from the potential deployment of this and other UGVs in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.
To make matters more interesting, the Russians have been public both with their statements about new technology being tested and evaluated and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.
Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference. The views expressed here are his own.
[Editor’s Note: Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Ecosystems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.
[Gartner disclaimer: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]
“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.
That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.
So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”
The panel’s responses ranged from controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial – “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”
While no consensus of opinion was arrived at, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game-changing technologies. Unlike Nuclear, Biological, and Chemical weapons, no internationally agreed and implemented control protocols can be applied to AI, as there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operations (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC with its Military-Civil Fusion Centers, integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!
Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. This conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and help practitioners develop fair and privacy-preserving data analyses. Like ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commander’s decisions) – especially lethal decisions. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, code the very programming that enables machines to learn and make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine learning outputs are verified and understood to preclude the inadvertent introduction of human biases. Read the full report here.
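As a concrete example of what verifying machine learning outputs for bias might involve, the sketch below computes one of the simplest fairness checks discussed in this literature, demographic parity, on invented data. The 0.8 threshold echoes the “four-fifths rule” from U.S. employment law; the right metric and threshold for any military decision aid would be a matter for policy and doctrine, not this illustration.

```python
# Minimal sketch of a demographic-parity audit on a binary classifier's
# outputs. All data below are invented for illustration.
import numpy as np

# Hypothetical model decisions (1 = favorable outcome) and group labels.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

def selection_rate(decisions, groups, g):
    """Fraction of group g's members who received the favorable outcome."""
    return decisions[groups == g].mean()

rate_a = selection_rate(decisions, groups, "A")
rate_b = selection_rate(decisions, groups, "B")

# Disparate-impact ratio: values below ~0.8 (the "four-fifths rule") are a
# common flag that one group is being treated unfavorably.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
```

Demographic parity is only one of several competing fairness definitions (equalized odds and calibration are others), and they cannot all be satisfied at once; choosing among them is itself a value judgment that, in a military context, would belong to commanders and policymakers rather than programmers.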
In a study published in PLOS ONE, researchers found that a robot’s personality affected a human’s decision-making. In the study, participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant to resist shutting it down. The participants’ actions were then recorded. Unexpectedly, a large number of participants resisted shutting down the functional robots after they made their plea, as opposed to the sociable ones. This is significant. It shows, beyond the unexpected result, that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities, despite knowing they are robotic, if they mimic and emulate human behavior. If the Army believes its Soldiers will be accompanied and augmented heavily by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.
While the advent of the Internet brought computing and communication ever deeper into households around the globe, the smartphone revolution brought about constant personal interconnectivity. Today and into the future, not only are humans being connected to the global commons via their smart devices, but a multitude of devices, vehicles, and various accessories are being integrated into the Internet of Things (IoT). Previously, we addressed the IoT as a game changing technology. The IoT is composed of trillions of internet-linked items, creating opportunities and vulnerabilities. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).
Large companies are expected to quickly grow their spending on Internet-connected devices (i.e., appliances, home devices [such as Google Home, Alexa, etc.], various sensors) to approximately $520 billion. This is a massive investment into what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it is likely that it will expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):
– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data, allowing service members to use personal devices, and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.
– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.
With the possibility of a “cyber Pearl Harbor” becoming increasingly imminent, intelligence officials warn of the rising danger of cyber attacks. Effects of these attacks have already been felt around the world. They have the power to break the trust people have in institutions, companies, and governments as they act in the undefined gray zone between peace and all-out war. The military implications are quite clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence communications and materiel and personnel networks. Besides the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have felt the effects of cyber attacks, while others are reluctant to invest in cyber protection measures. In this way, civilians become affected by acts of cyber warfare, and attacks on a country may be directed not at the opposing military but at the civilian population of a state, as in the case of power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to most effectively combat cyber threats.
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: firstname.lastname@example.org — we may select it for inclusion in our next edition of “The Queue”!
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]
No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.
Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]
Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”
Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.
The nature of something is the basic makeup of that thing. It is, at core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl von Clausewitz, what his views are on the matter.
He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.
The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past and remains unfulfilled. The chaos of war, its fog, friction, and chance will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to age-old practices of denial, deception, and camouflage.
The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.
Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.
Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.
Not to mention the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is what tools they use to execute that mission.
To learn more about the changing character of warfare:
– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.
– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.
LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.
Note: The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916. Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War. (Source: Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)
[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6.
[II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, Machine-Waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/.
[III] Ibid.
[IV] Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85.
[V] Ibid, 87.
[VI] Ibid.
[VII] Ibid, 86.
[VIII] John Allen and Amir Husain, “On Hyper-War,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.
[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]
Preface: Based on my experience in cybersecurity, migrating to the broader cyber field, there have always been those exceptional individuals who have an inimitable ability to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals – because someone must set the rules and the boundaries, and point out the trajectory for Artificial Intelligence, at initiation.
As an industrialized society, we tend to see technology and the information that feeds it as the weapons – and ignore the few humans who have a large-scale direct impact. Even if identified as a weapon, how do you make a human mind classified? Can we protect these high-ability individuals who, in the digital world, are weapons – not tools, but compilers of capability – or are we still focused on the tools? Why do we see only weapons made of steel and electronics, and not the weaponized mind? I believe firmly that we underestimate the importance of Applicable Intelligence – the ability to play the cyber engagement in the optimal order. Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034, close enough to be realistic and far enough out for things to happen. Our adversaries are betting that we rely on a few minds more than we are willing to accept.
Post: In a not too distant future, on the 20th of August 2034, a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives: watching a 3-D printer making a protein sandwich at a breakfast restaurant; stepping out from the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone, when the geopolitical temperature increases but we are not yet at war, our adversary acts quickly and expedites a limited number of targeted killings within the United States of persons who are unknown to the mass media and the general public, and who have only one thing in common – Applicable Intelligence (AI).
The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available; anyone can download them from the Internet and use them, but the weaponization of the tools occurs when they are used by someone who understands how to play them in an optimal order. These minds are different because they see an opportunity to exploit in a digital fog of war where others don’t or can’t see it. They address problems unburdened by traditional thinking, in new, innovative ways, maximizing the dual-purpose nature of digital tools, and can create tangible cyber effects.
It is the Applicable Intelligence (AI) that creates the procedures and the application of tools, combining simple digital software into sets whose convergence yields digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.
Sadly we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and at the dawn of conflict killed off the weaponized minds, without discretion, and with no concern for international law or morality.
These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.
All of our efforts were instead focused on what these minds delivered, the application and the technology, which was hidden in secret vaults and only discussed in sensitive compartmented information facilities. We classify to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, to the most critical component, the militarized intellect, we assign no value, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is input to the production of the technical machinery. The marveled technical machinery is the only thing we care about today, in 2018, and, as it turned out, in 2034 as well.
We are stuck in how we think, and we are unable to see it coming, but our adversaries see it. At a systematic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed. As the armory of the war of 1812, as the stockpile of 1943, and as the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics – we failed in 2034 to see weapons made of corn, steak, and an added combative intellect.
General Nakasone stated in 2017, “Our best ones [coders] are 50 or 100 times better than their peers,” and continued “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” In reality, the success of cyber and cyber operations is highly dependent not on the tools or toolsets but instead upon the super-empowered individual that General Nakasone calls “the 50-x coder.”
There were clear signals that we could have noticed before General Nakasone pointed it out so clearly in 2017. The United States’ Manhattan Project during World War II had at its peak 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in by our path dependency as an engineering society where we hail the technology and forget the importance of the humans behind it.
America’s endless love of technical innovations and advanced machinery reflects in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies, followed by the intercontinental railroad, the Panama Canal, the manufacturing era, the moon landing, and all the way to the autonomous systems, drones, and robots. In a default mindset, there is always a tool, an automated process, a software, or a set of technical steps that can solve a problem or act.
The same mindset sees humans merely as an input to technology, so humans are interchangeable and can be replaced. In 2034, in the era of digital conflicts and the war between algorithms, with engagements occurring at machine speed and no time for leadership or human interaction, it is the intellects who design these systems and understand how to play them that matter. We didn’t see it.
In 2034, the Cyber Pearl Harbor resides in fewer than twenty bodies piled up after targeted killings. It was not imploding critical infrastructure, a tsunami of cyber attacks, nor hackers flooding our financial systems, but instead traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don’t understand, unable to play it in the right order, and with limited intellectual torque to see through the fog of war provided by an exploding kaleidoscope of nodes and digital engagements.
Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was earlier a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and earned a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP, ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in the Strategic Studies Quarterly, Joint Forces Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.
On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC. Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces. The new and innovative learning capabilities addressed at this conference will enable our Soldiers and Leaders to act quickly and decisively in a changing Operational Environment (OE) with fleeting windows of opportunity and more advanced and lethal technologies.
We have identified the following “Top 10” takeaways related to Learning in 2050:
1. Many learning technologies built around commercial products are available today (Amazon Alexa, Smart Phones, Immersion tech, Avatar experts) for introduction into our training and educational institutions. Many of these technologies are part of the Army’s concept for a Synthetic Training Environment (STE), and there are nascent manifestations already. For these technologies to be widely available to the future Army, the Army of today must be prepared to address:
– The cultural challenges associated with changing the dynamic between learners and instructors, teachers, and coaches; and
– Adequate funding to produce capabilities at scale, so that digital tutors and other technologies (Augmented Reality [AR] / Virtual Reality [VR], etc.), as well as the skills required in a dynamic future (like critical thinking and groupthink mitigation), are widely available or perhaps ubiquitous.
2. Personalization and individualization of learning in the future will be paramount, and some training that today takes place in physical schools will be more the exception, with learning occurring at the point of need. This transformation will not be limited to lesson plans or even just learning styles:
– Project-oriented learning; when today’s high school students are building apps, they are asked “What positive change do you want to have?” One example is an open table for Bully Free Tables. In the future, learners will learn through working on projects;
– Project-oriented learning will lead to a convergence of learning and operations, creating a chicken (learning) or the egg (mission/project) relationship; and
– Learning must be adapted to consciously address the desired, or extant, culture.
3. Some jobs and skill sets have not even been articulated yet. Hobbies and recreational activities engaged in by kids and enthusiasts today could become occupations or Military Occupational Specialties (MOSs) of the future (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer — think Minecraft and Fortnite with real-world physical implications). Some emerging trends in personalized warfare, big data, and virtual nations could bring about the necessity for specialists that don’t currently exist (e.g., data protection and/or data erasure specialists).
4. The New Human (who will be born in 2032 and is the recruit of 2050) will be fundamentally different from the Old Human. The Chief of Staff of the Army (CSA) in 2050 is a young Captain in our Army today. While we are arguably cyborgs today (with integrated electronics in our pockets and on our wrists), the New Humans will likely be cyborgs in the truest sense of the word, with some having embedded sensors. How will those New Humans learn? What will they need to learn? Why would they want to learn something? These are all critical questions the Army will continue to ask over the next several decades.
5. Learning is continuous and self-initiated, while education is a point in time and is “done to you” by someone else. Learning may result in a certificate or degree – similar to education – or can lead to the foundations of a skill or a deeper understanding of operations and activity. How will organizations quantify learning in the future? Will degrees or even certifications still be the benchmark for talent and capability?
6. Learning isn’t slowing down; it’s speeding up. More and more things are becoming instantaneous, and humans have no concept of extreme speed. Tesla cars can update their software overnight, with owners getting into an effectively different car each day. What happens to our Soldiers when military vehicles change much more iteratively? This may force a paradigm shift wherein learning means tightening local and global connections (tough to do considering government/military network security, firewalls, vulnerabilities, and constraints); viewing technology as extended brains all networked together (similar to Dr. Alexander Kott’s look at the Internet of Battlefield Things [IoBT]); and leveraging these capabilities to enable Soldier learning at extremely high speeds.
7. While there are a number of emerging concepts and technologies to improve and accelerate learning (TNT, extended reality, personalized learning models, and intelligent tutors), the focus, training stimuli, data sets, and desired outcomes all have to be properly tuned and aligned or the Learner could end up losing correct behavior habits (developing maladaptive plasticity), developing incorrect or skewed behaviors (per the desired capability), or assuming inert cognitive biases.
8. Geolocation may become increasingly less important when it comes to learning in the future. If Apple required users to go to Silicon Valley to get trained on an iPhone, they would be exponentially less successful. But this is how the Army currently trains. The ubiquity of connectivity, the growth of the Internet of Things (and eventually Internet of Everything), the introduction of universal interfaces (think one XBOX controller capable of controlling 10 different types of vehicles), major advances in modeling and simulations, and social media innovation all converge to minimize the importance of teachers, students, mentors, and learners being collocated at the same physical location.
9. Significant questions have to be asked regarding the specificity of training children at a young age: we may be overemphasizing STEM from early on and not helping them learn across a wider spectrum. We need transdisciplinarity in the coming generations.
10. 3-D reconstructions of bases, training areas, cities, and military objectives coupled with mixed reality, haptic sensing, and intuitive controls have the potential to dramatically change how Soldiers train and learn when it comes to not only single performance tasks (e.g., marksmanship, vehicle driving, reconnaissance, etc.) but also in dense urban operations, multi-unit maneuver, and command and control.
During the next two weeks, we will be posting the videos from each of the Learning in 2050 Conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel and the associated slides on our Mad Scientist APAN site — stay connected here at the Mad Scientist Laboratory.
One of the main thrusts in the Mad Scientist lines of effort is harnessing and cultivating the Intellect of the Nation. In this vein, we are asking Learning in 2050 Conference participants (both in person and online) to share their ideas on the presentations and topic. Please consider:
– What topics were most important to you personally and professionally?
– What were your main takeaways from the event?
– What topics did you want the speakers to expand upon further?
– What were the implications for your given occupation/career field from the findings of the event?
Your input will be of critical importance to our analysis and products that will have significant impact on the future of the force in design, structuring, planning, and training! Please submit your input to Mad Scientist at: email@example.com.
[Editor’s Note: Mad Scientist Laboratory is pleased to present (somewhat belatedly) our July edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
Mr. Nicholson summarizes a recent presentation by one of our favorite Mad Scientists, P.W. Singer from New America. Mr. Singer warns that as more and more items are linked to the Internet of Things, the opportunities for nations and societies (as well as non-state actors and super-empowered individuals) to attack and be attacked become much broader. He states that “all of this technology does not mean that we will see humans eliminated from war anytime soon. Rather, just like the steam engine and the plane and the computer, we will see changes in the human skills that are most needed and less needed. This movement of people skills can and should change everything from our recruiting and training to our doctrine and organizational design.” This movement of people skills was a key aspect of last week’s Mad Scientist Learning in 2050 Conference, conducted at Georgetown University on 8-9 August. The demands on Leaders and the skills required to compete in the changing character of war are probably fundamentally different. Mr. Singer challenges us to choose real change and not change just enough to fail. His example of the USS Arizona with its two catapult-launched float planes demonstrates a bureaucracy’s incremental approach in the face of revolutionary change. That change – modern bombers – made this once great warship a monument to a “Day that will live in Infamy.”
David Ignatius, famed spy novelist and Washington Post journalist, tackles not only espionage but also a multitude of disruptive technologies in his new thriller, The Quantum Spy. The book revolves around a race towards leap-ahead developments in quantum computing between the United States and China; a looming subplot is the cat and mouse game of counterintelligence, infiltration, and insertion of moles between the Central Intelligence Agency and the Chinese Ministry of State Security. CIA case officer and Army veteran Harris Chang struggles with his Chinese heritage, his devotion to America, and the sometimes unscrupulous role of his organization in fighting to protect America’s secrets. The book is replete with detailed and accurate descriptions of American innovation efforts. The depiction of the infiltration of American college campuses and research institutions by foreign students sponsored and often directed by foreign adversaries is alarming and timely, given recent real-world events such as a Chinese student taking groundbreaking work on metamaterials at Duke University back to his home country. The book raises important questions about the balance between open, collaborative innovation (which opens up a number of vulnerabilities) and more restrictive, government-funded research (which may be more secure), both of which are critical in the current Era of Accelerated Human Progress (now through 2035) as described in The Operational Environment and the Changing Character of Future Warfare. As with Agents of Innocence and Body of Lies, David Ignatius has created a work that not only features a fantastic story but also carries many government, military, and intelligence implications.
“War and the Human Brain” podcast with Dr. James Giordano and Mr. John Amble, Modern War Institute, 24 July 2018 (originally aired in 2017) – review by Marie Murphy.
Modern War Institute’s John Amble spoke with Dr. James Giordano about his research in neuroscience and using “the brain as a weapon” following his presentation at the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference, 25-26 July 2017, at Georgetown University, Washington, D.C. After a brief historical overview of neuroscience’s military applications, Dr. Giordano explains how recent research on electric and magnetic transcranial stimulation and implantable electrodes has opened up possibilities and controversies. Soldiers of the future could obtain modifications that improve memory, cognition, and vigilance while decreasing fatigue. Conversely, there is an ethical dilemma when it comes to discontinuing, removing, or deactivating these improvements; there is concern regarding the Soldier potentially feeling disabled or diminished afterwards. The discussion transitions to the implications of “drugs, bugs, toxins, and tools,” all of which can have some kind of effect on neurological activity, and all of which can be weaponized. These capabilities, while not considered weapons of mass destruction, are categorized as weapons of mass disruption. These tools and technologies pose a real, rising threat in the future Operational Environment; are deployable by nation-states, non-state actors, and super-empowered individuals; and can be specifically targeted for optimal impact. Read more about these capabilities in the Mad Scientist Bio Convergence and Soldier 2050 Conference Final Report.
On 17 July 2018, the UK’s Nuffield Council on Bioethics issued a press release in conjunction with their publication of Genome editing and human reproduction. The Council, established in 1991 to address ethical issues raised by new developments in biology and medicine, “concluded that editing the DNA of a human embryo, sperm, or egg to influence the characteristics of a future person (‘heritable genome editing’) could be morally permissible.” Futurism interpreted this as meaning we are “one step closer to designer babies,” and concluded it “is a promising sign for anyone eager for the day gene-editing lets them create the offspring of their dreams.” That said, the Council recommends two overarching principles governing the ethical use of heritable genome editing: “they must be intended to secure, and be consistent with, the welfare of the future person; and they should not increase disadvantage, discrimination or division in society.” The Council also noted that current British law precludes the genomic editing of embryos that are to be placed in a womb. So, no Brave New World in our future, right?
Not necessarily… As Mr. Hank Greely, Professor of Law, Stanford University, pointed out this spring at our Mad Scientist Bio Convergence and Soldier 2050 Conference, we are on the cusp of being able to use skin cells to generate lines of viable embryos, which may then be subjected to Preimplantation Genetic Diagnosis prior to selection and implantation to preclude a host of genetic diseases and ensure healthier babies (who could possibly object to that?). With the advent of genetic editing and artificial wombs, we will be able to manipulate the genomic coding of any given embryo (initially to address genetic disease, but eventually to enhance capabilities), implant it, and then “decant” the resulting progeny. Sound far-fetched? At the same conference, Ms. Elsa Kania, CNAS, noted that the PRC is currently gene editing human embryos and conducting human clinical trials. BGI (formerly the Beijing Genomics Institute) is soliciting DNA from the nation’s geniuses in an attempt to understand the genomic basis for intelligence. With the advent of genetically enhanced humans, it is conceivable that we could face adversaries in the Deep Future Operational Environment with warrior caste soldiers, each modified genetically as embryos for greater strength, endurance, and combat performance in complex and extreme environments (e.g., high/low temperatures, low atmospheric pressures) and with optimized Brain Computer Interfaces. Previous regimes sought to populate their forces with “Supermen” – genomic editing may provide future regimes with the post-industrial means of accomplishing this objective “… by the lights of perverted science.”
Red Meat Games released its fifth virtual reality (VR) project, Bring to Light. The developers designed this VR horror game to push players to their terror limits with the help of a biometric sensor. Bring to Light is currently the first VR game to use biometric feedback to affect gameplay; it calls to mind the Black Mirror episode “Playtest,” a near-future cautionary tale of the risks of combining VR and Augmented Reality (AR) with gaming. Cautionary tales notwithstanding, AR and VR will only become more integrated and player-involved. As discussed in last month’s edition of “The Queue,” VR also has the potential to accelerate learning and enhance retention when used to train our Soldiers and Leaders.
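To make the mechanic concrete, here is a minimal sketch of how a biometric sensor might drive a horror game’s pacing. This is purely illustrative: the heart-rate thresholds and the scare_intensity mapping are our assumptions, not Red Meat Games’ implementation.

```python
# A minimal sketch (not the game's actual code) of biometric-driven gameplay:
# the player's heart rate is sampled and mapped to a pacing "knob" so the
# game ramps tension up when the player is calm and backs off near panic.

RESTING_HR = 70   # assumed baseline, beats per minute
PANIC_HR = 140    # assumed ceiling the game tries not to exceed

def scare_intensity(heart_rate: float) -> float:
    """Map heart rate to a 0.0-1.0 pacing knob (higher = scarier events)."""
    arousal = (heart_rate - RESTING_HR) / (PANIC_HR - RESTING_HR)
    arousal = max(0.0, min(1.0, arousal))   # clamp to [0, 1]
    return 1.0 - arousal                    # already terrified -> ease off

if __name__ == "__main__":
    for hr in (65, 90, 120, 150):
        print(f"heart rate {hr:3d} bpm -> intensity {scare_intensity(hr):.2f}")
```

The interesting design choice such a loop forces is the inversion: unlike difficulty scaling in conventional games, a horror title throttles back as the measured response rises, keeping the player at the edge of, rather than past, their limits.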
Stanford University is working on a technology known as “Shapeshift” that presents users with a haptic “touch” interface, providing a bridge between VR and the physical world. Shapeshift is a high-resolution, compact, modular shape display consisting of 288 actuated pins (4.85mm × 4.85mm, with 2.8mm inter-pin spacing) formed by six 2×24 pin modules. It is reminiscent of the pin art toys played with by children and adults alike for years. The interface will allow users to truly feel the objects they see and interact with in VR, bringing about an entirely new level of immersion in constructed virtual or augmented worlds. The implications for accurate and intuitive modeling, design, simulation, and training are astounding. In the future, such interfaces could be utilized in vehicles, on or with weapons, and integrated into classrooms and other training venues.
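For a sense of how software might drive such a display, below is a minimal sketch assuming the six 2×24 modules are tiled into a 12×24 grid and that each pin accepts a normalized height command. The tiling, the height_map pattern, and the send_frame interface are all illustrative assumptions, not Stanford’s actual API.

```python
# A minimal sketch of driving a pin-based shape display like Shapeshift.
# Assumed (not from the Stanford work): a 12x24 tiling of the six 2x24
# modules, with each pin commanded by a normalized height in [0.0, 1.0].

import math

ROWS, COLS = 12, 24   # assumed arrangement of the 288 pins

def height_map(t: float) -> list[list[float]]:
    """Compute one frame of pin heights: a wave rolling across the surface."""
    frame = []
    for r in range(ROWS):
        row = [0.5 + 0.5 * math.sin(0.5 * c + 0.8 * r - 2.0 * t)
               for c in range(COLS)]
        frame.append(row)
    return frame

def send_frame(frame: list[list[float]]) -> None:
    """Stand-in for the hardware interface: preview the first pin row."""
    print(" ".join("▁▂▃▄▅▆▇█"[int(h * 7)] for h in frame[0]))

if __name__ == "__main__":
    for step in range(5):
        send_frame(height_map(step * 0.2))
```

In a real system the frame would stream to the pin actuators at the display’s refresh rate, with the height field generated from the VR scene’s geometry rather than a test pattern.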
Engineers from Tufts University have redesigned the bandage, with the intent of taking it from a passive to an active treatment for chronic wounds. Such skin wounds can result from burns, diabetes, or other medical conditions that overwhelm the normal regenerative capabilities of the skin. The bandage monitors the wound’s pH and temperature and can administer drugs when either goes out of normal range. While the bandage treats only certain chronic skin conditions at present, it is easy to see future implications of this technology, especially for Soldiers on the battlefield. Persistent or serious wounds could be monitored and treated in real time without taking the Soldier out of the fight or waiting for medical advice and treatment from a professional. This could reduce both cost and recovery time. What is the next step beyond smart bandages? Will it be feasible to embed general health sensors and a variety of treatments on the Soldier of the future?
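The bandage’s core logic is a simple closed loop: sample, compare against a normal range, actuate. A minimal sketch follows; the pH and temperature windows and the check_and_treat interface are assumed values for illustration, not the Tufts team’s specifications.

```python
# A minimal sketch, under assumed thresholds, of the smart bandage's
# closed-loop logic: monitor pH and temperature, and trigger drug release
# when either measurement leaves its normal range.

NORMAL_PH = (6.5, 7.5)        # assumed healthy wound-pH window
NORMAL_TEMP_C = (35.0, 38.0)  # assumed healthy skin-temperature window

def out_of_range(value: float, window: tuple[float, float]) -> bool:
    """Return True if a reading falls outside its normal window."""
    low, high = window
    return not (low <= value <= high)

def check_and_treat(ph: float, temp_c: float) -> str:
    """Decide whether the bandage should release a drug dose this cycle."""
    if out_of_range(ph, NORMAL_PH) or out_of_range(temp_c, NORMAL_TEMP_C):
        return "release drug dose and flag for medical review"
    return "continue monitoring"

if __name__ == "__main__":
    print(check_and_treat(ph=7.0, temp_c=36.5))   # healthy -> monitor
    print(check_and_treat(ph=8.2, temp_c=36.5))   # alkaline wound -> treat
```

Embedded on a Soldier, the same loop would run continuously, which is what makes real-time treatment possible without a medic in the decision path for routine cases.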
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: firstname.lastname@example.org — we may select it for inclusion in our next edition of “The Queue”!
Mad Scientist Laboratory is pleased to announce that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies this week (Wednesday and Thursday, 8-9 August 2018) in Washington, DC.
Future learning techniques and technologies are critical to the Army’s operations in the 21st century against adversaries in rapidly evolving battlespaces. The ability to effectively respond to a changing Operational Environment (OE) with fleeting windows of opportunity is paramount, and Leaders must act quickly to adjust to different OEs and more advanced and lethal technologies. Learning technologies must enable Soldiers to learn, think, and adapt using innovative synthetic environments to accelerate learning and attain expertise more quickly. Looking to 2050, learning enablers will become far more mobile and on-demand.
Looking at Learning in 2050, topics of interest include, but are not limited to: Virtual, Augmented, and Mixed Realities (VR/AR/MR); interactive, autonomous, accelerated, and augmented learning technologies; gamification; skills needed for Soldiers and Leaders in 2050; synthetic training environments; virtual mentors; and intelligent artificial tutors. Advanced learning capabilities present the opportunity for Soldiers and Leaders to prepare for operations and operate in multiple domains while overcoming current cognitive load limitations.
Plan to join us virtually at the conference as leading scientists, innovators, and scholars from academia, industry, and government gather to discuss:
1) How will emerging technologies improve learning or augment intelligence in professional military education, at home station, while deployed, and on the battlefield?
2) How can the Army accelerate learning to improve Soldier and unit agility in rapidly changing OEs?
3) What new skills will Soldiers and Leaders require to fight and win in 2050?
– Read our Learning in 2050 Call for Ideas finalists’ submissions here, graciously hosted by our colleagues at Small Wars Journal.
– Starting Tuesday, 7 August 2018, see the conference agenda’s list of presentations and the associated world-class speakers’ biographies here.
Join us at the conference on-line here via live-streaming audio and video, beginning at 0840 EDT on Wednesday, 08 Aug 2018; submit your questions to each of the presenters via the moderated interactive chat room; and tag your comments @TRADOC on Twitter with #Learningin2050.
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by returning guest blogger and proclaimed Mad Scientist Mr. Howard R. Simkin, hypothesizing the activities of an Operational Detachment Alpha (ODA) deployed on a security assistance operation in the 2050 timeframe. Mr. Simkin addresses how advanced learning capabilities can overcome what were once cognitive load limitations. This is one of the themes we will explore at next week’s Mad Scientist Learning in 2050 Conference; more information on this conference can be found at the bottom of this post.]
This is the ODA’s third deployment to the country, although it is Captain Clark Weston’s first deployment as a team leader. The rest of his ODA have long experience in the region and country. They all have the 2050 standard milspec augmentation of every Special Operations (SO) Operator: corneal and audial implants, subdural brain-computer interfaces, and medical nano-enhancement.
Unlike earlier generations of SO Operators aided by advanced technology, they can see into the near-infrared, understand sixty spoken languages, acquire new skill sets rapidly, interface directly with computers and view that information in a heads-up display without a device, and survive any injury short of dismemberment. However, they continue to rely on their cultural and human skills to provide those critical puzzle pieces from the human domain which technology and data science alone cannot.
No matter what technologies are at play, the human element will still be paramount. As the noted futurist and theoretical physicist Michio Kaku observed in his discussions of the ‘Cave Man Principle’, “whenever there is a conflict between modern technology and the desires of our primitive ancestors, these primitive desires win each time.”[I]
The sound of an onrushing thunderstorm briefly distracted CPT[II] Weston from the report he was compiling. His eyes scanned the equipment hung on wooden pegs protruding from the white plastered walls or scattered across the small wooden desk, adorned by a single switch-operated lamp. He couldn’t help smiling. The wooden pegs, plastered walls, and primitive lamp were a good metaphor for the region. His apartment back home sported the latest in technology: adaptive video-capable walls, a customized AI virtual assistant, and lighting and HVAC[III] that operated without human intervention. Here, it was back to basics.
His concentration broken, he stood up and stretched. Dark of hair and eyes, of medium height and slender build, he could easily pass for a native of the region. As for fluency in the local language, it had been baked into his neural circuitry through rigorous training, cognitive enhancements, and experience. A student of history, Weston had been surprised during his attendance at the SOF[IV] Captains Career Course when he read articles and papers that had heralded the death of language training.
He wondered. Didn’t the people who wrote those articles pause to consider that no technology works all the time? Either as a result of adversary action or the arrival of mean time between failures, a glitch in a technology-dependent language capability could be at best embarrassing and at worst catastrophic. Didn’t they realize that learning a new language alters the learner’s neural networks, allowing a nuanced understanding of a culture that software had not been able to achieve? Besides, around 65 percent of human communication is non-verbal, he reasoned. Language occurs in a shifting cultural context, something even the best AIs still couldn’t always tackle.
He paced around the room, reflecting on the past few months. Things had definitely taken a turn for the better. With very few exceptions, the Joint security assistance efforts he was aware of were going well. He was very proud of what his ODA had accomplished, training the Ministry of the Interior’s capital region paramilitary force (CRPF) to what Minerva[V] had deemed a sufficient level of competence in a wide range of tactical skills.
More importantly, as his Team Sergeant Abdel Jamaal had observed, “We got them to believe in themselves as protectors and to stop acting like bullies.” This had led to the development of an increasing number of information sources which in turn had led to the arrest of a number of senior narco-terrorists. He and Sergeant Jamaal had advised and assisted in those arrests in a virtual mode. To the local population, it looked like the CRPF was doing all of the work.
The team medical/civil affairs specialist, Sergeant First Class Belinda Tompkins, and the team cyber/additive manufacturing authority, Sergeant DeWayne Jones, had achieved quite a lot on their own. After consulting with the Nimble Griffin[VI] team, they had employed their expertise to upgrade the antiquated in-country hospital 3D printers to produce the latest gene-editing drugs and fight the diseases still endemic to the region. They had done this in the background, having the CRPF collect the machines quietly and then return them to the hospitals with great fanfare. The resulting media coverage was a public relations bonanza. The only US presence was virtual and invisible to the media and public.
A loud peal of thunder shook Weston from his thoughts. The lights flickered in his room, then steadied up. He sat back down at the table to finish his report. All in all, things were going very well.
[Note that any resemblance to any current events or persons, living or dead, is purely coincidental.]
If you enjoyed this post, please read Mr. Simkin’s article Technological Fluency 2035-2050, submitted in response to our Learning in 2050 Call for Ideas and hosted by our colleagues at Small Wars Journal.
Other Learning in 2050 Call for Ideas submissions include the following:
Please also plan on joining us virtually at the Mad Scientist Learning in 2050 Conference. This event will be live streamed on both days (08-09 August 2018). You can watch and interact with all of the speakers at the conference watch page or tag @TRADOC on Twitter with #Learningin2050. Note that the live streaming event is best viewed via a commercial internet connection (i.e., non-NIPRNet).
Howard R. Simkin is a Senior Concept Developer in the DCS, G-9 Concepts, Experimentation and Analysis Directorate, U.S. Army Special Operations Command. He has over 40 years of combined military, law enforcement, defense contractor, and government experience. He is a retired Special Forces officer with a wide variety of special operations experience.
________________________________________________________
[I] Kaku, M. (2011). Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. New York: Random House (Kindle Edition), 13.
[II] Captain.
[III] Heating, ventilation, and air conditioning.
[IV] Special Operations Forces.
[V] Department of Defense AI virtual assistant.
[VI] A Joint Interagency Cyber Task Force.
[Editor’s Note: The U.S. Army Training and Doctrine Command (TRADOC) G-2 is co-hosting the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies on 8-9 August 2018 in Washington, DC. In advance of this conference, Mad Scientist Laboratory is pleased to present today’s post by returning guest blogger Mr. Nick Marsella, addressing what is necessary to truly transform Learning in 2050. Read Mr. Marsella’s previous two posts addressing Futures Work at Part I and Part II.]
Only a handful of years ago, a conference on the topic of learning in 2050 would have spurred discussions on needed changes in the way we formally educate and train people to live successful lives and be productive citizens.[I] Advocates in K-12 would probably argue for increased investment in schools, better technology, and more STEM education. Higher educators would raise many of the same concerns, pointing to the value of “the academy” and its universities as integral to the nation’s economic, security, and social well-being by preparing the nation’s future leaders, innovators, and scientists.
Yet, times have changed. “Learning in 2050” could easily address how education and training must meet the immediate learning needs of the individual and support “lifelong learning” in a rapidly changing and competitive world.[II] The conference could also address how new discoveries in learning and the cognitive sciences will inform the education and training fields, and potentially enhance individual abilities to learn and think.[III] “Learning in 2050” could also focus on how organizational learning will be even more important than it is today – spelling the difference between bankruptcy and irrelevancy, or, for military forces, victory or defeat. We must also address how to teach people to learn and to organize themselves for learning.[IV]
Lastly, a “Learning in 2050” conference could also focus on machine learning and how artificial intelligence will transform not only the workplace but also national security.[V] Aside from understanding the potential and limitations of this transformative technology, we must increasingly train and educate people on how to use it to their advantage and understand its limitations for effective “human-machine teaming.” We must also provide opportunities for individuals to use newly fielded technologies and to learn when and how to trust them.[VI]
All of these areas would provide rich discussions and perhaps new insights. But just as LTG (ret) H.R. McMaster warned us about thinking about the challenges of future warfare, we must first acknowledge the continuities in this broad topic of “Learning in 2050” and its implications for the U.S. Army.[VII] Until the Army is replaced by robots, or knowledge and skills can be uploaded directly into the brain as shown in “The Matrix,” learning will involve humans – the Army’s Soldiers and its civilian workforce – and the learning process [not discounting organizational or machine learning].
While much may change in the way the individual will learn, we must recognize that the focus of “Learning in 2050” is on the learner and the systems, programs/schools, or technologies adopted in the future must support the learner. As Herbert Simon, one of the founders of cognitive science and a Nobel laureate noted: “Learning results from what the student does and thinks and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.”[VIII] To the Army’s credit, the U.S. Army Learning Concept for Training and Education 2020-2040 vision supports this approach by immersing “Soldiers and Army civilians in a progressive, continuous, learner-centric, competency-based learning environment,” but the danger is we will be captured by technology, procedures, and discussions about the utility and need for “brick and mortar schools.”[IX]
Learning results from what the student does and thinks and only from what the student does and thinks.
Learning is a process that involves changing knowledge, belief, behavior, and attitudes, and it is entirely dependent on the learner as he/she interprets and responds to the learning experience – in and out of the classroom.[X] Our ideas, concepts, or recommendations to improve the future of learning in 2050 must do at least one of the following: improve student learning outcomes, improve student learning efficiency by accelerating learning, or improve the student’s motivation and engagement to learn.
“Learning in 2050” must identify external environmental factors which will affect what the student may need to learn to respond to the future, and also recognize that the generation of 2050 will be different from today’s student in values, beliefs, attitudes, and acceptance of technology.[XI] Changes in the learning system must be ethical, affordable, and feasible. To support effective student learning, learning outcomes must be clearly defined – whether a student is participating in a yearlong professional education program or a five-day field training exercise – and must be understood by the learner.[XII]
We must think big. For example, Howard Gardner, Professor of Cognition and Education at Harvard’s Graduate School of Education, postulated that success in the 21st Century requires the development of the “disciplined mind, the synthesizing mind, the creative mind, the respectful mind, and the ethical mind.”[XIII]
Approaches, processes, and organization, along with the use of technology and other cognitive science tools, must focus on the learning process. Consider the typical officer career timeline, with formal educational opportunities sprinkled throughout the years.[XIV] While some form of formal education in “brick and mortar” schools will continue, one wonders if we will turn this model on its head – with more upfront education; shorter, focused professional education; more blended programs combining resident and non-resident instruction; and continual access to experts, courses, and knowledge selected by the individual for “on demand” learning. Today, we often use education as a reward for performance (e.g., resident PME); in the future, education must be a “right of the Profession,” equally provided to all (to include Army civilians) and necessary for performance as a member of the profession of arms.
The role of the teacher will change. Instructors will become “learning coaches,” helping the learner identify gaps and needs in meaningful and dynamic individual learning plans. Like the Army’s Master Fitness Trainer, who advises and monitors a unit’s physical readiness, we must create “Master Learning Coaches” in our units – not simply training specialists who manage the schedule and records. One can imagine technology evolving to do some of this, as the Alexas and Siris of today become the AI tutors and mentors of the future. We must also remember that any system or process for learning in 2050 must fit the needs of multiple communities: Active Army, Army National Guard, and Army Reserve forces, as well as Army civilians.
Just as the delivery of instruction will change, so will the assessment of learning. Simulations and gaming should aim to provide an “Ender’s Game” experience, where reality and simulation are indistinguishable. Training systems should enable individuals to practice repeatedly, for as Vince Lombardi noted – “Practice does not make perfect. Perfect practice makes perfect.” Experiential learning will reinforce classroom instruction, on-line instruction, or short intensive courses/seminars through the linkage of “classroom seat time” and “field time” at the Combat Training Centers, Warfighter, or other exercises and experiences.
“Tell me and I forget; teach me and I may remember; involve me and I learn.” – Benjamin Franklin[XV]
Of course, much will have to change in terms of policies and the way we think about education, training, and learning. If one moves back in time the same number of years that we are looking to the future – it is the year 1984. How much has changed since then?
In some ways, technology has transformed the learning process – e.g., typewriters to laptops; card catalogues to instant on-line access to the world’s literature from anywhere; and classes at brick and mortar schools to Massive Open Online Courses (MOOCs) and blended and on-line learning with Blackboard. Yet, as Mark Twain reportedly noted – “history doesn’t repeat itself, but it rhymes” – and some things look the same as they did in 1984, with lectures and passive learning in large lecture halls, just as PowerPoint lectures persist today for some passively undergoing PME.
If “Learning in 2050” is to be truly transformative – we must think differently. We must move beyond the industrial age approach of mass education with its caste systems and allocation of seats. To be successful in the future, we must recognize that our efforts must center on the learner to provide immediate access to knowledge to learn in time to be of value.
Nick Marsella is a retired Army Colonel and is currently a Department of the Army civilian serving as the Devil’s Advocate/Red Team for Training and Doctrine Command. ___________________________________________________________________
[I] While the terms “education” and “training” are often used interchangeably, I will use the oft-quoted rule: training is about skills in order to do a job or perform a task, while education is broader, instilling general competencies and the ability to deal with the unexpected.
[II] The noted futurist Alvin Toffler is often quoted noting: “The illiterate of the 21st Century are not those who cannot read and write but those who cannot learn, unlearn, and relearn.”
[III] Sheftick, G. (2018, May 18). Army researchers look to neurostimulation to enhance, accelerate Soldier’s abilities. Retrieved from: https://www.army.mil/article/206197/army_researchers_looking_to_neurostimulation_to_enhance_accelerate_soldiers_abilities
[IV] This will become increasingly important as the useful shelf life of knowledge shortens. See Zao-Sanders, M. (2017). A 2×2 matrix to help you prioritize the skills to learn right now. Harvard Business Review. Retrieved from: https://hbr.org/2017/09/a-2×2-matrix-to-help-you-prioritize-the-skills-to-learn-right-now — so much to learn, so little time.
[V] Much has been written on AI and its implications. One of the most recent and interesting papers was recently released by the Center for New American Security in June 2018. See: Scharre, P. & Horowitz, M.C. (2018). Artificial Intelligence: What every policymaker needs to know. Retrieved from: https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-needs-to-know
For those wanting further details and potential insights see: Executive Office of the President, National Science and Technology Council, Committee on Technology Report, Preparing for the Future of Artificial Intelligence, October 2016.
[VI] Based on my anecdotal experiences, complicated systems, such as those found in command and control, have been fielded to units without sufficient training. Even when fielded with training, unless in combat, proficiency using the systems quickly lapses. See: Mission Command Digital Master Gunner, May 17, 2016, retrieved from https://www.army.mil/standto/archive_2016-05-17. See Freedberg, S. Jr. Artificial Stupidity: Fumbling the Handoff from AI to Human Control. Breaking Defense. Retrieved from: https://breakingdefense.com/2017/06/artificial-stupidity-fumbling-the-handoff/
[VII] McMaster, H.R. (LTG) (2015). Continuity and Change: The Army Operating Concept and Clear Thinking about Future War. Military Review.
[VIII] Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C. & Norman, M. K. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco, CA: Jossey-Bass, p. 1.
[IX] U.S. Army Training and Doctrine Command. TRADOC Pamphlet 525-8-2. The U.S. Army Learning Concept for Training and Education 2020-2040.
[XI] For example, should machine language be learned as a foreign language in lieu of a traditional foreign language (e.g., Spanish) – given the development of automated machine language translators (AKA = the Universal Translator)?
[XII] The point here is that we must clearly understand what we want the learner to learn, adequately define it, and ensure the learner knows what the outcomes are. For example, we continually espouse that we want leaders to be critical thinkers, but I challenge the reader to find the definitive definition and expected attributes of a critical thinker, given that ADRP 6-22, Army Leadership; FM 6-22, Army Leadership; and ADRP 5 and 6 each describe it differently. At a recent higher education conference of leaders, administrators, and selected faculty, one member succinctly highlighted the importance of students understanding expected learning outcomes: “Teaching students without providing them with learning outcomes is like giving them a 500-piece puzzle without an image of what they’re assembling.”
[XIII] Gardner, H. (2008). Five Minds for the Future. Boston, MA: Harvard Business Press. For application of Gardner’s premise see Marsella, N.R. (2017). Reframing the Human Dimension: Gardner’s “Five Minds for the Future.” Journal of Military Learning. Retrieved from: https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/April-2017-Edition/Reframing-the-Human-Dimension/
[XIV] Officer education may differ due to a variety of factors, but the normal progression for Professional Military Education includes: Basic Officer Leader Course (BOLC B, to include ROTC/USMA/OCS, which is BOLC A); Captains Career Course; Intermediate Level Education (ILE); and Senior Service College, as well as specialty training (e.g., language school), graduate school, and Joint schools. Extracted from the previous edition of DA Pam 600-3, Commissioned Officer Professional Development and Career Management, December 2014, p. 27, which is now obsolete; the graphic is provided as an example. For current policy, see DA PAM 600-3, dated 26 June 2017.
[XV] See https://blogs.darden.virginia.edu/brunerblog/