[Editor’s Note: Regular readers will note that one of our enduring themes is the Internet’s emergence as a central disruptive innovation. With the publication of proclaimed Mad Scientist P.W. Singer and co-author Emerson T. Brooking’s LikeWar – The Weaponization of Social Media, Mad Scientist Laboratory addresses what is arguably the most powerful manifestation of the internet — Social Media — and how it is inextricably linked to the future of warfare. Messrs. Singer and Brooking’s new book is essential reading if today’s Leaders (both in and out of uniform) are to understand, defend against, and ultimately wield the non-kinetic, yet violently manipulative effects of Social Media.]
“The modern internet is not just a network, but an ecosystem of 4 billion souls…. Those who can manipulate this swirling tide, steer its direction and flow, can…. accomplish astonishing evil. They can foment violence, stoke hate, sow falsehoods, incite wars, and even erode the pillars of democracy itself.”
As noted in The Operational Environment and the Changing Character of Future Warfare, Social Media and the Internet of Things have spawned a revolution that has connected “all aspects of human engagement where cognition, ideas, and perceptions, are almost instantaneously available.” While this connectivity has been a powerfully beneficial global change agent, it has also amplified human foibles and biases. Authors Singer and Brooking note that humans by nature are social creatures that tend to gravitate into like-minded groups. We “Like” and share things online that resonate with our own beliefs. We also tend to believe what resonates with us and our community of friends.
“Whether the cause is dangerous (support for a terrorist group), mundane (support for a political party), or inane (belief that the earth is flat), social media guarantees that you can find others who share your views and even be steered to them by the platforms’ own algorithms… As groups of like-minded people clump together, they grow to resemble fanatical tribes, trapped in echo chambers of their own design.”
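The steering dynamic described above can be made concrete with a toy sketch. This is an invented illustration, not any platform's actual algorithm: a recommender that simply ranks users by interest similarity will, by construction, steer each person toward those who already agree with them. All user names and interest vectors below are hypothetical.

```python
# Toy echo-chamber illustration: similarity-ranked recommendations
# surface the most like-minded users first.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two interest vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical users: each vector scores interest in three topics.
users = {
    "alice": [9, 1, 0],  # strongly topic A
    "bob":   [8, 2, 1],  # strongly topic A
    "carol": [0, 9, 1],  # strongly topic B
    "dave":  [1, 1, 9],  # strongly topic C
}

def recommend(name, k=1):
    """Suggest the k most similar other users -- i.e., the most like-minded."""
    me = users[name]
    ranked = sorted(((cosine(me, v), other)
                     for other, v in users.items() if other != name),
                    reverse=True)
    return [other for _, other in ranked[:k]]

print(recommend("alice"))  # ['bob'] -- alice is steered toward agreement
```

Each recommendation reinforces the cluster: the more a user engages with similar users, the more similar their interest vector becomes, and the tighter the echo chamber grows.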
Weaponization of Information
The advent of Social Media less than 20 years ago has changed how we wage war.
“Attacking an adversary’s most important center of gravity — the spirit of its people — no longer requires massive bombing runs or reams of propaganda. All it takes is a smartphone and a few idle seconds. And anyone can do it.”
Nation states and non-state actors alike are leveraging social media to manipulate like-minded populations’ cognitive biases to influence the dynamics of conflict. This continuous on-line fight for your mind represents “not a single information war but thousands and potentially millions of them.”
LikeWar provides a host of examples describing how contemporary belligerents are weaponizing Social Media to augment their operations in the physical domain. Regarding the battle to defeat ISIS and re-take Mosul, authors Singer and Brooking note that:
“Social media had changed not just the message, but the dynamics of conflict. How information was being accessed, manipulated, and spread had taken on new power. Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle — or eliminate the need for battle entirely — what, exactly, could be considered ‘war’ at all?”
Even American gang members are entering the fray as super-empowered individuals, leveraging social media to instigate killings via “Facebook drilling” in Chicago or “wallbanging” in Los Angeles.
And it is only “a handful of Silicon Valley engineers,” along with their fellow technocrats in Beijing, St. Petersburg, and a few other global hubs of twenty-first century innovation, who are forging and then unleashing the code that is democratizing this virtual warfare.
Artificial Intelligence (AI)-Enabled Information Operations
Seeing is believing, right? Not anymore! Previously clumsy efforts to photoshop images, fabricate grainy videos, and produce crude CGI have given way to sophisticated Deepfakes, which use AI algorithms to create nearly undetectable fake images, videos, and audio tracks that then go viral on-line to dupe, deceive, and manipulate. This year, FakeApp was launched as free software, enabling anyone with an artificial neural network and a graphics processor to create and share bogus videos via Social Media. Each Deepfake video that:
“… you watch, like, or share represents a tiny ripple on the information battlefield, privileging one side at the expense of others. Your online attention and actions are thus both targets and ammunition in an unending series of skirmishes.”
Just as AI is facilitating these distortions in reality, the race is on to harness AI to detect and delete these fakes and prevent “the end of truth.”
If you enjoyed this post:
– Listen to the accompanying playlist composed by P.W. Singer while reading LikeWar.
[Editor’s Note: The United States Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Bio Convergence and Soldier 2050 Conference with SRI International at their Menlo Park, CA, campus on 8-9 March 2018, where participants discussed the advent of new biotechnologies and the associated benefits, vulnerabilities, and ethics associated with Soldier enhancement for the Army of the Future. The following post is an excerpt from this conference’s final report.]
Advances in synthetic biology likely will enhance future Soldier performance – speed, strength, endurance, and resilience – but will bring with them vulnerabilities, such as genomic targeting, that can be exploited by an adversary and/or potentially harm the individual undergoing the enhancement.
Emerging synthetic biology tools – e.g., CRISPR, TALEN, and ZFN – present an opportunity to engineer Soldiers’ DNA and enhance their abilities. Bioengineering is becoming easier and cheaper as a bevy of developments are reducing biotechnology transaction costs in gene reading, writing, and editing. Due to the ever-increasing speed and lethality of the future battlefield, combatants will need cognitive and physical enhancement to survive and thrive.
Cognitive enhancement could make Soldiers more lethal, more decisive, and perhaps more resilient. Using neurofeedback, a process that allows users to see their own brain activity in real-time, one can identify ideal brain states and use them to enhance an individual’s mental performance. Through the mapping and presentation of identified expert brain states, novices can rapidly improve their acuity after just a few training sessions. Further, studies are under way exploring the possibility of directly emulating those expert brain states with non-invasive EEG caps that could improve performance almost immediately. Dr. Amy Kruse, the Chief Scientific Officer at the Platypus Institute, referred to this phenomenon as “sitting on a gold mine of brains.”
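To give a rough sense of the signal processing underneath neurofeedback, the sketch below reduces a raw EEG window to band power, e.g., alpha (8-12 Hz), the kind of quantity such systems display in real time. This is an illustrative toy with assumed parameters (sampling rate, synthetic signal), not any lab's actual pipeline.

```python
# Illustrative neurofeedback-style feature extraction: compute the power
# in a frequency band from one second of (synthetic) EEG samples.
import cmath
import math

FS = 128  # assumed sampling rate in Hz

def band_power(samples, lo_hz, hi_hz):
    """Naive DFT band power over a 1-second window of samples."""
    n = len(samples)
    power = 0.0
    for k in range(lo_hz, hi_hz + 1):  # with n == FS, DFT bin k is k Hz
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        power += abs(coeff) ** 2 / n
    return power

# Synthetic 1-second signal: a strong 10 Hz (alpha) rhythm plus a weak 25 Hz one.
signal = [math.sin(2 * math.pi * 10 * t / FS)
          + 0.2 * math.sin(2 * math.pi * 25 * t / FS)
          for t in range(FS)]

alpha = band_power(signal, 8, 12)   # dominated by the 10 Hz component
beta = band_power(signal, 18, 30)   # much smaller
print(alpha > 10 * beta)  # True
```

A real system would feed such a band-power number back to the trainee continuously, so they can learn to push it toward a target "expert" state; production implementations use FFTs and artifact rejection rather than this naive DFT.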
There is also the potential to change and improve Soldiers’ physical attributes. Scientists can develop drugs, specific dietary plans, and potentially use genetic editing to improve speed, strength, agility, and endurance.
In order to fully leverage the capability of human performance enhancement, Andrew Herr, CEO of Helicase and an Adjunct Fellow at CNAS, suggested that human performance R&D be moved out of the medical field and become its own research area due to its differing objectives and the convergence between varying technologies.
Soldiers, Airmen, Marines, and Sailors are already trying to enhance themselves with commercial products – often containing unknown or unsafe ingredients – so it is incumbent on the U.S. military to, at the very least, help those who want to improve.
However, a host of new vulnerabilities, at the genetic level, accompany this revolutionary leap in human evolution. If we can map the human genome and more thoroughly scan and understand the brain, adversaries can target genomes and brains in the same ways. Soldiers could become incredibly vulnerable at the genomic level, forcing the Army not only to protect Soldiers using body armor and armored vehicles, but also to protect their identities, genomes, and physiologies.
Adversaries will exploit all biological enhancements to gain competitive advantage over U.S. forces. Targeted genome editing technology such as CRISPR will enable adversarial threats to employ super-empowered Soldiers on the battlefield and target specific populations with bioweapons. U.S. adversaries may use technologies recklessly to achieve short term gains with no consideration of long range effects.
There are numerous ethical questions that come with the enhancement of Soldiers, such as the moral acceptability of the Army making permanent enhancements to Soldiers, the responsibility for returning transitioning Soldiers to a “baseline human,” and how a “baseline human” is legally defined.
By altering, enhancing, and augmenting the biology of the human Soldier, the United States Army will potentially enter into uncharted ethical territory. Instead of issuing items to Soldiers to complement their physical and cognitive assets, by 2050, the U.S. Army may have the will and the means to issue them increased biological abilities in those areas. The future implications and the limits or thresholds for enhancement have not yet been considered. The military is already willing to correct the vision of certain members – laser eye surgery, for example – a practice that could be accurately referred to as human enhancement, so discretely defining where the threshold lies will be important. It is already known that other countries, and possible adversaries, are willing to cross the line where we are not. Russia, most recently, was banned from competition in the 2018 Winter Olympics for widespread performance-enhancing drug violations that were believed to be supported by the Russian Government. Those drugs violate the spirit of competition in the Olympics, but no such spirit exists in warfare.
Another consideration is whether or not the Soldier enhancements are permanent. By enhancing Soldiers’ faculties, the Army is, in fact, enhancing their lethality or their ability to defeat the enemy. What happens with these enhancements—whether the Army can or should remove them— when a Soldier leaves the Army is an open question. As stated previously, the Army is willing and able to improve eyesight, but does not revert that eyesight back to its original state after the individual has separated. Some possible moral questions surrounding Soldier enhancement include:
• If the Army were to increase a Soldier’s stamina, visual acuity, resistance to disease, and pain tolerance, making them a more lethal warfighter, is it incumbent upon the Army to remove those enhancements?
• If the Soldier later used those enhancements in civilian life for nefarious purposes, would the Army be responsible?
Answers to these legal questions are beyond the scope of this paper, but they should be considered now, before these new technologies become widespread.
If the Army decides to reverse certain Soldier enhancements, it likely will need to determine the definition of a “baseline human.” This would establish norms for which features, traits, and abilities can be permanently enhanced and which must be removed before leaving service. This would undoubtedly involve both legal and moral challenges.
The complete Mad Scientist Bio Convergence and Soldier 2050 Final Report can be read here.
To learn more about the ramifications of Soldier enhancement, please go to:
– Dr. Amy Kruse’s Human 2.0 podcast, hosted by our colleagues at Modern War Institute.
– The Ethics and the Future of War panel discussion, facilitated by LTG Jim Dubik (USA-Ret.) from Day 2 (26 July 2017) of the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University.
Notes:
• Ahmad, Zarah and Stephanie Larson, “The DNA Utility in Military Environments,” slide 5, presented at the Mad Scientist Bio Convergence and the Soldier 2050 Conference, 8 March 2018.
• Kruse, Amy, “Human 2.0: Upgrading Human Performance,” slide 12, presented at the Mad Scientist Bio Convergence and the Soldier 2050 Conference, 8 March 2018.
• https://www.frontiersin.org/articles/10.3389/fnhum.2016.00034/full
• https://www.technologyreview.com/the-download/610034/china-is-already-gene-editing-a-lot-of-humans/
• https://www.c4isrnet.com/unmanned/2018/05/07/russia-confirms-its-armed-robot-tank-was-in-syria/
• https://www.washingtonpost.com/sports/russia-banned-from-2018-olympics-following-doping-allegations/2017/12/05/9ab49790-d9d4-11e7-b859-fb0995360725_story.html?noredirect=on&utm_term=.d12db68f42d1
[Editor’s Note: Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]
“We thought that we had the answers, it was the questions we had wrong” – Bono, U2
As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is just that: a utopian ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi, Boston University, recently described algorithms as not being capable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
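A minimal sketch shows why data is the determining factor. The "learner" below does nothing but tally its training examples, yet it faithfully converts a skewed history into a skewed decision rule. All features, labels, and counts are invented for illustration; real models are far more complex, but the mechanism is the same.

```python
# Toy demonstration: a trivial learner trained on skewed data
# reproduces the skew as its decision rule.
from collections import Counter

def train(labeled_examples):
    """'Learn' the majority label for each feature value -- nothing more."""
    counts = {}
    for feature, label in labeled_examples:
        counts.setdefault(feature, Counter())[label] += 1
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Hypothetical skewed history: group "x" was mostly labeled "deny" in the past.
history = ([("x", "deny")] * 8 + [("x", "approve")] * 2
           + [("y", "approve")] * 7 + [("y", "deny")] * 3)

model = train(history)
print(model)  # {'x': 'deny', 'y': 'approve'} -- the historical bias is now "learned"
```

No programmer wrote a biased rule here; the rule emerged entirely from the distribution of the training data, which is exactly why auditing datasets matters as much as auditing code.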
Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).
“It is difficult to make predictions, particularly about the future.” This quote has been attributed to anyone ranging from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.
The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at: firstname.lastname@example.org.
1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that discriminates against any specific gender, ethnicity, race, or even age?
2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?
3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?
4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?
For additional information bearing on this important discussion, please see the following:
[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!
Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!
In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six-month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger for the latter half of FY18! In recognition of his achievement, Sam will receive much-coveted Mad Scientist swag.
While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped, developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation. Russia is learning how to integrate robotic lethal ground combat systems….
Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]
Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.
Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”
But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine if they can correspond to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create the demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs. The MOD is also running conferences such as the annual “Robotization of the Armed Forces” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.
This brings us to one of the more interesting developments in Russian UGVs. Then Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.
Just as importantly, it was designed to operate at a distance of up to three kilometers (about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled, and it was definitely designed to survive battlefield realities….
However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:
• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.
• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.
• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.
• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.
• The OCH-4 optical system did not allow for adequate detection of an adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.
• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which largely defeated its purpose as a combat vehicle.
• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.
On the one hand, these many failures are a sign that this much-discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it gets put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.
However, the MOD has been testing this Uran-9 for a while now, showing videos of this machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria had to do with the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly-released testing video probably foretold some of the Syrian glitches – in it, the Uran-9 is shown firing its machine guns while moving, but its cannon was fired only when the vehicle was stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles around, not the kind of complex terrain and dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions. Unless, of course, the Syrian tests lead the MOD to effectively limit its use in future combat.
On the other hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological aspects that are ubiquitous worldwide at this point in the global development of similar unmanned systems:
• Low level of current UGV autonomy;
• Low level of automation of command and control processes of UGV management, including repairs and maintenance;
• Low communication range, and;
• Problems associated with “friend or foe” target identification.
Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions, which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:
• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.
• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).
• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.
• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.
The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:
• Creating secure communication channels;
• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;
• Developing miniaturized and effective ISR components;
• Integrating automated command and control systems, and;
• Better optics, electronics and data processing systems.
According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operate autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.
Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a sign for developers of other Russian unmanned ground vehicles – like Soratnik, Vihr, and Nerehta — since it displays the full range of deficiencies that take place outside of well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment — this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during Uran-9 tests exposed a serious combat weakness.
It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, if it can perform its mission autonomously in the rapidly-changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards achieving such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in one-time attack mode against fortified adversary positions or firing points track closely with the ways that Western military analysts are thinking such weapons could be used in combat.
The Uran-9 is still a test bed, and much has to take place before it can be successfully integrated into the current Russian concept of operations. We can expect more eye-opening “lessons learned” from its (and other UGVs’) potential deployment in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.
To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.
Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference. The views expressed here are his own.
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]
No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.
Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]
Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”
Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.
The nature of something is its basic makeup; it is, at core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl von Clausewitz, what his views are on the matter.
He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.
The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past, and remains unfulfilled. The chaos of war, its fog, friction, and chance will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to age-old practices of denial, deception, and camouflage.
The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.
Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.
Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.
Not to mention the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is what tools they use to execute that mission.
To learn more about the changing character of warfare:
– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.
– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.
LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.
Note: The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916. Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War. (Source: Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)
[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6. [II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, Machine-Waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/. [III] Ibid. [IV] Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85. [V] Ibid, 87. [VI] Ibid. [VII] Ibid, 86. [VIII] John Allen and Amir Husain, “On Hyper-War,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.
[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]
Preface: Based on my experience in cybersecurity, and my migration into the broader cyber field, there have always been those exceptional individuals with an ability that cannot be replicated: to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals, because someone must set the rules, the boundaries, and point out the trajectory for Artificial Intelligence at initiation.
As an industrialized society, we tend to see technology, and the information that feeds it, as the weapons – and we ignore the few humans who have a large-scale direct impact. Even if a human mind is identified as a weapon, how do you classify it? Can we protect these high-ability individuals who, in the digital world, are weapons in themselves, not tools but compilers of capability? Or are we still focused on the tools? Why do we see only weapons of steel and electronics, and not the weaponized mind? I firmly believe that we underestimate the importance of Applicable Intelligence: the ability to play the cyber engagement in the optimal order. Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034, close enough to be realistic and far enough out for things to happen, at a time when our adversaries are betting that we rely more on a few minds than we are willing to accept.
Post: In a not-too-distant future, on the 20th of August 2034, a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives: watching a 3-D printer make a protein sandwich at a breakfast restaurant; stepping out from the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone, when the geopolitical temperature increases but we are not yet at war, our adversary acts quickly and expedites a limited number of targeted killings within the United States, of persons who are unknown to mass media and the general public and have only one thing in common – Applicable Intelligence (AI).
The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available; anyone can download them from the Internet and use them. The weaponization of the tools occurs when they are used by someone who understands how to play them in the optimal order. These minds are different because they see an opportunity to exploit in a digital fog of war where others don’t or can’t. They address problems unburdened by traditional thinking, in new and innovative ways, maximizing the dual-purpose nature of digital tools to create tangible cyber effects.
It is Applicable Intelligence (AI) that creates the procedures, applies the tools, and combines simple digital software into digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.
Sadly we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and at the dawn of conflict killed off the weaponized minds, without discretion, and with no concern for international law or morality.
These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.
All of our efforts focused instead on what they delivered: the applications and the technology, which were hidden in secret vaults and discussed only in sensitive compartmented information facilities. We classified to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, we assigned no value to the most critical component, the militarized intellect, because it was human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is an input to the production of the technical machinery. The marveled-at technical machinery is the only thing we care about today, in 2018, and, as it turned out, in 2034 as well.
We are stuck in how we think, and we are unable to see it coming, but our adversaries see it. At a systemic level, we are unable to see humans as the weapon itself, perhaps because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed: the armory of the War of 1812, the stockpile of 1943, the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics. In 2034 we failed to see weapons made of corn, steak, and an added combative intellect.
General Nakasone stated in 2017, “Our best ones [coders] are 50 or 100 times better than their peers,” and continued “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” In reality, the success of cyber and cyber operations is highly dependent not on the tools or toolsets but instead upon the super-empowered individual that General Nakasone calls “the 50-x coder.”
There were clear signals we could have noticed before General Nakasone pointed it out in 2017. The United States’ Manhattan Project during World War II had 125,000 workers on the payroll at its peak, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in by our path dependency as an engineering society that hails the technology and forgets the importance of the humans behind it.
America’s endless love of technical innovations and advanced machinery is reflected in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies and ran through the transcontinental railroad, the Panama Canal, the manufacturing era, and the moon landing, all the way to autonomous systems, drones, and robots. In this default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem.
The same mindset sees humans merely as an input to technology, interchangeable and replaceable. In 2034, the era of digital conflicts and the war between algorithms, with engagements occurring at machine speed and no time for leadership or human interaction, it is the intellects that design the fight and understand how to play it. We didn’t see it.
In 2034, in fewer than twenty bodies piled up after targeted killings, resides the Cyber Pearl Harbor. It was not imploding critical infrastructure, a tsunami of cyber attacks, or hackers flooding our financial systems, but traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don’t understand, unable to play it in the right order, with limited intellectual torque to see through the fog of war generated by an exploding kaleidoscope of nodes and digital engagements.
Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was earlier a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and earned a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP, ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in the Strategic Studies Quarterly, Joint Forces Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.
[Editor’s Note: Mad Scientist Laboratory is pleased to present (somewhat belatedly) our July edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
Mr. Nicholson summarizes a recent presentation by one of our favorite Mad Scientists, P.W. Singer from New America. Mr. Singer warns that as more and more items are linked to the Internet of Things, the opportunities for nations and societies (as well as non-state actors and super-empowered individuals) to attack and be attacked become much broader. He states that “all of this technology does not mean that we will see humans eliminated from war anytime soon. Rather, just like the steam engine and the plane and the computer, we will see changes in the human skills that are most needed and less needed. This movement of people skills can and should change everything from our recruiting and training to our doctrine and organizational design.” This movement of people skills was a key aspect of last week’s Mad Scientist Learning in 2050 Conference, conducted at Georgetown University on 8-9 August. The demands on Leaders, and the skills required to compete in the changing character of war, will likely be fundamentally different. Mr. Singer challenges us to choose real change, not change just enough to fail. His example of the USS Arizona, with its two catapult-launched float planes, demonstrates a bureaucracy’s incremental approach in the face of revolutionary change. That change – modern bombers – made this once-great warship a monument to “a date which will live in infamy.”
David Ignatius, famed spy novelist and Washington Post journalist, tackles not only espionage but also a multitude of disruptive technologies in his new thriller, The Quantum Spy. The book revolves around a race toward leap-ahead developments in quantum computing between the United States and China; a looming subplot is the cat-and-mouse game of counterintelligence, infiltration, and insertion of moles between the Central Intelligence Agency and the Chinese Ministry of State Security. CIA case officer and Army veteran Harris Chang struggles with his Chinese heritage, his devotion to America, and the sometimes unscrupulous role of his organization in fighting to protect America’s secrets. The book is replete with detailed and accurate descriptions of American innovation efforts. The depiction of the infiltration of American college campuses and research institutions by foreign students sponsored, and often directed, by foreign adversaries is alarming and timely, given recent real-world events such as a Chinese student taking groundbreaking work on metamaterials at Duke University back to his home country. The book raises important questions about the balance between open, collaborative innovation (which opens up a number of vulnerabilities) and more restrictive, government-funded research (which may be more secure), both of which are critical in the current Era of Accelerated Human Progress (now through 2035) as described in The Operational Environment and the Changing Character of Future Warfare. As with Agents of Innocence and Body of Lies, David Ignatius has created a work that not only features a fantastic story but also carries many government, military, and intelligence implications.
3. “War and the Human Brain” podcast with Dr. James Giordano and Mr. John Amble, Modern War Institute, 24 July 2018 (originally aired in 2017) – review by Marie Murphy.
Modern War Institute’s John Amble spoke with Dr. James Giordano about his research in neuroscience and using “the brain as a weapon” following his presentation at the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference, 25-26 July 2017, at Georgetown University, Washington, D.C. After a brief historical overview of neuroscience’s military applications, Dr. Giordano explains how recent research on electric and magnetic trans-cranial stimulation and implantable electrodes has opened up both possibilities and controversies. Soldiers of the future could obtain modifications that improve memory, cognition, and vigilance while decreasing fatigue. Conversely, there is an ethical dilemma when it comes to discontinuing, removing, or deactivating these improvements; there is concern that the Soldier could feel disabled or “dis-enabled” afterwards. The discussion transitions to the implications of “drugs, bugs, toxins, and tools,” all of which can have some kind of effect on neurological activity, and all of which can be weaponized. These capabilities, while not considered weapons of mass destruction, are categorized as weapons of mass disruption. These tools and technologies pose a real, rising threat in the future Operational Environment; are deployable by nation-states, non-state actors, and super-empowered individuals; and can be specifically targeted for optimal impact. Read more about these capabilities in the Mad Scientist Bio Convergence and Soldier 2050 Conference Final Report.
On 17 July 2018, the UK’s Nuffield Council on Bioethics issued a press release in conjunction with their publication of Genome editing and human reproduction. The Council, established in 1991 to address ethical issues raised by new developments in biology and medicine, “concluded that editing the DNA of a human embryo, sperm, or egg to influence the characteristics of a future person (‘heritable genome editing’) could be morally permissible.” Futurism interpreted this as meaning we are “one step closer to designer babies,” and concluded it “is a promising sign for anyone eager for the day gene-editing lets them create the offspring of their dreams.” That said, the Council recommends two overarching principles governing the ethical use of heritable genome editing: “they must be intended to secure, and be consistent with, the welfare of the future person; and they should not increase disadvantage, discrimination or division in society.” The Council also noted that current British law precludes the genomic editing of embryos that are to be placed in a womb. So, no Brave New World in our future, right?
Not necessarily.… As Mr. Hank Greely, Professor of Law, Stanford University, pointed out this spring at our Mad Scientist Bio Convergence and Soldier 2050 Conference, we are on the cusp of being able to use skin cells to generate lines of viable embryos, which may then be subjected to Preimplantation Genetic Diagnosis prior to selection and implantation to preclude a host of genetic diseases and ensure healthier babies (who could possibly object to that?). With the advent of genetic editing and artificial wombs, we will be able to manipulate the genomic coding of any given embryo (initially to address genetic disease, but eventually to enhance capabilities), implant it, and then “decant” the resulting progeny. Sound farfetched? At the same conference, Ms. Elsa Kania, CNAS, noted that the PRC is currently gene editing human embryos and conducting human clinical trials. BGI (the Beijing Genomics Institute) is soliciting DNA from high-IQ individuals in an attempt to understand the genomic basis for intelligence. With the advent of genetically enhanced humans, it is conceivable that we could face adversaries in the Deep Future Operational Environment with warrior-caste soldiers, each modified genetically as an embryo for greater strength, endurance, and combat performance in complex and extreme environments (e.g., high / low temperatures, low atmospheric pressures) and with optimized Brain Computer Interfaces. Previous regimes sought to populate their forces with “Supermen” — genomic editing may provide future regimes with the post-industrial means of accomplishing this objective “… by the lights of perverted science.”
Red Meat Games has released its fifth virtual reality (VR) project, Bring to Light. The developers designed this VR horror game to push players to their terror limits with the help of a biometric sensor. Bring to Light is currently the first VR game to use biometric feedback to affect gameplay; it calls to mind the Black Mirror episode “Playtest,” a near-future cautionary tale of the risks associated with combining VR and Augmented Reality (AR) with gaming. Cautionary tales aside, AR and VR will only become more integrated and player-involved. As discussed in last month’s edition of “The Queue,” VR also has the potential to accelerate learning and enhance retention when used to train our Soldiers and Leaders.
Stanford University is working on a technology known as “Shapeshift” that presents users with a haptic “touch” interface, providing a bridge between VR and the physical world. Shapeshift is a high-resolution, compact, modular shape display consisting of 288 actuated pins (4.85mm × 4.85mm, 2.8mm inter-pin spacing) formed by six 2×24 pin modules. It is reminiscent of the pin art toys played with by children and adults alike for years. The interface will allow users to truly feel the objects they see and interact with in VR, bringing about an entirely new level of immersion into constructed virtual or augmented worlds. The implications for accurate and intuitive modeling, design, simulation, and training are astounding. In the future, such interfaces could be utilized in vehicles, on or with weapons, and integrated into classrooms and other training venues.
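The pin geometry above implies a straightforward rendering step: sample a virtual surface at each pin’s center and clamp the result to the actuator’s travel. The sketch below illustrates the idea; the 12×24 tiling of the six 2×24 modules, the 50mm travel range, and the function names are illustrative assumptions, not published Shapeshift specifications.

```python
# Sketch: quantizing a virtual height field into pin extensions for a
# Shapeshift-style display (six 2x24 modules = 288 pins).
# The 12x24 tiling and 0-50 mm travel are assumptions for illustration.

PIN_PITCH_MM = 4.85 + 2.8    # pin width plus inter-pin spacing
ROWS, COLS = 12, 24          # assumed arrangement of the six 2x24 modules
MAX_TRAVEL_MM = 50.0         # assumed actuator travel range

def render_surface(height_fn):
    """Sample a virtual height field (in mm) at each pin's center
    and clamp each value to the actuator's travel range."""
    frame = []
    for r in range(ROWS):
        row = []
        for c in range(COLS):
            x, y = c * PIN_PITCH_MM, r * PIN_PITCH_MM
            row.append(max(0.0, min(MAX_TRAVEL_MM, height_fn(x, y))))
        frame.append(row)
    return frame

# Example: render a gentle ramp rising across the display.
frame = render_surface(lambda x, y: x * 0.2)
```

In practice the display controller would stream such frames at the VR engine’s refresh rate, with each frame derived from the depth of the virtual object under the user’s hand.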
Engineers from Tufts University have re-designed the bandage with the intent of taking it from a passive treatment to an active treatment for chronic wounds. These skin wounds can be from burns, diabetes, or other medical conditions that overwhelm the normal regenerative capabilities of the skin. The bandage monitors the pH and temperature and can administer drugs when either goes out of normal range. While the bandage treats only certain chronic skin conditions at present, it is easy to see future implications of this technology, especially in Soldiers on the battlefield. Persistent or serious wounds can be monitored and treated in real-time without needing to take the Soldier out of the fight or waiting for medical advice and treatment from a professional. This could reduce cost and recovery time. What is the next step beyond smart bandages? Will it be feasible to have general health sensors and a variety of treatments embedded on the Soldier in the future?
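The bandage’s closed-loop behavior described above, monitoring pH and temperature and dosing when either leaves its normal band, amounts to a simple threshold check. The sketch below illustrates that logic; the threshold values and function name are illustrative assumptions, not the Tufts design.

```python
# Sketch of a smart bandage's closed-loop logic: monitor pH and
# temperature, command a drug release when either reading leaves its
# normal band. Threshold values are illustrative assumptions.

PH_RANGE = (6.5, 8.5)        # assumed healthy wound-pH band
TEMP_RANGE_C = (33.0, 38.0)  # assumed normal skin-temperature band

def check_and_dose(ph, temp_c):
    """Return (dose_now, out_of_range_readings) for one sensor sample."""
    alerts = []
    if not (PH_RANGE[0] <= ph <= PH_RANGE[1]):
        alerts.append("pH")
    if not (TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]):
        alerts.append("temperature")
    return (len(alerts) > 0, alerts)

# An acidic wound reading triggers a dose; a nominal reading does not.
dose, readings = check_and_dose(ph=5.9, temp_c=36.5)
```

A fielded device would of course add hysteresis and dose-rate limits so that a noisy sensor could not trigger repeated releases.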
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: email@example.com — we may select it for inclusion in our next edition of “The Queue”!
Mad Scientist Laboratory is pleased to announce that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies this week (Wednesday and Thursday, 8-9 August 2018) in Washington, DC.
Future learning techniques and technologies are critical to the Army’s operations in the 21st century against adversaries in rapidly evolving battlespaces. The ability to effectively respond to a changing Operational Environment (OE) with fleeting windows of opportunity is paramount, and Leaders must act quickly to adjust to different OEs and more advanced and lethal technologies. Learning technologies must enable Soldiers to learn, think, and adapt using innovative synthetic environments to accelerate learning and attain expertise more quickly. Looking to 2050, learning enablers will become far more mobile and on-demand.
Looking at Learning in 2050, topics of interest include, but are not limited to: Virtual, Augmented, and Mixed Realities (VR/AR/MR); interactive, autonomous, accelerated, and augmented learning technologies; gamification; skills needed for Soldiers and Leaders in 2050; synthetic training environments; virtual mentors; and intelligent artificial tutors. Advanced learning capabilities present the opportunity for Soldiers and Leaders to prepare for operations and operate in multiple domains while overcoming current cognitive load limitations.
Plan to join us virtually at the conference as leading scientists, innovators, and scholars from academia, industry, and government gather to discuss:
1) How will emerging technologies improve learning or augment intelligence in professional military education, at home station, while deployed, and on the battlefield?
2) How can the Army accelerate learning to improve Soldier and unit agility in rapidly changing OEs?
3) What new skills will Soldiers and Leaders require to fight and win in 2050?
– Read our Learning in 2050 Call for Ideas finalists’ submissions here, graciously hosted by our colleagues at Small Wars Journal.
– Starting Tuesday, 7 August 2018, see the conference agenda’s list of presentations and the associated world-class speakers’ biographies here.
Join us at the conference online here via live-streaming audio and video, beginning at 0840 EDT on Wednesday, 08 Aug 2018; submit your questions to each of the presenters via the moderated interactive chat room; and tag your comments @TRADOC on Twitter with #Learningin2050.
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by returning guest blogger and proclaimed Mad Scientist Mr. Howard R. Simkin, hypothesizing the activities of an Operational Detachment Alpha (ODA) deployed on a security assistance operation in the 2050 timeframe. Mr. Simkin addresses how advanced learning capabilities can overcome what were once cognitive load limitations. This is one of the themes we will explore at next week’s Mad Scientist Learning in 2050 Conference; more information on this conference can be found at the bottom of this post.]
This is the ODA’s third deployment to the country, although it is Captain Clark Weston’s first deployment as a team leader. The rest of his ODA have long experience in the region and country. They all have the 2050 standard milspec augmentation of every Special Operations (SO) Operator: corneal and auditory implants, subdural brain-computer interfaces, and medical nano-enhancement.
Unlike earlier generations of SO Operators aided by advanced technology, they can see into the near-infrared, understand sixty spoken languages, acquire new skill sets rapidly, interface directly with computers and view that information in a heads-up display without a device, and survive any injury short of dismemberment. However, they continue to rely on their cultural and human skills to provide those critical puzzle pieces from the human domain which technology and data science alone cannot.
No matter what technologies are at play, the human element will still be paramount. As the noted futurist and theoretical physicist Michio Kaku observed in his discussions of the ‘Cave Man Principle,’ “whenever there is a conflict between modern technology and the desires of our primitive ancestors, these primitive desires win each time.”[I]
The sound of an onrushing thunderstorm briefly distracted CPT[II] Weston from the report he was compiling. His eyes scanned the equipment hung on wooden pegs protruding from the white plastered walls or scattered on the small wooden desk adorned by a single switch-operated lamp. He couldn’t help smiling. The wooden pegs, plastered walls, and primitive lamp were a good metaphor for the region. His apartment back home sported the latest in technology: adaptive video-capable walls, a customized AI virtual assistant, and lighting and HVAC[III] that operated without human intervention. Here, it was back to basics.
His concentration broken, he stood up and stretched. Dark of hair and eyes, of medium height and slender build, he could easily pass for a native of the region. As for fluency in the local language, it had been baked into his neural circuitry through rigorous training, cognitive enhancements, and experience. A student of history, Weston had been surprised during his attendance at the SOF[IV] Captains Career Course when he read articles and papers that had heralded the death of language training.
He wondered. Didn’t the people who wrote those articles pause to consider that no technology works all the time? Whether through adversary action or simple mean time between failures, a glitch in a technology-dependent language capability could be at best embarrassing and at worst catastrophic. Didn’t they realize that learning a new language alters the learner’s neural networks, allowing a nuanced understanding of a culture that software had not been able to achieve? Besides, around 65 percent of human communication is non-verbal, he reasoned. Language occurs in a shifting cultural context, something even the best AIs still couldn’t always tackle.
He paced around the room, reflecting on the past few months. Things had definitely taken a turn for the better. With very few exceptions, the Joint security assistance efforts he was aware of were going well. He was very proud of what his ODA had accomplished, training the Ministry of the Interior’s capital region paramilitary force (CRPF) to what Minerva[V] had deemed a sufficient level of competence in a wide range of tactical skills.
More importantly, as his Team Sergeant Abdel Jamaal had observed, “We got them to believe in themselves as protectors and to stop acting like bullies.” This had led to the development of an increasing number of information sources which in turn had led to the arrest of a number of senior narco-terrorists. He and Sergeant Jamaal had advised and assisted in those arrests in a virtual mode. To the local population, it looked like the CRPF was doing all of the work.
The team medical/civil affairs specialist, Sergeant First Class Belinda Tompkins, and the team cyber/additive manufacturing authority, Sergeant DeWayne Jones, had achieved quite a lot on their own. After consulting with the Nimble Griffin[VI] team, they had employed their expertise to upgrade the antiquated in-country hospital 3D printers to produce the latest gene-editing drugs and fight the diseases still endemic to the region. They had done this in the background, having the CRPF collect the machines quietly and then return them to the hospitals with great fanfare. The resulting media coverage was a public relations bonanza. The only US presence was virtual and invisible to the media or public.
A loud peal of thunder shook Weston from his thoughts. The lights flickered in his room, then steadied up. He sat back down at the table to finish his report. All in all, things were going very well.
[Note that any resemblance to any current events or persons, living or dead, is purely coincidental.]
If you enjoyed this post, please read Mr. Simkin’s article Technological Fluency 2035-2050, submitted in response to our Learning in 2050 Call for Ideas and hosted by our colleagues at Small Wars Journal.
Other Learning in 2050 Call for Ideas submissions include the following:
Please also plan on joining us virtually at the Mad Scientist Learning in 2050 Conference. This event will be live streamed on both days (08-09 August 2018). You can watch and interact with all of the speakers at the conference watch page or tag @TRADOC on Twitter with #Learningin2050. Note that the live streaming event is best viewed via a commercial internet connection (i.e., non-NIPRNet).
Howard R. Simkin is a Senior Concept Developer in the DCS, G-9 Concepts, Experimentation and Analysis Directorate, U.S. Army Special Operations Command. He has over 40 years of combined military, law enforcement, defense contractor, and government experience. He is a retired Special Forces officer with a wide variety of special operations experience.
________________________________________________________ [I] Kaku, M. (2011). Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100. New York: Random House (Kindle Edition), 13. [II] Captain. [III] Heating, ventilation, and air conditioning. [IV] Special Operations Forces. [V] Department of Defense AI virtual assistant. [VI] A Joint Interagency Cyber Task Force.
[Editor’s Note: The U.S. Army Training and Doctrine Command (TRADOC) G-2 is co-hosting the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies on 8-9 August 2018 in Washington, DC. In advance of this conference, Mad Scientist Laboratory is pleased to present today’s post addressing what is necessary to truly transform Learning in 2050 by returning guest blogger Mr. Nick Marsella. Read Mr. Marsella’s previous two posts addressing Futures Work at Part I and Part II]
Only a handful of years ago, a conference on the topic of learning in 2050 would spur discussions on needed changes in the way we formally educate and train people to live successful lives and be productive citizens.[I] Advocates in K-12 would probably argue for increasing investment in schools, better technology, and increased STEM education. Higher educators would raise many of the same concerns, pointing to the value of “the academy” and its universities as integral to the nation’s economic, security, and social well-being by preparing the nation’s future leaders, innovators, and scientists.
Yet, times have changed. “Learning in 2050” could easily address how education and training must meet the immediate learning needs of the individual and support “lifelong learning” in a rapidly changing and competitive world.[II] The conference could also address how new discoveries in learning and the cognitive sciences will inform the education and training fields, and potentially enhance individual abilities to learn and think.[III] “Learning in 2050” could also focus on how organizational learning will be even more important than it is today – spelling the difference between solvency and bankruptcy, or, for military forces, victory and defeat. We must also address how to teach people to learn and to organize themselves for learning.[IV]
Lastly, a “Learning in 2050” conference could also focus on machine learning and how artificial intelligence will transform not only the workplace, but national security as well.[V] Aside from understanding the potential and limitations of this transformative technology, we must increasingly train and educate people on how to use it to their advantage and understand its limitations for effective “human – machine teaming.” We must also provide opportunities for individuals to use newly fielded technologies and to learn when and how to trust them.[VI]
All of these areas would provide rich discussions and perhaps new insights. But just as LTG (ret) H.R. McMaster warned us about thinking about the challenges in future warfare, we must first acknowledge the continuities for this broad topic of “Learning in 2050” and its implications for the U.S. Army.[VII] Until the Army is replaced by robots, or knowledge and skills can be uploaded directly into the brain as shown in “The Matrix,” learning will still involve humans – the Army’s Soldiers and its civilian workforce – and the learning process [not discounting organizational or machine learning].
While much may change in the way the individual will learn, we must recognize that the focus of “Learning in 2050” is on the learner and the systems, programs/schools, or technologies adopted in the future must support the learner. As Herbert Simon, one of the founders of cognitive science and a Nobel laureate noted: “Learning results from what the student does and thinks and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.”[VIII] To the Army’s credit, the U.S. Army Learning Concept for Training and Education 2020-2040 vision supports this approach by immersing “Soldiers and Army civilians in a progressive, continuous, learner-centric, competency-based learning environment,” but the danger is we will be captured by technology, procedures, and discussions about the utility and need for “brick and mortar schools.”[IX]
Learning results from what the student does and thinks and only from what the student does and thinks.
Learning is a process that involves changing knowledge, belief, behavior, and attitudes and is entirely dependent on the learner as he/she interprets and responds to the learning experience – in and out of the classroom.[X] Our ideas, concepts, or recommendations to improve the future of learning in 2050 must either: improve student learning outcomes, improve student learning efficiency by accelerating learning, or improve the student’s motivation and engagement to learn.
“Learning in 2050” must identify external environmental factors which will affect what the student may need to learn to respond to the future, and also recognize that the generation of 2050 will be different from today’s student in values, beliefs, attitudes, and acceptance of technology.[XI] Changes in the learning system must be ethical, affordable, and feasible. To support effective student learning, learning outcomes must be clearly defined – whether a student is participating in a yearlong professional education program or a five-day field training exercise – and must be understood by the learner.[XII]
We must think big. For example, Professor of Cognition and Education at Harvard’s Graduate School of Education, Howard Gardner postulated that to be successful in the 21st Century requires the development of the “disciplined mind, the synthesizing mind, the creative mind, the respectful mind, and the ethical mind.”[XIII]
Approaches, processes, and organization, along with the use of technology and other cognitive science tools, must focus on the learning process. Illustrated below is the typical officer career timeline with formal educational opportunities sprinkled throughout the years.[XIV] While some form of formal education in “brick and mortar” schools will continue, one wonders if we will turn this model on its head – with more upfront education; shorter focused professional education; more blended programs combining resident/non-resident instruction; and continual access to experts, courses, and knowledge selected by the individual for “on demand” learning. Today, we often use education as a reward for performance (i.e., resident PME); in the future, education must be a “right of the Profession,” equally provided to all (to include Army civilians) – necessary for performance as a member of the profession of arms.
The role of the teacher will change. Instructors will become “learning coaches” who help the learner identify gaps and needs in meaningful and dynamic individual learning plans. Like the Army’s Master Fitness Trainer, who advises and monitors a unit’s physical readiness, we must create in our units “Master Learning Coaches,” not simply training specialists who manage the schedule and records. One can imagine technology evolving to do some of this as the Alexas and Siris of today become the AI tutors and mentors of the future. We must also remember that any system or process for learning in 2050 must fit the needs of multiple communities: Active Army, Army National Guard, and Army Reserve forces, as well as Army civilians.
Just as the delivery of instruction will change, the assessment of learning will change as well. Simulations and gaming should aim to provide an “Ender’s Game” experience, where reality and simulation are indistinguishable. Training systems should enable individuals to practice repeatedly, for as Vince Lombardi noted – “Practice does not make perfect. Perfect practice makes perfect.” Experiential learning will reinforce classroom, on-line instruction, or short intensive courses/seminars through the linkage of “classroom seat time” and “field time” at the Combat Training Centers, Warfighter, or other exercises or experiences.
Tell me and I forget; teach me and I may remember; involve me and I learn. Benjamin Franklin[XV]
Of course, much will have to change in terms of policies and the way we think about education, training, and learning. If one moves back in time the same number of years that we are looking to the future – it is the year 1984. How much has changed since then?
In some ways technology has transformed the learning process – e.g., typewriters to laptops; card catalogues to instant on-line access to the world’s literature from anywhere; and classes at brick and mortar schools to Massive Open Online Courses (MOOCs), and blended and on-line learning with Blackboard. Yet, as Mark Twain reportedly noted – “history doesn’t repeat itself, but it rhymes” – and some things look the same as they did in 1984, with lectures and passive learning in large lecture halls, just as PowerPoint lectures are ongoing today for some passively undergoing PME.
If “Learning in 2050” is to be truly transformative – we must think differently. We must move beyond the industrial age approach of mass education with its caste systems and allocation of seats. To be successful in the future, we must recognize that our efforts must center on the learner to provide immediate access to knowledge to learn in time to be of value.
Nick Marsella is a retired Army Colonel and is currently a Department of the Army civilian serving as the Devil’s Advocate/Red Team for Training and Doctrine Command. ___________________________________________________________________
[I] While the terms “education” and “training” are often used interchangeably, I will use the oft quoted rule – training is about skills in order to do a job or perform a task, while education is broader in terms of instilling general competencies and to deal with the unexpected.
[II] The noted futurist Alvin Toffler is often quoted noting: “The illiterate of the 21st Century are not those who cannot read and write but those who cannot learn, unlearn, and relearn.”
[III] Sheftick, G. (2018, May 18). Army researchers look to neurostimulation to enhance, accelerate Soldier’s abilities. Retrieved from: https://www.army.mil/article/206197/army_researchers_looking_to_neurostimulation_to_enhance_accelerate_soldiers_abilities
[IV] This will become increasingly important as the useful shelf life of knowledge is shortening. See Zao-Sanders, M. (2017). A 2×2 matrix to help you prioritize the skills to learn right now. Harvard Business Review. Retrieved from: https://hbr.org/2017/09/a-2x2-matrix-to-help-you-prioritize-the-skills-to-learn-right-now — so much to learn, so little time.
[V] Much has been written on AI and its implications. One of the most recent and interesting papers was recently released by the Center for New American Security in June 2018. See: Scharre, P. & Horowitz, M.C. (2018). Artificial Intelligence: What every policymaker needs to know. Retrieved from: https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-needs-to-know
For those wanting further details and potential insights see: Executive Office of the President, National Science and Technology Council, Committee on Technology Report, Preparing for the Future of Artificial Intelligence, October 2016.
[VI] Based on my anecdotal experiences, complicated systems, such as those found in command and control, have been fielded to units without sufficient training. Even when fielded with training, unless in combat, proficiency using the systems quickly lapses. See: Mission Command Digital Master Gunner, May 17, 2016, retrieved from https://www.army.mil/standto/archive_2016-05-17. See Freedberg, S. Jr. Artificial Stupidity: Fumbling the Handoff from AI to Human Control. Breaking Defense. Retrieved from: https://breakingdefense.com/2017/06/artificial-stupidity-fumbling-the-handoff/
[VII] McMaster, H.R. (LTG) (2015). Continuity and Change: The Army Operating Concept and Clear Thinking about Future War. Military Review.
[VIII] Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C. & Norman, M. K. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco, CA: Jossey-Bass, p. 1.
[IX] U.S. Army Training and Doctrine Command. TRADOC Pamphlet 525-8-2. The U.S. Army Learning Concept for Training and Education 2020-2040.
[XI] For example, should machine language be learned as a foreign language in lieu of a traditional foreign language (e.g., Spanish) – given the development of automated machine language translators (AKA = the Universal Translator)?
[XII] The point here is we must clearly understand what we want the learner to learn, adequately define it, and ensure the learner knows what the outcomes are. For example, we continually espouse that we want leaders to be critical thinkers, but I challenge the reader to find the definitive definition and expected attributes of a critical thinker, given that ADRP 6-22, Army Leadership, FM 6-22, Army Leadership, and ADRP 5 and 6 describe it differently. At a recent higher education conference of leaders, administrators, and selected faculty, one member succinctly put it this way to highlight the importance of students’ understanding expected learning outcomes: “Teaching students without providing them with learning outcomes is like giving them a 500-piece puzzle without an image of what they’re assembling.”
[XIII] Gardner, H. (2008). Five Minds for the Future. Boston, MA: Harvard Business Press. For application of Gardner’s premise see Marsella, N.R. (2017). Reframing the Human Dimension: Gardner’s “Five Minds for the Future.” Journal of Military Learning. Retrieved from: https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/April-2017-Edition/Reframing-the-Human-Dimension/
[XIV] Officer education may differ due to a variety of factors, but the normal progression for Professional Military Education includes: Basic Officer Leader Course (BOLC B, to include ROTC/USMA/OCS, which is BOLC A); Captains Career Course; Intermediate Level Education (ILE) and Senior Service College; as well as specialty training (e.g., language school), graduate school, and Joint schools. Extracted from the previous edition of DA Pam 600-3, Commissioned Officer Professional Development and Career Management, December 2014, p. 27, which is now obsolete. The graphic is an example. For current policy, see DA PAM 600-3, dated 26 June 2017.
[XV] See https://blogs.darden.virginia.edu/brunerblog/