[Editor’s Note: Storytelling is a powerful tool that allows us to envision how innovative technologies could be employed and operationalized in the Future Operational Environment. Mad Scientist is seeking your visions of future combat with our Science Fiction Writing Contest 2019. Our deadline for submissions is 1 APRIL 2019, so please review the contest details below, get those creative writing juices flowing, and send us your visions of combat in 2030!]
Background: The U.S. Army finds itself at a historical inflection point, where disparate, yet related elements of an increasingly complex Operational Environment (OE) are converging, creating a situation where fast-moving trends are rapidly transforming the nature of all aspects of society and human life – including the character of warfare. It is important to take a creative approach to projecting and anticipating both transformational and enduring trends that will lend themselves to the depiction of the future. In this vein, the U.S. Army Mad Scientist Initiative is seeking your creativity and unique ideas to describe a battlefield that does not yet exist.
Task: Write about the following scenario – On March 17th, 2030, the country of Donovia, after months of strained relations and covert hostilities, invades neighboring country Otso. Donovia is a wealthy nation that is a near-peer competitor to the United States. Like the United States, Donovia has invested heavily in disruptive technologies such as robotics, AI, autonomy, quantum information sciences, bio enhancements and gene editing, space-based weapons and communications, drones, nanotechnology, and directed energy weapons. The United States is a close ally of Otso and is compelled to intervene due to treaty obligations and historical ties. The United States is about to engage Donovia in its first battle with a near-peer competitor in over 80 years…
Three ways to approach:
1) Forecasting – Description of the timeline and events leading up to the battle.
2) Describing – Account of the battle while it’s happening.
3) Backcasting – Retrospective look after the battle has ended (i.e., After Action Review or lessons learned).
Three questions to consider while writing (U.S., adversaries, and others):
1) What will forces and Soldiers look like in 2030?
2) What technologies will enable them or be prevalent on the battlefield?
3) What do Multi-Domain Operations look like in 2030?
– No more than 5000 words in length
– Provide your submission in .doc or .docx format
– Please use conventional text formatting (e.g., no columns) and have images “in line” with text
– Submissions from Government and DoD employees must be cleared through their respective PAOs prior to submission
– MUST include completed release form (on the back of the contest flyer)
– CANNOT have been previously published
Selected submissions may be chosen for publication or a possible future speaking opportunity.
Contact: Send your submissions to: firstname.lastname@example.org
For additional storytelling inspiration, please see the following blog posts:
[Editor’s Note: On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC. Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces. Today’s post is extracted from this conference’s final report (more of which is addressed at the bottom of this post).]
The U.S. Army currently has more than 150 Military Occupational Specialties (MOSs), each requiring a Soldier to learn unique tasks, skills, and knowledges. The emergence of a number of new technologies – drones, Artificial Intelligence (AI), autonomy, immersive mixed reality, big data storage and analytics, etc. – coupled with the changing character of future warfare means that many of these MOSs will need to change, while others will need to be created. This already has been seen in the wider U.S. and global economy, where the growth of internet services, smartphones, social media, and cloud technology over the last ten years has introduced a host of new occupations that previously did not exist. The future will further define and compel the creation of new jobs and skillsets that have not yet been articulated or even imagined. Today’s hobbies (e.g., drones) and recreational activities (e.g., Minecraft/Fortnite) that potential recruits engage in every day could become MOSs or Additional Skill Identifiers (ASIs) of the future.
Training eighty thousand new Recruits a year on existing MOSs is a colossal undertaking. A great expansion in the jobs and skillsets needed to field a highly capable future Army, replete with modified or new MOSs, adds a considerable burden to the Army’s learning systems and institutions. These new requirements, however, will almost certainly present an opportunity for the Army to capitalize on intelligent tutors, personalized learning, and immersive learning to lessen costs and save time in Soldier and Leader development.
The recruit of 2050 will be born in 2032 and will be fundamentally different from the generations born before them. Marc Prensky, the educational writer and speaker who coined the term digital native, asserts this “New Human” will stand in stark contrast to the “Old Human” in the ways they learn and approach learning.1 Where humans today are born into a world with ubiquitous internet, hyper-connectivity, and the Internet of Things, each of these elements is generally external to the human. By 2032, these technologies likely will have converged and will be embedded or integrated into the individual, with connectivity literally at the tips of their fingers.
Some of the newly required skills may be inherent within the next generation(s) of these Recruits. Many of the games, drones, and other everyday technologies that are already or soon to be very common – narrow AI, app development and general programming, and smart devices – will yield a variety of intrinsic skills that Recruits will have prior to entering the Army. Just as we no longer train Soldiers on how to use a computer, games like Fortnite – with no formal relationship to the military – will provide players with militarily-useful skills such as communications, resource management, foraging, force structure management, and fortification and structure building, all while attempting to survive against persistent attack. Due to these trends, Recruits may come into the Army with fundamental technical skills and baseline military thinking attributes that flatten the learning curve for Initial Entry Training (IET).2
While these new Recruits may have a set of some required skills, there will still be a premium placed on premier skillsets in fields such as AI and machine learning, robotics, big data management, and quantum information sciences. Due to the high demand for these skillsets, the Army will have to compete for talent with private industry, battling them on compensation, benefits, perks, and a less restrictive work environment – limited to no dress code, flexible schedule, and freedom of action. In light of this, the Army may have to consider adjusting or relaxing its current recruitment processes, business practices, and force structuring to ensure it is able to attract and retain expertise. It also may have to reconsider how it adapts and utilizes its civilian workforce to undertake these types of tasks in new and creative ways.
The Recruit of 2050 will need to be engaged much differently than today. Potential Recruits may not want to be contacted by traditional methods3 – phone calls, in person, job fairs – but instead likely will prefer to “meet” digitally first. Recruiters already are seeing this today. In order to improve recruiting efforts, the Army may need to look for Recruits in non-traditional areas such as competitive online gaming. There is an opportunity for the Army to use AI to identify Recruit commonalities and improve its targeted advertisements in the digital realm to entice specific groups who have otherwise been overlooked. The Army is already exploring this avenue of approach through the formation of an eSports team that will engage young potential Recruits and attempt to normalize their view of Soldiers and the Army, making them both more relatable and enticing.4 This presents a broader opportunity to close the chasm that exists between civilians and the military.
The overall dynamic landscape of the future economy, the evolving labor market, and the changing character of future warfare will create an inflection point for the Army to re-evaluate longstanding recruitment strategies, workplace standards, and learning institutions and programs. This will bring about an opportunity for the Army to expand, refine, and realign its collection of skillsets and MOSs, making Soldiers more adapted for future battles, while at the same time challenging the Army to remain prominent in attracting premier talent in a highly competitive environment.
[Editor’s Note: Mad Scientist Laboratory is pleased to review proclaimed Mad Scientist Dr. Alexander Kott’s paper, Ground Warfare in 2050: How It Might Look, published by the US Army Research Laboratory in August 2018. This paper offers readers a technological forecast of autonomous intelligent agents and robots and their potential employment on future battlefields in the year 2050. In this post, Mad Scientist reviews Dr. Kott’s conclusions and provides links to our previously published posts that support his findings.]
In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future, along with the capabilities and countermeasures they will drive:
• The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight will feature prominently on future battlefields between autonomous sensors and countermeasures – “a robot-on-robot affair.”
• The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.
• Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”
• Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.
• The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”
• “To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”
• All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”
• The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.
If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.
Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role, he provides leadership in developing ARL’s technical strategy, maintaining the technical quality of ARL research, and representing ARL to the external technical community. He has published over 80 technical papers and served as the initiator, co-author, and primary editor of over ten books, including most recently Cyber Defense and Situational Awareness (2015), Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).
[Editor’s Note: Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]
“We thought that we had the answers, it was the questions we had wrong” – Bono, U2
As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is just that: an ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi of Boston University recently described algorithms as incapable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
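The claim that “data is the ultimate determining factor” can be made concrete with a deliberately minimal sketch. The groups, labels, and counts below are invented purely for illustration: a model that simply memorizes the majority label per group will faithfully reproduce whatever skew its training data contains, with no notion of whether that skew is right or wrong.

```python
from collections import Counter, defaultdict

# Toy training data: (group, label) pairs. The sampling is skewed --
# group "A" is mostly labeled "hire", group "B" mostly "reject" --
# as an artifact of how the data was collected, not of any real difference.
training = [("A", "hire")] * 90 + [("A", "reject")] * 10 \
         + [("B", "hire")] * 20 + [("B", "reject")] * 80

# A minimal "learner": memorize the majority label seen for each group.
by_group = defaultdict(Counter)
for group, label in training:
    by_group[group][label] += 1

majority = {group: counts.most_common(1)[0][0]
            for group, counts in by_group.items()}

# The model faithfully reproduces the skew in its training data:
print(majority["A"])  # hire
print(majority["B"])  # reject
```

Real machine learning models are vastly more sophisticated, but the underlying dynamic is the same: the bias enters through the data, and the algorithm has no independent standard against which to judge it.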
Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).
“It is difficult to make predictions, particularly about the future.” This quote has been attributed to everyone from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.
The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at: email@example.com.
1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that is discriminative towards any specific gender(s), ethnicity(ies), race(s), or even age(s)?
2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?
3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?
4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?
For additional information impacting on this important discussion, please see the following:
[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!
Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!
In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six-month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger for the latter half of FY18! In recognition of his achievement, Sam will receive much-coveted Mad Scientist swag.
While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped, developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation. Russia is learning how to integrate robotic lethal ground combat systems….
Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]
Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.
Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”
But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine whether they can correspond to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs. The MOD is also running conferences, such as the annual “Robotization of the Armed Forces,” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.
This brings us to one of the more interesting developments in Russian UGVs. Then-Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.
Just as importantly, it was designed to operate at a distance of up to three kilometers (3000 meters or about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled and it was definitely designed to survive battlefield realities….
However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:
• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.
• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.
• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.
• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.
• The OCH-4 optical system did not allow for adequate detection of adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.
• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which basically defeated its purpose as a combat vehicle.
• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.
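As a quick arithmetic check on the range figure in the first bullet above: a 3,000-meter design range reduced to 300-500 meters in practice does indeed correspond to a loss of up to nine-tenths of operational reach.

```python
intended_range_m = 3000                       # designed operator-to-UGV range
observed_low_m, observed_high_m = 300, 500    # range observed in Syria, per the report

worst_case_loss = 1 - observed_low_m / intended_range_m   # 0.9
best_case_loss = 1 - observed_high_m / intended_range_m   # ~0.833

print(f"Operational range lost: {best_case_loss:.0%} to {worst_case_loss:.0%}")
```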
On the one hand, these many failures are a sign that this much-discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it is put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.
However, the MOD has been testing the Uran-9 for a while now, showing videos of this machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria was the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly-released testing video probably foretold some of the Syrian glitches – in it, the Uran-9 is shown firing its machine guns while moving, but its cannon is fired only when the vehicle is stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles – not the kind of complex terrain and dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions – unless, of course, the Syrian tests effectively limit its use in future combat.
On the other hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological shortfalls, which are ubiquitous worldwide at this point in the global development of similar unmanned systems:
• Low level of current UGV autonomy;
• Low level of automation of command and control processes of UGV management, including repairs and maintenance;
• Low communication range, and;
• Problems associated with “friend or foe” target identification.
Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions, which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:
• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.
• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).
• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.
• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.
The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:
• Creating secure communication channels;
• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;
• Developing miniaturized and effective ISR components;
• Integrating automated command and control systems, and;
• Better optics, electronics and data processing systems.
According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operating autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.
Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a warning sign for the developers of other Russian unmanned ground vehicles – like the Soratnik, Vihr, and Nerehta – since it displays the full range of deficiencies that emerge outside of the well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment — this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during Uran-9 tests exposed a serious combat weakness.
It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, if it can perform its mission autonomously in a rapidly-changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in one-time attack mode against fortified adversary positions or firing points track closely with the ways that Western military analysts are thinking such weapons could be used in combat.
The Uran-9 is still a test bed, and much has to take place before it can be successfully integrated into the current Russian concept of operations. We can expect more eye-opening “lessons learned” from its and other UGVs’ potential deployment in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.
To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.
Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference. The views expressed here are his own.
[Editor’s Note: Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Eco-Systems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.
[Gartner disclaimer: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]
“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.
That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.
So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”
The panel’s responses ranged from controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial – “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”
While no consensus was reached, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game changing technologies. Unlike Nuclear, Biological, and Chemical weapons, no internationally agreed and implemented control protocols can be applied to AI, as there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operation (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC with its Military-Civil Fusion Centers and integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” – the best defense against a nefarious AGI is a friendly AGI!
Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. This conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and help practitioners develop fair and privacy-preserving data analyses. Like ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commanders’ decisions) – especially lethal decisions. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, write the very programming that enables machines to learn and make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine learning outputs are verified and understood to preclude the inadvertent introduction of human biases. Read the full report here.
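The bias-propagation point above can be illustrated with a minimal, purely hypothetical sketch (the groups, data, and 50% decision rule are invented for illustration): a model fit to historically biased decisions does not correct the bias – it automates it.

```python
# Toy illustration (hypothetical data): a "classifier" that learns the
# historical approval rate per group reproduces whatever bias the
# training data contains -- it never invents fairness on its own.
from collections import defaultdict

def train(records):
    """records: list of (group, approved) pairs from past human decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    # Learned rule: predict approval iff the historical rate was >= 50%.
    return {g: (a / n) >= 0.5 for g, (a, n) in counts.items()}

# Historical data that encodes a human bias against group "B".
history = ([("A", True)] * 80 + [("A", False)] * 20
         + [("B", True)] * 30 + [("B", False)] * 70)

model = train(history)
print(model)  # {'A': True, 'B': False} -- the human bias is now automated
```

The same dynamic applies regardless of model complexity: if the training data reflects biased judgments, the learned decision rule will too, which is why outputs must be verified rather than presumed neutral.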
In a study published by PLOS ONE, researchers found that a robot’s personality affected a human’s decision-making. In the study, participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant to resist shutting it down. The participants’ actions were then recorded. Unexpectedly, a large number of participants resisted shutting down the functional robots after they made their plea, as opposed to the sociable ones. This is significant: beyond the unexpected result, it shows that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities, despite knowing they are robotic, if those entities mimic and emulate human behavior. If the Army believes its Soldiers will be accompanied and augmented heavily by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.
While the advent of the Internet brought computing and communication deeper into global households, the smartphone revolution brought about constant personal interconnectivity. Today and into the future, not only are humans connected to the global commons via their smart devices, but a multitude of devices, vehicles, and accessories are being integrated into the Internet of Things (IoT). We have previously addressed the IoT as a game changing technology. The IoT is composed of trillions of internet-linked items, creating both opportunities and vulnerabilities. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).
Large companies are expected to quickly grow their spending on Internet-connected devices (i.e., appliances, home devices [such as Google Home, Alexa, etc.], various sensors) to approximately $520 billion. This is a massive investment into what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it is likely that it will expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):
– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data; allowing service members to use personal devices; and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.
– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.
With the possibility of a “cyber Pearl Harbor” becoming increasingly imminent, intelligence officials warn of the rising danger of cyber attacks. The effects of these attacks have already been felt around the world. They have the power to break the trust people have in institutions, companies, and governments as they act in the undefined gray zone between peace and all-out war. The military implications are clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence, communications, and materiel and personnel networks. Beyond the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have felt the effects of cyber attacks, while others remain reluctant to invest in cyber protection measures. In this way, civilians become affected by acts of cyber warfare, and attacks on a country may be directed not at the opposing military, but at the civilian population of a state, as in the case of power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to most effectively combat cyber threats.
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: firstname.lastname@example.org — we may select it for inclusion in our next edition of “The Queue”!
[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]
Preface: Based on my experience in cybersecurity, migrating to a broader cyber field, there have always been those exceptional individuals who have an unreplicable ability to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals – because someone must set the rules and the boundaries, and point out the trajectory for Artificial Intelligence at initiation.
As an industrial society, we tend to see technology, and the information that feeds it, as the weapons – and ignore the few humans who have a large-scale direct impact. Even if identified as a weapon, how do you make a human mind classified? Can we protect these high-ability individuals who, in the digital world, are themselves weapons – not tools but compilers of capability – or are we still focused on the tools? Why do we see only steel and electronics as weapons, and not the weaponized mind? I believe firmly that we underestimate the importance of Applicable Intelligence – the ability to play the cyber engagement in the optimal order. Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034, close enough to be realistic and far enough out for things to happen, when our adversaries are betting that we rely more on a few minds than we are willing to accept.
Post: In a not too distant future, on the 20th of August 2034, a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives: watching a 3-D printer making a protein sandwich at a breakfast restaurant; stepping out from the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone, when the geopolitical temperature increases but we are still not yet at war, our adversary acts quickly and expedites a limited number of targeted killings within the United States of persons who are unknown to mass media and the general public, and who have only one thing in common – Applicable Intelligence (AI).
The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available; anyone can download them from the Internet and use them, but the weaponization of the tools occurs when they are used by someone who understands how to play them in an optimal order. These minds are different because they see an opportunity to exploit in a digital fog of war where others don’t or can’t see it. They address problems unburdened by traditional thinking, in new and innovative ways, maximizing the dual-purpose nature of digital tools, and can create tangible cyber effects.
It is Applicable Intelligence (AI) that creates the procedures, applies the tools, and converges simple digital software, in sets or combinations, into digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.
Sadly we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and at the dawn of conflict killed off the weaponized minds, without discretion, and with no concern for international law or morality.
These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.
All of our efforts instead focused on what they delivered – the application and the technology – which was hidden in secret vaults and only discussed in sensitive compartmented information facilities. We classify to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, we assign no value to the most critical component, the militarized intellect, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity input to the production of the technical machinery. The marveled-at technical machinery is the only thing we care about today, in 2018, and, as it turned out, in 2034 as well.
We are stuck in how we think, and we are unable to see it coming, but our adversaries see it. At a systematic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed. As the armory of the war of 1812, as the stockpile of 1943, and as the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics – we failed in 2034 to see weapons made of corn, steak, and an added combative intellect.
General Nakasone stated in 2017, “Our best ones [coders] are 50 or 100 times better than their peers,” and continued “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” In reality, the success of cyber and cyber operations is highly dependent not on the tools or toolsets but instead upon the super-empowered individual that General Nakasone calls “the 50-x coder.”
There were clear signals that we could have noticed before General Nakasone pointed it out clearly in 2017. The United States’ Manhattan Project during World War II had at its peak 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in by our path dependency as an engineering society where we hail the technology and forget the importance of the humans behind it.
America’s endless love of technical innovations and advanced machinery is reflected in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement – a story that started in the early days of the colonies, followed by the intercontinental railroad, the Panama Canal, the manufacturing era, the moon landing, and all the way to today’s autonomous systems, drones, and robots. In this default mindset, there is always a tool, an automated process, a piece of software, or a set of technical steps that can solve a problem or take action.
The same mindset sees humans merely as an input to technology, so humans are interchangeable and can be replaced. In 2034 – the era of digital conflicts and the war between algorithms, with engagements occurring at machine speed and no time for leadership or human interaction – it is the intellects that design the fight and understand how to play it that matter. We didn’t see it.
In 2034, with fewer than twenty bodies piled up after targeted killings, resides the Cyber Pearl Harbor. It was not imploding critical infrastructure, a tsunami of cyber attacks, nor hackers flooding our financial systems, but instead traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don’t understand, unable to play it in the right order, and with limited intellectual torque to see through the fog of war provided by an exploding kaleidoscope of nodes and digital engagements.
Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was earlier a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and earned a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP, ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in the Strategic Studies Quarterly, Joint Forces Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.
[Editor’s Note: Mad Scientist is pleased to present Mr. Mike Matson’s guest blog post set in 2037 — pitting the defending Angolan 6th Mechanized Brigade with Russian advisors and mercenaries against a Namibian Special Forces incursion supported by South African National Defence Force (SANDF) Special Operators. Both sides employ autonomous combat systems, albeit very differently — Enjoy!]
Preface: This story was inspired by two events. First, Boston Dynamics over the last year had released a series of short videos of their humanoid and animal-inspired robots which had generated a strong visceral Internet reaction. Elon Musk had commented about one video that they would “in a few years… move so fast you’ll need a strobe light to see it.” That visual stuck with me, and I was looking for an opportunity to expand on that image.
The second event was a recent trip to the Grand Tetons. I had a black bear rise up out of an otherwise empty meadow less than 50 meters away. A 200-kilo predator which can run at 60kph and yet remain invisible in high grass left a strong impression. And while I didn’t see any gray wolves, a guide discussed how some of the packs, composed of groups of 45-kilogram sized animals, had learned how to take down 700-kilogram bison. I visualized packs of speeding robotic wolves with bear-sized robots following behind.
I used these events as the genesis to explore a completely different approach to designing and employing unmanned ground combat vehicles (GCVs). Instead of the Russian crewless, traditionally styled armored vehicles, I approached GCVs from the standpoint of South Africa, which may not have the same resources as Russia but has an innovative defense industry. If starting from scratch, how might their designs diverge? What could they do with fewer resources? And how would these designs match up to “traditional” GCVs?
To find out what would happen, I pitted an Angolan mechanized brigade outfitted with Russian GCVs against South African special forces armed with a top secret indigenous GCV program. The setting is southern Angola in 2037, and there are Demons in the Tall Grass. As Mr. Musk said in his Tweet, sweet dreams! – Mike Matson
(2230Z 25 May 2037) Savate, Angola
Paulo crouched in his slit trench with his squad mates. He knew this was something other than an exercise. The entire Angolan 6th Mechanized Brigade had road marched south to Savate, about 60 kilometers from the Namibian border. There, they were ordered to dig fighting positions and issued live ammunition.
Everyone was nervous. Thirty minutes before, one of their patrols a kilometer south of them had made contact. A company had gone out in support and a massive firefight had ensued. A panicked officer could be heard on the net calling in artillery on their own position because they were being attacked by demons in the tall grass. Nobody had yet returned.
Behind Paulo, the battalion commander came forward. With him were three Russian mercenaries. Paulo knew the Russians had brought along two companies of robot tanks. The robot tanks sported an impressively large number of guns, missiles and lasers. Two of them had deployed with the quick reaction force. Explosions suggested that they had been destroyed.
Paulo watched the Angolan officer carefully. Suddenly there was a screamed warning from down the trenches. He whipped around and saw forms in the tall grass moving towards the trenches at a high rate of speed, spread out across his entire front. A dozen or more speeding lines headed directly towards the trenches like fish swimming just under the water.
“Fire!” Paulo ordered and started shooting, properly squeezing off three round bursts. The lines kept coming. Paulo had strobe light-like glimpses of bounding animals. Just before they burst from cover, piercingly loud hyena cries filled the night. Paulo slammed his hand on the nearby clacker to detonate the directional mines to his front. The world exploded in noise and dust.
(Earlier That Morning) 25 Kilometers south of Savate
Captain Verlin Ellis, Bravo Group, SANDF, crouched with his NCO, his soldiers, and his Namibian SF counterpart at dawn under a tree surrounded by thick green bush.
“Listen up everyone, the operation is a go. Intelligence shows the brigade in a holding position south of Savate. We are to conduct a recon north until we can fix their position. Alpha and Charlie groups will be working their way up the left side. Charlie will hit their right flank with their predator package at the same time we attack from the south and Alpha will be the stopper group with the third group north of town. Once we have them located, we are to hold until nightfall, then attack.”
The tarps came off Bravo Group’s trucks and the men got to work unloading.
First off were Bravo Group’s attack force of forty hyenas. Standing just under two feet high on their articulated legs, and weighing roughly 40 kilos, the small robots were off-loaded and their integrated solar panels were unfolded to top off their battery charges.
The hyenas operated in pack formations via an encrypted mesh network. While they could be directed by human operators if needed and could send and receive data via satellite or drone relay, they were designed to operate in total autonomy at ranges up to 40 kilometers from their handlers.
Each hyena had a swiveling front section like a head with four sensors and a small speaker. The sensors were a camera and separate thermal camera, a range finder, and a laser designator/pointer. Built into the hump of the hyena’s back was a fixed rifle barrel in a bullpup configuration, chambered in 5.56mm, which fired in three round bursts.
On each side was a pre-loaded 40mm double tube grenade launcher. The guided, low velocity grenades could be launched forward between 25 and 150 meters. The hyenas were loaded with a mix of HE, CS gas, HEAT, and thermite grenades. They could select targets themselves or have another hyena or human operator designate a target, in which case they were also capable of non-line-of-sight attacks. The attack dogs carried a five-kilo shaped charge limpet mine for attaching to vehicles. There were 24 attack hyenas.
Second off came the buffalos, the heavy weapons support element. There were six of the 350 kilo beasts. They were roughly the same size as a water buffalo, hence their name. They retained the same basic head sensor suite as the hyenas, and a larger, sturdier version of the hyena’s legs.
Three of them mounted an 81mm auto-loading mortar and on their backs were 10 concave docking stations each holding a three ounce helicopter drone called a sparrow. The drone had a ten-minute flight radius with its tiny motor. One ounce of the drone was plastic explosive. They had a simple optical sensor and were designed to land and detonate on anything matching their picture recognition algorithms, such as ammo crates, fuel cans, or engine hoods.
The fourth buffalo sported a small, sleek turret on a flat back, with a 12.7mm machine gun, and the buffalo held 500 rounds of armor-piercing tracer.
The fifth buffalo held an automatic grenade launcher with 200 smart rounds in a similar turret to the 12.7mm gun. The grenades were programmed as they fired and could detonate over trenches or beyond obstacles to hit men behind cover.
The sixth carried three anti-tank missiles in a telescoping turret. Like the mortars, their fire could be directed by hyenas, human operators, or self-directed.
Once the hyenas and buffalos were charging, the last truck was carefully unloaded. Off came the boars — suicide bombs on legs. Each of the 15 machines was short, with stubbier legs for stability. Their outer shells were composed of pre-scarred metal and were overlaid with a layer of small steel balls for enhanced shrapnel. Inside they packed 75 kilos of high explosive. For tonight’s mission each boar was downloaded with different sounds to blare from their speakers, with choices ranging from Zulu war cries, to lion roars, to AC/DC’s Thunderstruck. Chaos was their primary mission.
Between the three Recce groups, nine machines failed warmup. That left 180 fully autonomous and cooperative war machines to hunt the 1,200 strong Angolan 6th Mechanized Brigade.
(One Hour after Attack Began) Savate
Paulo and his team advanced, following spoor through the bush. The anti-tank team begged to go back but Paulo refused.
Suddenly there was a slight gap in the tall grass just as something in front of them on the far side of a clearing fired. It looked like a giant metal rhino, and it had an automatic grenade launcher on top of it. It fired a burst, then sat down on its haunches to hide.
So that’s why I can’t see them after they fire. Very clever, thought Paulo. He tried calling in fire support but all channels were jammed.
Paulo signaled with his hands for both gunners to shoot. The range was almost too close. Both gunners fired at the same time, striking the beast. It exploded with a surprising fury, blowing them all off their feet and lighting up the sky. They lay there stunned as debris pitter-pattered in the dirt around them.
That was enough for Paulo and the men. They headed back to the safety of the trenches.
As they returned, eight armored vehicles appeared. On the left was an Angolan T-72 tank and three Russian robot tanks. On the right there was a BMP-4 and three more Russian robot tanks.
An animal-machine was trotting close to the vegetation outside the trenches and one of the Russian tank’s lasers swiveled and fired, emitting a loud hum, hitting it. The animal-machine was cut in two. The tanks stopped near the trench to shoot at unseen targets in the dark as Paulo entered the trenches.
The hyena yipping increased in volume as the predators began to swarm around the armored force. Five or six circled the perimeter, yipping and firing grenades. Two others crept under some bushes 70 meters to Paulo’s right and lay down like dogs. A long, thin antenna rose out of the back of one dog with some small device on top. The tanks furiously fired at the fleeting targets which circled them.
Mortar rounds burst around the armor, striking a Russian tank on the thin turret top, destroying it.
From a new direction, the ghost machine gun struck a Russian robot tank with a dozen exploding armor-piercing rounds. The turret was pounded and the externally mounted rockets were hit, bouncing the tank in place from the explosions. A robot tank popped smoke, instantly covering the entire armored force in a blinding white cloud which only added to the chaos. Suddenly the Russian turrets all stopped firing just as a third robot tank was hit by armor-piercing rounds in the treads and disabled.
If you enjoyed this blog post, read “Demons in the Grass” in its entirety here, published by our colleagues at Small Wars Journal.
Mike Matson is a writer in Louisville, Kentucky, with a deep interest in national security and cyber matters. His writing focuses on military and intelligence-oriented science fiction. He has two previous articles published by Mad Scientist: the non-fiction “Complex Cyber Terrain in Hyper-Connected Urban Areas,” and the fictional story, “Gods of Olympus.” In addition to Louisville, Kentucky, and Washington, DC, he has lived, studied, and worked in Brussels, Belgium, and Tallinn, Estonia. He holds a B.A. in International Studies from The American University and an M.S. in Strategic Intelligence from the National Intelligence University, both in Washington, DC. He can be found on Twitter at @Mike40245.
[Editor’s Note: In the movie World War Z (I know… the book was way better!), an Israeli security operative describes how Israel prepared for the coming zombie plague. Their strategy was that if nine men agreed on an analysis or a course of action, the tenth man had to take an alternative view.
This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against groupthink and mirror imaging. The Mad Scientist Laboratory will begin a series of posts entitled “The Tenth Man” to offer a platform for the contrarians in our network (I know you’re out there!) to share their alternative perspectives and analyses regarding the Future Operational Environment.]
Our foundational assumption about the Future Operational Environment is that the Character of Warfare is changing due to an exponential convergence of emerging technologies. Artificial Intelligence, Robotics, Autonomy, Quantum Sciences, Nano Materials, and Neuro advances will mean more lethal warfare at machine speed, integrated seamlessly across all five domains – air, land, sea, cyber, and space.
We have consistently seen four main themes used to counter this idea of a changing character of war, driven by technology:
1. Cost of Robotic Warfare: All armies must plan for the need to reconstitute forces. This is particularly ingrained in the U.S. Army’s culture, where we have often lost the first battles in any given conflict (e.g., Kasserine Pass in World War II and Task Force Smith in Korea). We cannot afford to have a “one loss” Army where our national wealth and industrial base cannot support the reconstitution of a significant part of the force. A high-cost, roboticized Army might also limit our political leaders’ options for the use of military force due to the risk of loss and associated cost.
2. Technology Hype: Technologists are well aware of the idea of a hype cycle when forecasting emerging technologies. Machine learning was all the rage in the 1970s, but the technology needed to drive these tools did not exist. Improved computing has finally helped us realize this vision, forty years later. The U.S. Army’s experience with the Future Combat System hits a nerve when assumptions of the future require the integration of emerging technologies.
3. Robotic Warfare: A roboticized Army is over-optimized to fight against a peer competitor, which is the least likely mission the Army will face. We build an Army and develop Leaders first and foremost to protect our Nation’s sovereignty. This means having an Army capable of deterring, and failing that, defeating peer competitors. At the same time, this Army must be versatile enough to execute a myriad of additional missions across the full spectrum of conflict. A hyper-connected Army enabled by robots with fewer Soldiers will be challenged in executing missions requiring significant human interactions such as humanitarian relief, building partner capacity, and counter-insurgency operations.
4. Coalition Warfare: A technology-enabled force will exacerbate interoperability challenges with both our traditional and new allies. Our Army will not fight unilaterally on future battlefields. We have had difficulties with the interoperability of communications and have had gaps between capabilities that increased mission risks. These risks were offset by the skills our allies brought to the battlefield. We cannot build an Army that does not account for a coalition battlefield, and our allies may not be able to afford the tech-enabled force envisioned in the Future Operational Environment.
All four of these counterpoints are valid and should be further studied as we build the Army of 2028 and the Army of 2050. There are many other contrarian views about the Future Operational Environment, and so we are calling upon our network to put on their red hats and be our “Tenth Man.”
On 19-20 June 2018, the U.S. Army Training and Doctrine Command (TRADOC) Mad Scientist Initiative co-hosted the Installations of the Future Conference with the Office of the Assistant Secretary of the Army for Installations, Energy and Environment (OASA (IE&E)) and Georgia Tech Research Institute (GTRI). Emerging technologies supporting the hyper-connectivity revolution will enable improved training capabilities, security, readiness support (e.g., holistic medical facilities and brain gyms), and quality of life programs at Army installations. Our concepts and emerging doctrine for multi-domain operations recognize this as increasingly important by including Army installations in the Strategic Support Area. Installations of the Future will serve as mission command platforms to project virtual power and expertise, as well as Army formations, directly to the battlefield.
We have identified the following “Top 10” takeaways related to our future installations:
1. Threats and Tensions. “Army Installations are no longer sanctuaries” — Mr. Richard G. Kidd IV, Deputy Assistant Secretary of the Army, Strategic Integration. There is a tension between openness and security that will need balancing to take advantage of smart technologies at our Army installations. The revolution in connected devices and the ability to virtually project power and expertise will increase the potential for adversaries to target our installations. Hyper-connectivity increases the attack surface for cyber-attacks and the access to publicly available information on our Soldiers and their families, making personalized warfare and the use of psychological attacks and deep fakes likely.
2. Exclusion vs. Inclusion. The role of and access to future Army installations depends on the balance between these two extremes. The connections between local communities and Army installations will increase potential threat vectors, but resilience might depend on expanding inclusion. Additionally, access to specialized expertise in robotics, autonomy, and information technologies will require increased connections with outside-the-gate academic institutions and industry.
3. Infrastructure Sensorization. Increased sensorization of infrastructure runs the risk of driving efficiencies to the point of building in unforeseen risks. In the business world, these efficiencies are profit-driven, with clearer risks and rewards. Use of tabletop exercises can explore hidden risks and help Garrison Commanders to build resilient infrastructure and communities. Automation can cause cascading failures as people begin to fall “out of the loop.”
4. Army Modernization Challenge. Installations of the Future is a microcosm of overarching Army Modernization challenges. We are simultaneously invested in legacy infrastructure that needs upgrading and making decisions to build new smart facilities. Striking an effective and efficient balance will start with public-private partnerships to capture the expertise that exists in our universities and in industry, as the expertise needed to succeed in this modernization effort does not exist within the Army. There are significant opportunities for Army Installations to participate in ongoing consortiums like the “Middle Georgia” Smart City Community and the Global Cities Challenge to pilot innovations in spaces such as energy resilience.
5. Technology is outpacing regulations and policy. The sensorization and available edge analytics in our public space offer improved security but might be perceived as decreasing personal privacy. While we give up some personal privacy when we live and work on Army installations, this collection of data will require active engagement with our communities. We studied an ongoing Unmanned Aerial System (UAS) support concept to detect gunshot incidents in Louisville, KY, to determine the need to involve legislatures, local political leaders, communities, and multiple layers of law enforcement.
6. Synthetic Training Environment. The Installation of the Future offers the Army significant opportunities to divest itself of large brick and mortar training facilities and stove-piped, contractor support-intensive Training Aids, Devices, Simulations, and Simulators (TADSS). MG Maria Gervais, Deputy Commanding General, Combined Arms Center – Training (DCG, CAC-T), presented the Army’s Synthetic Training Environment (STE), incorporating Virtual Reality (VR), “big box” open-architecture simulations using a One World Terrain database, and reduced infrastructure and contractor-support footprints to improve Learning and Training. The STE, delivering high-fidelity simulations and the opportunity for our Soldiers and Leaders to exercise all Warfighting Functions across the full Operational Environment with greater repetitions at home station, will complement the Live Training Environment and enhance overall Army readiness.
7. Security Technologies. Many of the security-oriented technologies (autonomous drones, camera integration, facial recognition, edge analytics, and Artificial Intelligence) that triage and fuse information will also improve our deployed Intelligence, Surveillance, and Reconnaissance (ISR) capabilities. The Chinese lead the world in these technologies today.
8. Virtual Prototyping. The U.S. Army Engineer Research and Development Center (ERDC) is developing a computational testbed using virtual prototyping to determine the best investments for future Army installations. The four drivers in planning for Future Installations are: 1) Initial Maneuver Platform (Force Projection); 2) Resilient Installations working with their community partners; 3) Warfighter Readiness; and 4) Cost effectiveness in terms of efficiency and sustainability.
9. Standard Approach to Smart Installations. A common suite of tools is needed to integrate smart technologies onto installations. While Garrison Commanders need mission command to take advantage of the specific cultures of their installations and surrounding communities, the Army cannot afford to have installations going in different directions on modernization efforts. A method is needed to rapidly pilot prototypes and then determine whether and how to scale the technologies across Army installations.
10. “Low Hanging Fruit.” There are opportunities for Army Installations to lead their communities in tech integration. Partnerships in energy savings, waste management, and early 5G infrastructure provide the Army with early adopter opportunities for collaboration with local communities, states, and across the nation. We must educate contracting officers and Government consumers to look for and seize upon these opportunities.
Videos from each of the Installations of the Future Conference presentations are posted here. The associated slides will be posted here within the week on the Mad Scientist All Partners Access Network site.
If you enjoyed this post, check out the following: