216. Russia: Our Current Pacing Threat

[Editor’s Note: The U.S. Army’s capstone unclassified document on the Operational Environment (OE) states:

“Russia can be considered our “pacing threat,” and will be our most capable potential foe for at least the first half of the Era of Accelerated Human Progress [now through 2035]. It will remain a key strategic competitor through the Era of Contested Equality [2035 through 2050].” – TRADOC Pamphlet (TP) 525-92, The Operational Environment and the Changing Character of Warfare, p. 12.

In today’s companion piece to the previously published China: Our Emergent Pacing Threat, the Mad Scientist Laboratory reviews what we’ve learned about Russia in an anthology of insights gleaned from previous posts regarding our current pacing threat — this is a far more sophisticated strategic competitor than your Dad’s (or Mom’s!) Soviet Union — Enjoy!]

The dichotomy of war and peace is no longer a useful construct for thinking about national security or the development of land force capabilities. There are no longer defined transitions from peace to war and from competition to conflict. This state of simultaneous competition and conflict is continuous and dynamic, but not necessarily cyclical. Russia will seek to achieve its national interests short of open conflict, using a range of actions, from cyber operations to kinetic strikes against unmanned systems, that walk right up to the line of a short or protracted armed conflict.

1. Hemispheric Competition and Conflict: Over the last twenty years, Russia has been viewed as a regional competitor in Eurasia, seeking to undermine and fracture traditional Western institutions, democracies, and alliances. It is now transitioning into a hemispheric threat with a primary focus on challenging the U.S. Army all the way from our home station installations (i.e., the Strategic Support Area) to the Close Area fight. We can expect cyber attacks against critical infrastructure, the use of advanced information warfare such as deepfakes targeting units and families, and the possibility of small-scale kinetic attacks during what were once uncontested administrative actions of deployment. There is no institutional memory for this type of threat, and simply building additional time and deployment speed requirements into exercises is not enough to prepare for Multi-Domain Operations.

See: Blurring Lines Between Competition and Conflict

2. Cyber Operations:  Russia has already employed tactics designed to exploit vulnerabilities arising from Soldier connectivity. In the ongoing Ukrainian conflict, for example, Russian cyber operations coordinated attacks against Ukrainian artillery, just one case of a “really effective integration of all these [cyber] capabilities with kinetic measures.”  By sending spoofed text messages to Ukrainian soldiers informing them that their support battalion has retreated, that their bank account has been exhausted, or that they are simply surrounded and have been abandoned, the Russians trigger personal communications that enable them to fix and target Ukrainian positions. Taking it one step further, they have even sent false messages to the families of soldiers informing them that their loved one was killed in action. This sets off a chain of events in which the family member immediately calls or texts the soldier, followed by another spoofed message to the original phone. With enough messages to enough targets, an artillery strike is called in on the area where the excess cellphone usage has been detected. In plain English, Russia has successfully combined traditional weapons of land warfare (such as artillery) with the new potential of cyber warfare.

See: Nowhere to Hide: Information Exploitation and Sanitization and Hal Wilson‘s Britain, Budgets, and the Future of Warfare.

3. Influence Operations:  Russia seeks to shape public opinion and influence decisions through targeted information operations (IO) campaigns, often relying on weaponized social media. Russia recognizes the importance of AI, particularly to match and overtake the superior military capabilities that the United States and its allies have held for the past several decades.  Highlighting this importance, Russian President Vladimir Putin stated in 2017 that “whoever becomes the leader in this sphere will become the ruler of the world.” AI-guided IO tools can empathize with an audience to say anything, in any way needed, to change the perceptions that drive the employment of physical weapons. Future IO systems will be able to individually monitor and affect tens of thousands of people at once.

Russian bot armies continue to make headlines in executing IO. The New York Times maintains about a dozen Twitter feeds and produces around 300 tweets a day, but Russia’s Internet Research Agency (IRA) regularly puts out 25,000 tweets in the same twenty-four hours. The IRA’s bots are really just low-tech curators; they collect, interpret, and display desired information to promote the Kremlin’s narratives.

Next-generation bot armies will employ far faster computing techniques and profit from an order of magnitude greater network speed when 5G services are fielded. If “Repetition is a key tenet of IO execution,” then this machine gun-like ability to fire information at an audience will, with empathetic precision and custom content, provide the means to change a decisive audience’s very reality. No breakthrough science is needed, no bureaucratic project office required. These pieces are already there, waiting for an adversary to put them together.

One future vignette posits Russia’s GRU (Military Intelligence) employing AI Generative Adversarial Networks (GANs) to create fake persona injects that mimic select U.S. Active Army, ARNG, and USAR commanders making disparaging statements about their confidence in our allies’ forces, the legitimacy of the mission, and their faith in our political leadership. Sowing these injects across unit social media accounts, Russian Information Warfare specialists could seed doubt and erode trust in the chain of command amongst a percentage of susceptible Soldiers, creating further friction.
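The adversarial training loop behind such GANs is straightforward to illustrate. Below is a minimal sketch in Python/PyTorch that trains a generator to mimic a toy 1-D Gaussian distribution while a discriminator learns to tell real samples from generated ones; the network sizes, hyperparameters, and data are illustrative assumptions only, showing the technique rather than any actual persona-generation tool.

```python
# Minimal GAN sketch on toy 1-D data (illustrative assumptions throughout).
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_samples(n):
    # "Real" data: a Gaussian the generator must learn to imitate.
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

with torch.no_grad():
    samples = generator(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

Scaled up to images, audio, or video, the same adversarial recipe underpins much of today’s synthetic media, which is what makes the vignette above plausible.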

See: Weaponized Information: One Possible Vignette, Own the Night, The Death of Authenticity: New Era Information Warfare, and MAJ Chris Telley‘s Influence at Machine Speed: The Coming of AI-Powered Propaganda

4. Isolation:  Russia seeks to cocoon itself from retaliatory IO and Cyber Operations.  At the October 2017 meeting of the Security Council, “the FSB [Federal Security Service] asked the government to develop an independent ‘Internet’ infrastructure for BRICS nations [Brazil, Russia, India, China, South Africa], which would continue to work in the event the global Internet malfunctions.” Security Council members argued the Internet’s threat to national security is due to:

“… the increased capabilities of Western nations to conduct offensive operations in the informational space as well as the increased readiness to exercise these capabilities.”

Having its own root servers would make Russia independent of monitors like the Internet Corporation for Assigned Names and Numbers (ICANN) and protect the country in the event of “outages or deliberate interference.” “Putin sees [the] Internet as [a] CIA tool.”

See: Dr. Mica Hall‘s The Cryptoruble as a Stepping Stone to Digital Sovereignty and Howard R. Simkin‘s Splinternets

5. Battlefield Automation: Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Russia’s Syria experience — and its monitoring of the U.S. use of unmanned systems for the past two decades — convinced the Ministry of Defense (MOD) that its forces need expanded unmanned combat capabilities to augment the existing Intelligence, Surveillance, and Reconnaissance (ISR) Unmanned Aerial Vehicle (UAV) systems that allow Russian forces to observe the battlefield in real time.

The next decade will see Russia complete the testing and evaluation of an entire lineup of combat drones that were in different stages of development over the previous decade. They include the heavy Ohotnik combat UAV (UCAV); the mid-range Orion, which was tested in Syria; the Russian-made Forpost, a UAV originally assembled under Israeli license; the mid-range Korsar; and the long-range Altius, billed as Russia’s equivalent to the American Global Hawk. All of these UAVs are several years away from potential acquisition by the armed forces, with some going through factory tests and others graduating to military testing and evaluation. These UAVs will have ranges from over a hundred to possibly thousands of kilometers, depending on the model, and will be able to carry weapons for a diverse set of missions.

Russian ground forces have also been testing a full lineup of Unmanned Ground Vehicles (UGVs), from small platforms to tank-sized vehicles armed with machine guns, cannon, grenade launchers, and sensors. The MOD is conceptualizing how such UGVs could be used in a range of combat scenarios, including urban combat. However, in a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. The Uran-9 is a test bed system, and much has to take place before it could be successfully integrated into the current Russian concept of operations. What is key is that it has been tested in a combat environment and the Russian military and defense establishment are incorporating lessons learned into next-generation systems. We can expect more eye-opening lessons learned from its and other UGVs’ potential deployment in combat.

Another significant trend is the gradual shift from manual control over unmanned systems to a fully autonomous mode, perhaps powered by a limited Artificial Intelligence (AI) program. The Russian MOD has already communicated its desire to have unmanned military systems operate autonomously in a fast-paced and fast-changing combat environment. While the actual technical solution for this autonomy may evade Russian designers in this decade due to its complexity, the MOD will nonetheless push its developers for near-term results that may grant such fighting vehicles limited semi-autonomous status. The MOD would also like this AI capability to be able to direct swarms of air, land, and sea-based unmanned and autonomous systems.
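For a sense of what “directing a swarm” means at the algorithmic level, the sketch below implements a generic decentralized flocking rule (cohesion, separation, alignment) in Python. It is a toy illustration with arbitrary assumed parameters, a staple of the academic swarm literature, and does not describe any Russian or U.S. system.

```python
# Toy decentralized flocking: each agent steers using only nearby neighbors.
import numpy as np

rng = np.random.default_rng(1)
N = 20
pos = rng.uniform(0, 100, (N, 2))   # agent positions
vel = rng.uniform(-1, 1, (N, 2))    # agent velocities

def step(pos, vel, radius=15.0, max_speed=2.0):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < radius) & (d > 0)
        if not nbrs.any():
            continue
        cohesion = pos[nbrs].mean(axis=0) - pos[i]      # move toward neighbors' centroid
        separation = (pos[i] - pos[nbrs]).sum(axis=0)   # avoid crowding neighbors
        alignment = vel[nbrs].mean(axis=0) - vel[i]     # match neighbors' heading
        new_vel[i] += 0.01 * cohesion + 0.05 * separation + 0.05 * alignment
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:                           # clamp speed for stability
            new_vel[i] *= max_speed / speed
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("swarm spread after 200 steps:", pos.std(axis=0))
```

The point of the sketch is that no central controller is required: each agent reacts only to its neighbors, which is part of why decentralized swarms are considered attractive for contested, communications-degraded environments.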

The Russians have been public with both their statements about new technology being tested and evaluated, and with possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.

See proclaimed Mad Scientist Sam Bendett‘s Major Trends in Russian Military Unmanned Systems Development for the Next Decade, Autonomous Robotic Systems in the Russian Ground Forces, and Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward.

Russian Minister of Defense Shoigu briefs President Putin on the ERA Innovation / Source: en.kremlin.ru

6. Innovation:  Russia has developed a military innovation center —  Era Military Innovation Technopark — near the city of Anapa (Krasnodar Region) on the northern coast of the Black Sea.  Touted as “A Militarized Silicon Valley in Russia,” the facility will be co-located with representatives of Russia’s top arms manufacturers which will “facilitate the growth of the efficiency of interaction among educational, industrial, and research organizations.” By bringing together the best and brightest in the field of “breakthrough technology,” the Russian leadership hopes to see “development in such fields as nanotechnology and biotech, information and telecommunications technology, and data protection.”

That said, while Russian scientists have often been at the forefront of technological innovations, the country’s poor legal system prevents these discoveries from ever bearing fruit. Stifling bureaucracy and a broken legal system prevent Russian scientists and innovators from profiting from their discoveries. The jury is still out as to whether Russia’s Era Military Innovation Technopark can deliver real innovation.

See: Ray Finch‘s “The Tenth Man” — Russia’s Era Military Innovation Technopark

Russia’s embrace of these and other disruptive technologies, and its adoption of hybrid strategies that challenge traditional symmetric advantages and conventional ways of war, increase its ability to challenge U.S. forces across multiple domains. As an authoritarian regime, Russia is able to ensure unity of effort and a whole-of-government focus more easily than the Western democracies can.  It will continue to seek out and exploit fractures and gaps in the U.S. and its allies’ decision-making, governance, and policy.

If you enjoyed this post, check out these other Mad Scientist Laboratory anthologies:

212. A Scenario for a Hypothetical Private Nuclear Program

[Editor’s Note: Mad Scientist Laboratory is pleased to publish today’s guest blog post by Mr. Alexander Temerev addressing the possible democratization and proliferation of nuclear weapons expertise, currently residing with only a handful of nation states (i.e., the U.S., Russia, China, the UK, France, India, Pakistan, and North Korea).  We vetted this post with nuclear subject matter experts within our community of action (who wish to remain anonymous) – the following initial comments are their collective input regarding Mr. Temerev’s guest post that follows – read on!]

What is proposed below “is not beyond the realm of possibility and, with enough wise investment, rather feasible — there are no secrets left in achievement of the basic nuclear physics package, and there haven’t been for a while (the key being obtaining the necessary fissile material). A side note — I was a friend and school-mate of the apocryphal Princeton University Physics Undergraduate Student in 1978 who, as part of his final degree project, developed a workable nuclear weapons design with nothing more than the pre-Internet Science Library as a resource. They still talk about the visit from the FBI on campus, and the fact that his professor only begrudgingly gave him an A- as a final grade.”

“Considering the advances since then, it’s likewise no surprise that such a thing could be accomplished today with even greater ease, there remaining the issue of obtaining sufficient fissile material to warrant the effort. Of course, even failure in this regard, done artfully, could still accomplish a sub-critical reaction [aka “a fizzle” – an explosion caused by the two sub-critical masses of the bomb being brought together too slowly] resulting in a militarily (and psychologically) effective detonation. So, as my colleague [name redacted] (far more qualified in matters scientific and technical) points out, with the advances since the advent of the Internet and World Wide Web, the opportunity to obtain the ‘Secret Sauce’ necessary to achieve criticality has likewise advanced exponentially. He has opined that it is quite feasible for a malevolent private actor, armed with currently foreseeable emerging capabilities, to seek and achieve nuclear capabilities utilizing Artificial Intelligence (AI)-based data and communications analysis modalities. Balancing against this emerging capability are the competing and ever-growing capabilities of the state to surveil and discover such endeavors and frustrate them before (hopefully) reaching fruition. Of course, you’ll understand if I only allude to them in this forum and say nothing further in that regard.”

“Nonetheless, for both good guy and bad, given enough speed and capacity, these will serve as the lever to move the incorporeal data world. This realization will move the quiet but deadly arms race in the shadows, that being the potential confluence of matured Artificial Intelligence (AI) and Quantum technologies at a point in the foreseeable future that changes everything. Such a confluence would enable the potential achievement of these, and even worse, WMD developmental approaches through big-data analysis currently considered infeasible. Conversely, state surveillance modes of the Internet would likewise profit through identifying clusters of seemingly unrelated data searches that could be analyzed to identify and frustrate malevolent actors”.

“It is quite conceivable, in this context, that the future of the Internet for our purposes revolves around one continuous game of cat and mouse as identities are sought and hidden between white hat and black hat players. A real, but unanticipated, version of Ray Kurzweil’s singularity that nonetheless poses fundamental challenges for a free society. In the operational environment to 2050, cyber-operations will no longer be a new domain but one to be taken into account as a matter of course.”

“Once again, all credit goes to [my colleague] for providing the technical insight into this challenge, my contribution being entirely eccentric in nature. I believe the blog is worth publishing, provided that it serves as an opening for furthering discussion of the potential long-range implications such developments would pose.”

A Scenario for a Hypothetical Private Nuclear Program

Let’s assume there is a non-government actor willing to acquire nuclear weapons for some reason. Assume that the group has unlimited financing (or some significant amount of free and untraced money available — e.g., $1 billion in cryptocurrencies). What would be the best way for them to proceed, and what would be the most vulnerable points where they could be stopped?

Stealing existing nuclear weapons would probably not be an option (or would be of limited utility — see below). Modern nuclear devices are all equipped with PALs (permissive action links), rendering them unusable without unlocking codes (the key idea of PAL is removing a small amount of explosives from the implosion shell, different for each detonator, and compensating by adjusting the precise timings at which each detonator goes off; these timings are different for each device and can be released only by the central command authority). Without knowing the entire set of PAL timings and the entire encrypted protocol between the PAL controller and the detonators, achieving a bona fide nuclear explosion is technically impossible. Some countries like Pakistan and perhaps North Korea do not possess sophisticated PAL systems for their devices; to compensate, their nuclear cores are tightly guarded by the military.

Fat Man Casing, Trinity Site / Source: Flickr by Ed Siasoco via Creative Commons Attribution 2.0 Generic

Therefore, even if weapon-grade nuclear materials are available (which is of course another near-impossible problem), designing the nuclear explosive device de novo is still unavoidable. The principal design of nuclear weapons is not secret, and achieving a nuclear explosion is a clearly defined problem (in terms of timing, compression, and explosion hydrodynamics) that can be solved by a small group of competent physicists. Indeed, the “Nth Country Experiment” by Lawrence Livermore National Laboratory in 1964 showed that three bright physicists (without previous nuclear expertise) could deliver a plausible design for a working nuclear weapon (they were building an analogue of the Fat Man device, which is bulky and nearly undeliverable; today, more compact options would be pursued instead). A heavily redacted report is available online.

With modern computers, open information about nuclear weapons, some OSINT, and determination, the same feat could probably be accomplished in less than a year. (Some open source software and libraries can be useful in such an endeavor, e.g., Castro for explosion hydrodynamics; there is also a guidebook for anyone with a deep interest in the field.) Many ideas for the critical part of the device – the neutron initiator — are also discussed in the open literature (here I will refrain from mentioning exact books and papers, but the information is still publicly available). Again, the task is clearly formulated — injecting the neutrons at the very precise moment during the explosion — so this is only an engineering problem.

Assembling the device itself is no easy task; it requires precision engineering and the casting of high explosives, which cannot be done without significant pre-existing expertise. However, the brightest mechanical engineers and even explosives technicians can be legally hired on the open market, if not for direct participation in the project, then for training and knowledge transfer to the project team. Private organizations have achieved even more complicated engineering feats (e.g., rocket engines at SpaceX), so this part looks feasible.

All current nuclear devices require periodic maintenance and re-casting of their plutonium pits, with additional weapon-grade plutonium added every few years; otherwise their neutronic profile gradually becomes too unfavorable to achieve a full nuclear explosion. If the group has acquired nuclear materials by stealing them, it will have to make use of them during the following few years. The nuclear programs of sovereign states, of course, have entire weapon-grade plutonium production pipelines at their disposal, so fresh plutonium is always available. This will be a much harder feat for a non-state actor to achieve. Ironically, the plutonium could be provided by disassembling PAL-equipped stolen or captured nuclear devices, which are less heavily guarded. While it is true that PAL will prevent their full-scale explosion, they can still be a priceless source of weapon-grade plutonium.

Source: Nick Youngson via Picpedia, Attribution-ShareAlike 3.0 Unported (CC BY-SA 3.0)

Conclusion: Safeguarding weapon-grade nuclear materials is the highest priority, as the design details of nuclear devices are hardly a secret these days and can be readily reproduced by many competent and determined organizations. The emergence of nuclear production pipelines (isotope separation, SILEX [Separation of Isotopes by Laser Excitation], plutonium separation, plutonium-producing reactors) should be monitored everywhere. Even PAL-equipped weapons need to be closely guarded, as they can be sources of these materials. Groups and non-state actors willing to acquire nuclear capabilities without building the full production pipeline would need to act fast and have the design and device prototypes (sans cores) ready before acquiring nuclear materials, as the material’s utility diminishes every year after acquisition.


REMINDER: Don’t forget to join us tomorrow on-line at the Mad Scientist GEN Z and the OE Livestream Event! This event is open to all, on any device, anywhere (but is best streamed via a commercial, non-DoD network) — plan on joining us at 1330 EST on 21 February 2020 at: www.tradoc.army.mil/watch and engage in the discussion by submitting your questions and comments via this site’s moderated interactive chat room. You can also follow along on Twitter @ArmyMadSci. For more information, click here!

ALSO:  Help Mad Scientist expand the U.S. Army’s understanding of the Operational Environment (OE) — join the 662 others representing 46 nations who have already done so and take a few minutes to complete our short, on-line Global Perspectives Survey. Check out our initial findings here and stay tuned to future blog posts on the Mad Scientist Laboratory to learn what further insights we will have gleaned from this survey about OE trends, challenges, technologies, and disruptors.

FINALLY:  Don’t forget to enter The Operational Environment in 2035 Mad Scientist Writing Contest and share your unique insights on the future of warfighting — click here to learn more (submission deadline is 1 March 2020!)

Mr. Alexander Temerev is a consultant in complex systems dynamics and network analysis; he is CEO and founder of Reactivity – a boutique consulting company in Geneva, Switzerland.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or the Training and Doctrine Command (TRADOC).

122. The Guy Behind the Guy: AI as the Indispensable Marshal

[Editor’s Note: Mad Scientist Laboratory is pleased to present today’s guest blog post by Mr. Brady Moore and Mr. Chris Sauceda, addressing how Artificial Intelligence (AI) systems and entities conducting machine speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. This Augmented Intelligence will enable commanders to focus on the battle with coup d’œil, or the “stroke of an eye,” maintaining situational awareness on future fights at machine speed, without losing precious time crunching data.]

Jon Favreau’s Mike character (left) is the “guy behind the guy” to Vince Vaughn’s Trent character (right) in Swingers, directed by Doug Liman, Miramax (1996) / Source: Pinterest

In the 1996 film Swingers, Trent (played by Vince Vaughn) and Mike (played by Jon Favreau) are a couple of young guys trying to make it in Hollywood. On a trip to Las Vegas, Trent introduces Mike as “the guy behind the guy” – implying that Mike’s value is that he has the know-how to get things done, acts quickly, and is therefore indispensable to a leading figure. Yes, I’m talking about Artificial Intelligence for Decision-Making on the future battlefield – and “the guy behind the guy” sums up how AI will provide a decisive advantage in Multi-Domain Operations (MDO).

Some of the problems commanders will have on future battlefields will be the same ones they have today and the same ones they had 200 years ago: the friction and fog of war. The rise of information availability and connectivity brings today’s challenges – of which most of us are aware. Advanced adversary technologies will bring future challenges for intelligence gathering, command, communication, mobility, and dispersion. Future commanders and their staffs must be able to deal with both perennial and novel challenges faster than their adversaries, in disadvantageous circumstances we can’t control. “The guy behind the guy” will need to be conversant in vast amounts of information and quick to act.

Louis-Alexandre Berthier was a French Marshal and Vice-Constable of the Empire, and Chief of Staff under Napoleon / oil portrait by Jacques Augustin Catherine Pajou (1766–1828), Source: Wikimedia Commons

In western warfare, the original “guy behind the guy” wasn’t Mike – it was this stunning figure. Marshal Louis-Alexandre Berthier was Napoleon Bonaparte’s Chief of Staff from the start of his first Italian campaign in 1796 until his first abdication in 1814. Berthier was famous for rarely sleeping while on campaign; Paul Thiebault said of him in 1796:

“Quite apart from his specialist training as a topographical engineer, he had knowledge and experience of staff work and furthermore a remarkable grasp of everything to do with war. He had also, above all else, the gift of writing a complete order and transmitting it with the utmost speed and clarity…No one could have better suited General Bonaparte, who wanted a man capable of relieving him of all detailed work, to understand him instantly and to foresee what he would need.”

Bonaparte’s military record, his genius for war, and his skill as a leader are undisputed, but Berthier so enhanced his capabilities that Napoleon himself admitted of Berthier’s absence at Waterloo, “If Berthier had been there, I would not have met this misfortune.”

Augmented Intelligence, where intelligent systems enhance human capabilities (rather than systems that aspire to replicate the full scope of human intelligence), has the potential to act as a digital Chief of Staff to a battlefield commander. Just like Berthier, AI for decision-making would free up leaders to clearly consider more factors and make better decisions – allowing them to command more, and research and analyze less. AI should allow humans to do what they do best in combat – be imaginative, compel others, and act with an inherent intuition, while the AI tool finds, processes, and presents the needed information in time.
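As a rough illustration of that “find, process, and present” role, the sketch below scores incoming reports by keyword relevance to a commander’s stated priorities and by recency, then surfaces only the top items. The report fields, the keyword-overlap relevance measure, and the decay constant are assumptions made up for illustration, not any fielded decision aid.

```python
# Toy prioritization filter: relevance-to-priorities x recency (illustrative only).
from dataclasses import dataclass
from datetime import datetime, timedelta
import math

@dataclass
class Report:
    text: str
    received: datetime

def score(report, priority_terms, now, half_life_hours=2.0):
    # Crude keyword overlap stands in for real relevance modeling.
    relevance = len(set(report.text.lower().split()) & priority_terms)
    age_hours = (now - report.received).total_seconds() / 3600
    recency = math.exp(-math.log(2) * age_hours / half_life_hours)  # exponential decay
    return relevance * recency

now = datetime(2020, 2, 20, 12, 0)
priorities = {"bridge", "artillery", "resupply"}
inbox = [
    Report("artillery fire observed near the bridge", now - timedelta(minutes=20)),
    Report("routine vehicle maintenance report filed", now - timedelta(hours=6)),
    Report("resupply convoy delayed at checkpoint", now - timedelta(hours=1)),
]

# Surface the two highest-priority reports for the commander.
for r in sorted(inbox, key=lambda r: score(r, priorities, now), reverse=True)[:2]:
    print(round(score(r, priorities, now), 2), "-", r.text)
```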

So Augmented Intelligence would filter incoming data to prioritize only the most relevant and timely information, helping to manage today’s information overload, and would quickly help communicate intent – but what about yesterday’s friction and fog, and tomorrow’s adversary technology? The future battlefield seems like one where U.S. commanders will be starved for the kind of Intelligence, Surveillance, and Reconnaissance (ISR) and communication we are so used to today, a battlefield with a contested Electromagnetic Spectrum (EMS) and active cyber effects, whether known or unknown. How can commanders and their staffs begin to overcome challenges we have not yet faced in war?

Average is Over: Powering America Beyond the Age of the Great Stagnation, by Tyler Cowen / Dutton, The Penguin Group, published in 2013

In his 2013 book Average is Over, economist Tyler Cowen examines the way freestyle chess players (who are free to use computers when playing the game) use AI tools to compete and win, and makes some interesting observations that are absolutely applicable to the future of warfare at every level. He finds that competitors have to play against foes who have AI tools themselves, and that AI tools make chess-move decisions that can be recognized (by people) and countered. The most successful freestyle chess players combine their own knowledge of the game with AI tools, picking and choosing the times and situations in which to use different kinds of AI throughout a game. Their opponents not only have to consider which AI is being used against them, but also their human operator’s overall strategy. This combination of natural inclination and human intuition with an AI tool will likely result in a powerful equilibrium of human and AI perception, analysis, and ultimately enhanced complex decision-making.

With a well-trained and versatile “guy behind the guy,” a commander and staff could employ different aspects of Augmented Intelligence at different times, based on need or appropriateness. Consider a company commander in a dense urban fight, equipped with an appropriate AI tool – a “guy behind the guy” that helps him make sense of the battlefield. What could that commander accomplish with his company? He could employ the tool to notice things humans don’t – or at least notice them faster and alert him. Changes in historic traffic patterns or electronic signals in an area could indicate an upcoming attack or a fleeing enemy, or the system could let the commander know that just a little more specific data could help establish a pattern where enemy data was scarce. And if the commander were presented with the very complex and large problems that characterize modern dense urban combat, the system could help shrink and sequence those problems to make them more solvable – for instance, finding a good subset of information to experiment with and helping prove a hypothesis before trying out a solution in the real world – risking bandwidth instead of blood.
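One way to picture “noticing things humans don’t” is a simple baseline-deviation alert. The sketch below, on synthetic data, flags a day whose observed activity count (traffic, signal detections, or anything countable for an area) departs sharply from its historical mean; the three-sigma threshold and the numbers are illustrative assumptions, not a fielded tool.

```python
# Toy baseline-deviation alert on synthetic activity counts (illustrative only).
import numpy as np

rng = np.random.default_rng(7)
history = rng.normal(loc=120, scale=10, size=60)  # 60 days of baseline activity counts
today = 165                                       # today's observed count

mean, std = history.mean(), history.std()
z = (today - mean) / std
if z > 3.0:  # roughly a three-sigma departure from the baseline
    print(f"ALERT: activity {today} is {z:.1f} sigma above the {mean:.0f}-count baseline")
else:
    print("Within normal variation; no alert raised.")
```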

The U.S. strategy for MDO has already identified the critical need to observe, orient, decide, and act faster than our adversaries – multiple AI tools that hold all of the necessary information and can present it and act on it quickly will certainly be indispensable to leaders on the battlefield. An AI “guy behind the guy” continuously sizing up the situation, finding the right information, and allowing for better, faster decisions in difficult situations is how Augmented Intelligence will best serve leaders in combat and provide battlefield advantage.

If you enjoyed this post, please also read:

… watch Juliane Gallina‘s Arsenal of the Mind presentation at the Mad Scientist Robotics, AI, & Autonomy Visioning Multi Domain Battle in 2030-2050 Conference at Georgia Tech Research Institute, Atlanta, Georgia, on 7-8 March 2017

… and learn more about potential AI battlefield applications in our Crowdsourcing the Future of the AI Battlefield information paper.

Brady Moore is a Senior Enterprise Client Executive at Neudesic in New York City. A graduate of The Citadel, he is a former U.S. Army Infantry and Special Forces officer with service as a leader, planner, and advisor across Iraq, Afghanistan, Africa, and South Asia. After leaving the Army in 2011, he obtained an MBA at Penn State and worked as an IBM Cognitive Solutions Leader covering analytics, AI, and Machine Learning in National Security. He’s the Junior Vice Commander of VFW Post 2906 in Pompton Lakes, NJ, and Cofounder of the Special Forces Association Chapter 58 in New York City. He also works with Elite Meet as often as he can.

Chris Sauceda is an account manager within the U.S. Army Defense and Intel IBM account, covering Command and Control, Cyber, and Advanced Analytics/ Artificial Intelligence. Chris served on active duty and deployed in support of Operation Iraqi Freedom, and has been in the Defense contracting business for over 13 years. Focused on driving cutting edge technologies to the warfighter, he also currently serves as a Signal Officer in the Texas Military Department.

82. Bias and Machine Learning

[Editor’s Note:  Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]

“We thought that we had the answers, it was the questions we had wrong” – Bono, U2

Source: www.vpnsrus.com via flickr

As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is exactly that: a utopian ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process.  Dr. Tolga Bolukbasi, Boston University, recently described algorithms as incapable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
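A small worked example makes the point concrete. In the sketch below (synthetic data, with assumed group sizes and label rates chosen purely for illustration), a classifier trained on a dataset in which one group is under-represented and skewed toward negative labels reproduces that skew at prediction time, even when both groups are later evaluated under identical conditions.

```python
# Toy demonstration: skewed training data -> skewed predictions (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, pos_rate):
    x = rng.normal(size=(n, 3))                 # three generic, uninformative features
    y = (rng.random(n) < pos_rate).astype(int)  # label base rate set by the sampler
    return x, y

# Group A is heavily sampled; group B is scarce and skewed toward negative labels.
xa, ya = make_group(2000, 0.50)
xb, yb = make_group(100, 0.10)
X = np.vstack([np.hstack([xa, np.zeros((2000, 1))]),   # last column encodes group
               np.hstack([xb, np.ones((100, 1))])])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate both groups under identical, balanced conditions.
xa_t, _ = make_group(1000, 0.50)
xb_t, _ = make_group(1000, 0.50)
rate_a = model.predict(np.hstack([xa_t, np.zeros((1000, 1))])).mean()
rate_b = model.predict(np.hstack([xb_t, np.ones((1000, 1))])).mean()
print(f"positive prediction rate - group A: {rate_a:.2f}, group B: {rate_b:.2f}")
```

The model has done nothing “wrong” by its own loss function; it has faithfully learned the skew it was given, which is exactly the concern the questions below are meant to surface.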

Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).

“It is difficult to make predictions, particularly about the future.” This quote has been attributed to anyone ranging from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.

The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.

1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that discriminates against any specific gender(s), ethnicity(ies), race(s), or even age(s)?

2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?

3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?

4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?

For additional information impacting on this important discussion, please see the following:

An Appropriate Level of Trust… blog post

Ethical Dilemmas of Future Warfare blog post

Ethics and the Future of War panel discussion video

63. Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward

[Editor’s Note:  We are pleased to present Mad Scientist Sam Bendett‘s informative guest blog post on the ramifications of current Russian Unmanned Ground Vehicle (UGV) trials in Syria for future autonomous combat systems on the battlefield.  Please note that many of Mr. Bendett’s embedded links in the post below are best accessed using non-DoD networks.]

Russia’s Forpost UAV (licensed copy of IAI Searcher II) in Khmeimim, Syria; Source: https://t.co/PcNgJ811O8

Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.

Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”

Uran-9 combat UGV at Victory Day 2018 Parade in Red Square; Source: independent.co.uk

But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine if they can correspond to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs.  The MOD is also running conferences such as the annual “Robotization of the Armed Forces” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.

Uran-9 combat UGV; Source:  nationalinterest.org

This brings us to one of the more interesting developments in Russian UGVs. Then-Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.

Just as importantly, it was designed to operate at a distance of up to three kilometers (3000 meters or about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled and it was definitely designed to survive battlefield realities….

Uran-9; Source: Defence-Blog.com

However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:

• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.

• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.

• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.

• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.

• The OCH-4 optical system did not allow for adequate detection of the adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.

Uran-9 undergoing testing; Source: YouTube

• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which basically defeated its very purpose as a combat “vehicle.”

• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.

On one hand, these many failures are a sign that this much–discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it gets put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.

However, the MOD has been testing the Uran-9 for a while now, showing videos of the machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria was the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly released testing video probably foretold some of the Syrian glitches – in it, the Uran-9 is shown firing its machine guns while moving, but its cannon is fired only when the vehicle is stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles around, not the kind of complex terrain and dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions – unless, of course, the Syrian tests effectively limit its use in future combat.

Russian Soratnik UGV

On the other hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological aspects that are ubiquitous worldwide at this point in the global development of similar unmanned systems:

• Low level of current UGV autonomy;

• Low level of automation of command and control processes of UGV management, including repairs and maintenance;

• Low communication range, and;

• Problems associated with “friend or foe” target identification.

Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:

• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.

• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).

• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.

• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.

The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:

• Creating secure communication channels;

• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;

• Developing miniaturized and effective ISR components;

• Integrating automated command and control systems, and;

• Better optics, electronics and data processing systems.

According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operating autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.

Source: rusmilitary.wordpress.com

Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a sign for the developers of other Russian unmanned ground vehicles – like Soratnik, Vihr, and Nerehta — since it displays the full range of deficiencies that emerge outside of the well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment — this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during the Uran-9 tests exposed a serious combat weakness.

It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, and only if it can perform its mission autonomously in a rapidly changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards achieving such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in a one-time attack mode against fortified adversary positions or firing points track closely with the ways Western military analysts think such weapons could be used in combat.

Source: Nikolai Novichkov / Orbis Defense

The Uran-9 is still a test bed, and much has to take place before it could be successfully integrated into the current Russian concept of operations. We could expect more eye-opening “lessons learned” from its and other UGVs’ potential deployment in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.

To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.

Source: Block13 by djahal, Deviantart.com

Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference.  The views expressed here are his own.