84. Quantum Surprise on the Battlefield?

[Editor’s Note:  In the following guest blog post, Mad Scientist Elsa B. Kania addresses quantum technology and the potential ramifications should the People’s Republic of China (PRC) win the current race in fielding operational quantum capabilities].

If China were to succeed in realizing the full potential of quantum technology, the Chinese People’s Liberation Army (PLA) might have the capability to offset core pillars of U.S. military power on the future battlefield.  Let’s imagine the worst-case (or, for China, best-case) scenarios.

The Chinese military and government could leverage quantum cryptography and communications to enable “perfect security” for its most sensitive information and communications. The PLA may look to employ ‘uncrackable’ quantum key distribution (QKD), which involves the provably secure exchange of keys in quantum states, over fiber optic networks for secure command and control, while extending the range of its quantum networks to more far-flung units or even ships at sea, through an expanding constellation of quantum satellites.
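The "provably secure" property of QKD rests on a physical fact: measuring a qubit in the wrong basis disturbs it, so an eavesdropper leaves a statistical fingerprint in the key. As a purely illustrative sketch (a classical toy simulation of the BB84 protocol, the canonical QKD scheme; the function and its parameters are invented here for illustration, not real quantum code):

```python
import random

def bb84(n_rounds, eve=False, rng=random):
    """Toy classical simulation of BB84 quantum key distribution.

    Alice encodes random bits in random bases ('+' or 'x'); Bob measures
    in random bases and keeps only the rounds where his basis matched
    hers. Measuring in the wrong basis yields a random bit, which is how
    an eavesdropper (Eve) corrupts the sifted key and reveals herself.
    """
    key_alice, key_bob = [], []
    for _ in range(n_rounds):
        bit = rng.randint(0, 1)
        basis_alice = rng.choice("+x")
        state_bit, state_basis = bit, basis_alice

        if eve:  # Eve intercepts and measures in her own random basis
            basis_eve = rng.choice("+x")
            if basis_eve != state_basis:
                state_bit = rng.randint(0, 1)  # wrong basis randomizes the bit
            state_basis = basis_eve            # ...and re-prepares it in her basis

        basis_bob = rng.choice("+x")
        result = state_bit if basis_bob == state_basis else rng.randint(0, 1)

        if basis_bob == basis_alice:  # sifting: only bases are compared publicly
            key_alice.append(bit)
            key_bob.append(result)
    return key_alice, key_bob

# Without interception the sifted keys agree exactly; with Eve present,
# about a quarter of the sifted bits disagree, so comparing a random
# sample of the key exposes the tap before any secrets are sent.
ka, kb = bb84(1000, eve=False, rng=random.Random(0))
print(ka == kb)  # True
ka, kb = bb84(1000, eve=True, rng=random.Random(0))
print(sum(a != b for a, b in zip(ka, kb)) / len(ka))  # close to 0.25 in expectation
```

The detectability of eavesdropping, not computational difficulty, is what the "perfect security" claim refers to; the engineering caveats discussed later in this post still apply.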

If China were to ‘go dark’ to U.S. intelligence capabilities as a result, then a new level of uncertainty could complicate U.S. calculus and assessments, while exacerbating the risks of surprise or misperception in a crisis or conflict scenario.

China’s massive investments in quantum computing could someday pay off in the decades-long marathon towards a fully functional and universal quantum computer.

Liaoning Exercise in the West Pacific / Source: Flickr by rhk111

If developed in secret or operational sooner than expected, these immense computing capabilities could be unleashed to break public key cryptography. Such asymmetric cryptography, which today is prevalent and integral to the security of our information technology ecosystem, relies upon the difficulty of factoring large numbers into primes, a task beyond the capabilities of today’s classical computers but one that a future quantum computer could crack. The impact could be analogous to the advantage that the U.S. achieved through the efforts of American code-breakers ahead of the Battle of Midway.
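To make the dependence concrete, here is a deliberately tiny sketch (toy four-digit primes; real RSA keys use primes hundreds of digits long) showing that recovering the factors of an RSA modulus is the same thing as recovering the private key. Trial division only works at this toy scale; Shor's algorithm on a large quantum computer would make the factoring step fast at real key sizes:

```python
def toy_rsa_keypair(p, q, e=65537):
    """Textbook RSA with toy primes: the public key is (n, e); the
    private exponent d is the inverse of e modulo (p-1)(q-1)."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # modular inverse (Python 3.8+)
    return (n, e), d

def recover_private_key(n, e):
    """Given only the public key, factor n and rebuild d. Trial division
    is hopeless classically at real key sizes, but a quantum computer
    running Shor's algorithm could factor n in polynomial time, and the
    rest of this function is then trivial arithmetic."""
    p = next(f for f in range(2, int(n**0.5) + 1) if n % f == 0)
    q = n // p
    return pow(e, -1, (p - 1) * (q - 1))

(public_n, public_e), d = toy_rsa_keypair(1009, 1013)
ciphertext = pow(42, public_e, public_n)    # anyone can encrypt with the public key
d_stolen = recover_private_key(public_n, public_e)
print(pow(ciphertext, d_stolen, public_n))  # 42: the message is recovered
```

The same logic is why previously intercepted ciphertext matters: an adversary who records encrypted traffic today can run the equivalent of `recover_private_key` the day a large quantum computer exists.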

Although there will be options available for ‘quantum-proof’ encryption, the use of public key cryptography could remain prevalent in older military and government information systems, such as legacy satellites. Moreover, any data previously collected while encrypted could be rapidly decrypted and exploited, exposing perhaps decades of sensitive information. Will the U.S. military and government take this potential security threat seriously enough to start the transition to quantum-resistant alternatives?

Future advances in quantum computing could be game changers for intelligence and information processing. In a new era in which data is a critical resource, the ability to process it rapidly is at a premium. In theory, quantum computing could also accelerate the development of artificial intelligence towards a closer approximation to “superintelligence,” provoking concerns of unexpected, by some accounts even existential, risks and powerful capabilities.

PLA Navy Kilo-Class Submarine / Source: Took-ranch at English Wikipedia https://commons.wikimedia.org/w/index.php?curid=12184725

Meanwhile, based on active efforts in the Chinese defense industry, the next generation of Chinese submarines could be equipped with a ‘quantum compass’ to enable greater precision in positioning and independence from space-based navigation systems, while perhaps also leveraging quantum communications underwater for secure control and covert coordination.

The PLA might realize its ambitions to develop quantum radar that could be the “nemesis” of U.S. stealth fighters and bolster Chinese missile defense. This “offset” technology could overcome the U.S. military’s advantage in stealth. Similarly, the ‘spooky’ sensitivity in detection enabled by techniques such as ghost imaging and quantum remote sensing could enhance PLA ISR capabilities.

In the aggregate, could China’s future advances in these technologies change the balance of power in the Indo-Pacific?

Su-27 Flanker fighter / Source: DoD photo by Staff Sgt. D. Myles Cullen

For China, the potential to disrupt paradigms of information dominance through quantum computing and cryptography, while perhaps undermining U.S. advantages in stealth technologies through quantum radar and sensing, and even more actively contesting the undersea domain, could create a serious challenge to U.S. military-technological predominance.

Perhaps, but this imagining of impactful military applications of quantum technology is far from a reality today. For the time being, these technologies still confront major constraints and limitations in their development.

It seems unlikely that quantum cryptography will ever enable truly perfect security, given the perhaps inevitable human and engineering challenges, along with remaining vulnerabilities to exploitation.

At present, quantum computing, while approaching the symbolic milestone of “quantum supremacy,” faces a long road ahead, due to challenges of scaling and error correction.

Certain quantum devices, for sensing, metrology, and positioning, may be quite useful but could enable fairly incremental, evolutionary improvements relative to the full range of alternatives.

There are also reasons to view critically the oft-hyped advances, such as in quantum radar, that Chinese official media disclose (especially in English), since reporting on such apparent progress could variously be intended for signaling or perhaps even misdirection.

National Institute of Standards and Technology (NIST) neutral-atom quantum processors — prototype devices which designers are trying to develop into full-fledged quantum computers  https://www.flickr.com/photos/usnistgov/5940500587/

Although China’s advances and ambitions should be taken quite seriously – particularly considering the talent and resources evidently mobilized to advance these objectives – the U.S. military may also be well postured to leverage quantum technology on the future battlefield.

 

Inevitably, the timeframe for the actual operationalization of these technologies is challenging to evaluate, especially because a significant proportion of the relevant research may be occurring in secret.

For that reason, it is also difficult to determine with confidence whether the U.S. or China is truly leading in the advancement of various disciplines of quantum science.

Moreover, beyond concerns of competition between the U.S. and China, exciting research is occurring worldwide, from Canada and Europe to Australia, often with tech companies and start-ups at the forefront of the development and commercialization of these technologies.

Looking forward, the trajectory of this second quantum revolution will play out over decades to come. Future successes will require sustained investments, such as those China is actively pursuing in the range of tens of billions of dollars.

As the Chinese military and defense industry start testing and experimenting with quantum technology, the U.S. military should also explore further the potential – and evaluate the limitations – of these capabilities, including through deepening public-private partnership.

As China challenges American leadership in innovation, the U.S. military and government should recognize the real risks of future surprises that could result from truly ‘made in China’ innovation, while also taking full advantage of the opportunities to impose surprise upon strategic competitors.

The above blog post is based on the recently published Center for a New American Security (CNAS) report entitled Quantum Hegemony? – China’s Ambitions and the Challenges to U.S. Innovation Leadership, co-authored by Ms. Elsa Kania and  Mr. John Costello.  Mad Scientist believes that this report is the best primer on the current state of quantum technology.  Note that quantum science – communication, computing, and sensing – was previously addressed by the Mad Scientist Laboratory as a Pink Flamingo.

Ms. Kania was proclaimed an official Mad Scientist following her presentation on PLA Human-Machine Integration at the Bio Convergence and Soldier 2050 Conference at SRI International, Menlo Park, 8-9 March 2018.  Her podcast from this event, China’s Quest for Enhanced Military Technology, is hosted by Modern War Institute.

Disclaimer: The views expressed in this article belong to the author alone and do not represent the Department of Defense, the U.S. Army, or the U.S. Army Training and Doctrine Command.

Ms. Kania is an Adjunct Fellow with the Technology and National Security Program at CNAS.

83. A Primer on Humanity: Iron Man versus Terminator

[Editor’s Note: Mad Scientist Laboratory is pleased to present a post by guest blogger MAJ(P) Kelly McCoy, U.S. Army Training and Doctrine Command (TRADOC), with a theme familiar to anyone who has ever debated super powers in a schoolyard during recess. Yet despite its familiarity, it remains a serious question as we seek to modernize the U.S. Army in light of our pacing threat adversaries. The question of “human-in-the-loop” versus “human-out-of-the-loop” is an extremely timely and cogent question.]

Iron Man versus Terminator — who would win? It is a debate that challenges morality, firepower, ingenuity, and pop culture prowess. But when it comes down to brass tacks, who would really win and what does that say about us?

Mad Scientist maintains that:

  • Today: Mano a mano, Iron Man’s human ingenuity, grit, and irrationality would carry the day; however…
  • In the Future: Facing the entire Skynet distributed neural net, Iron Man’s human-in-the-loop would be overwhelmed by a coordinated, swarming attack of Terminators.
Soldier in Iron Man-like exoskeleton prototype suit

Iron Man is the super-empowered human utilizing Artificial Intelligence (AI) — Just A Rather Very Intelligent System or JARVIS — to augment the synthesizing of data and robotics to increase strength, speed, and lethality. Iron Man utilizes autonomous systems, but maintains a human-in-the-loop for lethality decisions. Conversely, the Terminator is pure machine – with AI at the helm for all decision-making. Terminators are built for specific purposes – and for this case let’s assume these robotic soldiers are designed specifically for urban warfare. Finally, strength, lethality, cyber vulnerabilities, and modularity of capabilities between Iron Man and Terminator are assumed to be relatively equal to each other.

Up front, Iron Man is constrained by individual human bias, retention and application of training, and physical and mental fatigue. Heading into the fight, the human behind a super powered robotic enhancing suit will make decisions based on their own biases. How does one respond to too much information or not enough? How do they react when needing to respond while wrestling with the details of what needs to be remembered at the right time and space? Compounding this is the retention and application of the individual human’s training leading up to this point. Have they successfully undergone enough repetitions to mitigate their biases and arrive at the best solution and response? Finally, our most human vulnerability is physical and mental fatigue. Without adding in psychoactive drugs, how would you respond to taking the Graduate Record Examinations (GRE) while simultaneously winning a combatives match? How long would you last before you are mentally and physically exhausted?

Terminator / Source: http://pngimg.com/download/29789

What the human faces is a Terminator that removes bias and optimizes responses through machine learning, access to a network of knowledge, options, and capabilities, and relentless speed in processing information. How much better would a Soldier be with their biases removed and the ability to apply the full library of lessons learned, processing the available information that contextualizes the environment without cognitive overload, and arriving at the optimum decision based on the outcomes of thousands of scenarios?

Iron Man arrives at this fight with irrationality and ingenuity; the ability to quickly adapt to complex problems and environments; tenacity; and morality that is uniquely human. Given this, the Terminator is faced with an adversary who can not only adapt, but also persevere with utter unpredictability. And here the Terminator’s weaknesses come to light. Their algorithms are matched to an environment – but environments can change and render algorithms obsolete. Their energy sources are finite – where humans can run on empty, Terminators power off. Finally, there are always glitches and vulnerabilities. Autonomous systems depend on the environment they are coded for – if you know how to corrupt the environment, you can corrupt the system.

Ultimately the question of Iron Man versus Terminator is a question of time and human value and worth. In time, it is likely that the Iron Man will fall in the first fight. However, the victor is never determined in the first fight, but the last. If you believe in human ingenuity, grit, irrationality, and consideration, the last fight is the true test of what it means to be human.

Note:  Nothing in this blog is intended as an implied or explicit endorsement of the “Iron Man” or “Terminator” franchises on the part of the Department of Defense, the U.S. Army, or TRADOC.

Kelly McCoy is a U.S. Army strategist officer and a member of the Military Leadership Circle. A blessed husband and proud father, when he has time he is either brewing beer, roasting coffee, or maintaining his blog (Drink Beer; Kill War at: https://medium.com/@DrnkBrKllWr). The views expressed in this article belong to the author alone and do not represent the Department of Defense.

82. Bias and Machine Learning

[Editor’s Note:  Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]

“We thought that we had the answers, it was the questions we had wrong” – Bono, U2

Source: www.vpnsrus.com via flickr

As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is just that: an ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi of Boston University recently described algorithms as incapable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.
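How data becomes "the ultimate determining factor" can be shown with a deliberately simple sketch. The scenario and numbers below are invented for illustration: a trivial frequency-based "model" trained on historically biased selection records reproduces that bias for two otherwise identical candidates, with no biased intent anywhere in the code itself:

```python
from collections import Counter

# Invented historical records: (group, qualified, selected).
# The labels encode a past human bias: qualified candidates from
# group "B" were selected far less often than those from group "A".
history = ([("A", True, True)] * 90 + [("A", True, False)] * 10
           + [("B", True, True)] * 40 + [("B", True, False)] * 60)

def train(records):
    """Memorize the majority label for each (group, qualified) pair, as
    a stand-in for any model that fits the patterns in its training data."""
    votes = {}
    for group, qualified, selected in records:
        votes.setdefault((group, qualified), Counter())[selected] += 1
    return {k: counts.most_common(1)[0][0] for k, counts in votes.items()}

model = train(history)

# Two equally qualified candidates; the model's answers differ only
# because the group attribute correlated with past biased outcomes.
print(model[("A", True)])  # True  (selected)
print(model[("B", True)])  # False (rejected)
```

No line of this code mentions a preference for either group; the bias arrives entirely through the training data, which is exactly the mechanism the questions below ask the community to weigh.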

Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).

“It is difficult to make predictions, particularly about the future.” This quote has been attributed to anyone ranging from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.

The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.

1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that is discriminative towards any specific gender(s), ethnicity(ies), race(s), or even age(s)?

2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regards to bias?

3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?

4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?

For additional information impacting on this important discussion, please see the following:

An Appropriate Level of Trust… blog post

Ethical Dilemmas of Future Warfare blog post

Ethics and the Future of War panel discussion video

81. “Maddest” Guest Blogger!

[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!

Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!

In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the latter half of FY18. In recognition of his achievement, Sam will receive much coveted Mad Scientist swag.

While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped,  developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation.  Russia is learning how to integrate robotic lethal ground combat systems….

Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]

Russia’s Forpost UAV (licensed copy of IAI Searcher II) in Khmeimim, Syria; Source: https://t.co/PcNgJ811O8

Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.

Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”

Uran-9 combat UGV at Victory Day 2018 Parade in Red Square; Source: independent.co.uk

But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine whether they can meet battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create the demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs.  The MOD is also running conferences, such as the annual “Robotization of the Armed Forces,” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.

Uran-9 Combat UGV, Source: nationalinterest.org

This brings us to one of the more interesting developments in Russian UGVs. Then-Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.

Just as importantly, it was designed to operate at a distance of up to three kilometers (about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled and it was definitely designed to survive battlefield realities….

Uran-9; Source: Defence-Blog.com

However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:

• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.

• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.

• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.

• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.

• The OCH-4 optical system did not allow for adequate detection of adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.

Uran-9 undergoing testing; Source: YouTube

• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which basically defeated its very purpose as a combat vehicle.

• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.

On the one hand, these many failures are a sign that this much-discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it gets put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.

However, the MOD has been testing this Uran-9 for a while now, showing videos of this machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria had to do with the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly released testing video probably foretold some of the Syrian glitches – in this particular one, the Uran-9 is shown firing its machine guns while moving, but its cannon was fired only when the vehicle was stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles around, not the kind of complex terrain or dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions – unless, of course, the Syrian tests result in its use being effectively limited in future combat.

Russian Soratnik UGV

On the other hand, so many failures at once point to much larger issues with the Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological aspects that are ubiquitous worldwide at this point in the global development of similar unmanned systems:

• Low level of current UGV autonomy;

• Low level of automation of command and control processes of UGV management, including repairs and maintenance;

• Low communication range, and;

• Problems associated with “friend or foe” target identification.

Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:

• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with this UGV in real-time.

• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).

• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.

• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.

The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:

• Creating secure communication channels;

• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;

• Developing miniaturized and effective ISR components;

• Integrating automated command and control systems, and;

• Better optics, electronics and data processing systems.

According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC):  the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operate autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.

Source: rusmilitary.wordpress.com

Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a sign for developers of other Russian unmanned ground vehicles – like Soratnik, Vihr, and Nerehta — since it displays the full range of deficiencies that take place outside of well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment — this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during Uran-9 tests exposed a serious combat weakness.

It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later time, if it can perform its mission autonomously in the rapidly-changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards achieving such operational goals for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in one-time attack mode against fortified adversary positions or firing points track closely with the ways that Western military analysts think such weapons could be used in combat.

Source: Nikolai Novichkov / Orbis Defense

The Uran-9 is still a test bed, and much has to take place before it could be successfully integrated into the current Russian concept of operations. We could expect more eye-opening “lessons learned” from the potential combat deployment of this and other UGVs. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.

To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.

Source: Block13 by djahal; DeviantArt.com

For another perspective on Russian military innovation, please read Mr. Ray Finch’s guest post “The Tenth Man” — Russia’s Era Military Innovation Technopark.

Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference.  The views expressed here are his own.

80. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Gartner Hype Cycle / Source:  Nicole Saraco Loddo, Gartner

1. “5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies,” by Kasey Panetta, Gartner, 16 August 2018.

Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Eco-Systems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.

[Gartner disclaimer:  Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]

Artificial Intelligence by GLAS-8 / Source: Flickr

2. “Should Evil AI Research Be Published? Five Experts Weigh In,” by Dan Robitzski, Futurism, 27 August 2018.

The following rhetorical (for now) question was posed to the “AI Race and Societal Impacts” panel during last month’s Joint Multi-Conference on Human-Level Artificial Intelligence in Prague, Czech Republic:

“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”

The panel’s responses ranged from the controlling — “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic — “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial — “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”

While no consensus was reached, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game changing technologies. Unlike nuclear, biological, and chemical weapons, no internationally agreed and implemented control protocols can be applied to AI, as there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operations (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC, with its Military-Civil Fusion Centers, and integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!

Scales Sword Of Justice / Source: https://www.maxpixel.net/

3. “Can Justice be blind when it comes to machine learning? Researchers present findings at ICML 2018,” The Alan Turing Institute, 11 July 2018.

Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. This conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and practitioners develop fair and privacy-preserving data analyses. Echoing ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commanders’ decisions) — especially lethal decisions. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, write the very programs that will enable machines to learn and make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine-based learning outputs are verified and understood to preclude the inadvertent introduction of human biases.  Read the full report here.
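The “bias in, bias out” dynamic described above can be illustrated with a minimal sketch (not drawn from the report — the data and names here are entirely synthetic): a model fit to historically biased decisions will faithfully reproduce that bias, even with no malicious intent in the code itself.

```python
# Illustrative sketch: how bias in historical training data propagates
# into a learned model. All data below is synthetic and hypothetical.
from collections import defaultdict

# Synthetic "historical" decisions: (group, qualified, approved).
# Group B applicants were approved less often even when qualified.
history = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", True, True),
    ("B", True, False), ("B", True, True), ("B", False, False), ("B", True, False),
]

# A naive "model": memorize the historical approval rate for each
# (group, qualified) pair and approve when it exceeds 50 percent.
stats = defaultdict(lambda: [0, 0])  # key -> [approvals, total]
for group, qualified, approved in history:
    stats[(group, qualified)][0] += int(approved)
    stats[(group, qualified)][1] += 1

def predict(group, qualified):
    approvals, total = stats[(group, qualified)]
    return approvals / total > 0.5

# Two equally qualified applicants receive different outcomes by group:
print(predict("A", True))  # True  -- approved
print(predict("B", True))  # False -- denied; the historical bias is reproduced
```

Nothing in the model references the protected attribute maliciously; the skew in the training labels alone is enough to produce discriminatory outputs, which is why verification of learned outputs matters.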

Robot PNG / Source: pngimg.com

4. “Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans,” by Katyanna Quach, The Register, 3 August 2018.

In a study published in PLOS ONE, researchers found that a robot’s personality affected a human’s decision-making. Participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant to resist shutting it down. The participants’ actions were then recorded. Unexpectedly, a large number of participants resisted shutting down the functional robots after they made their plea, as opposed to the sociable ones. This is significant: beyond the unexpected result, it shows that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities that mimic and emulate human behavior, despite knowing they are robotic. If the Army believes its Soldiers will be heavily accompanied and augmented by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.

IoT / Source: Pixabay

5. “Spending on Internet of Things May More Than Double to Over Half a Trillion Dollars,” by Aaron Pressman, Fortune, 8 August 2018.

While the advent of the Internet brought computing and communication deeper into households worldwide, the smartphone revolution brought about the concept of constant personal interconnectivity. Today and into the future, not only are humans being connected to the global commons via their smart devices, but a multitude of devices, vehicles, and various accessories are being integrated into the Internet of Things (IoT). We have previously addressed the IoT as a game changing technology. The IoT is composed of trillions of internet-linked items, creating both opportunities and vulnerabilities. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).

Large companies are expected to quickly grow their spending on Internet-connected devices (i.e., appliances, home devices [such as Google Home, Alexa, etc.], various sensors) to approximately $520 billion. This is a massive investment into what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it is likely that it will expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):

– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data; allowing service members to use personal devices; and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising some critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.

– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.

6. “Battlefield Internet: A Plan for Securing Cyberspace,” by Michèle Flournoy and Michael Sulmeyer, Foreign Affairs, September/October 2018. Review submitted by Ms. Marie Murphy.

With the possibility of a “cyber Pearl Harbor” looming ever larger, intelligence officials warn of the rising danger of cyber attacks. The effects of these attacks have already been felt around the world. They have the power to break the trust people have in institutions, companies, and governments, as they act in the undefined gray zone between peace and all-out war. The military implications are quite clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence communications and materiel and personnel networks. Besides the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have felt the effects of cyber attacks, while others remain reluctant to invest in cyber protection measures. In this way, civilians are affected by acts of cyber warfare, and attacks on a country may be directed not at the opposing military but at a state’s civilian population, as in the case of the power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to combat cyber threats most effectively.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

79. Character vs. Nature of Warfare: What We Can Learn (Again) from Clausewitz

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]

No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.

Source:  Office of the Director of National Intelligence

Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]

Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”

Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.

The nature of something is its basic makeup. It is, at its core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl von Clausewitz, for his views on the matter.

Source: Tetsell’s Blog. https://tetsell.wordpress.com/2014/10/13/clausewitz/

He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.

The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past and remains unfulfilled. The chaos of war — its fog, friction, and chance — will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to the age-old practices of denial, deception, and camouflage.

 

The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.

Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.

Drone Wars are Coming / Source: USNI Proceedings, July 2017, Vol. 143 / 7 /  1,373

Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.

Source: Lockheed Martin

Not to mention the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is what tools they use to execute that mission.

To learn more about the changing character of warfare:

– Read the TRADOC G-2’s The Operational Environment and the Changing Character of Warfare paper.

– Watch The Changing Character of Future Warfare video.

Additionally, please note that the content from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018, is now posted and available for your review:

– Read the “Top Ten” Takeaways from the Learning in 2050 Conference.

– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.

– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.

LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.

Note:  The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916.  Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War.  (Source:  Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)

_______________________________________________________

[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6.
[II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, machine waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/.
[III] Ibid.
[IV] Carl von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85.
[V] Ibid, 87.
[VI] Ibid.
[VII] Ibid, 86.
[VIII] John Allen, Amir Hussain, “On Hyper-War,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.

78. The Classified Mind – The Cyber Pearl Harbor of 2034

[Editor’s Note: Mad Scientist Laboratory is pleased to publish the following post by guest blogger Dr. Jan Kallberg, faculty member, United States Military Academy at West Point, and Research Scientist with the Army Cyber Institute at West Point. His post serves as a cautionary tale regarding our finite intellectual resources and the associated existential threat in failing to protect them!]

Preface: Based on my experience in cybersecurity, and in migrating to the broader cyber field, there have always been those exceptional individuals who have a hard-to-replicate ability to see the challenge early on, create a technical solution, and know how to play it in the right order for maximum impact. They are out there – the Einsteins, Oppenheimers, and Fermis of cyber. The arrival of Artificial Intelligence increases our reliance on these highly capable individuals, because someone must set the rules and boundaries and point out the trajectory for Artificial Intelligence at its initiation.

Source: https://thebulletin.org/2017/10/neuroscience-and-the-new-weapons-of-the-mind/

As an industrial society, we tend to see technology and the information that feeds it as the weapons – and ignore the few humans who have a large-scale direct impact. Even if identified as a weapon, how do you make a human mind classified? Can we protect these high-ability individuals who in the digital world are weapons – not tools, but compilers of capability – or are we still focused on the tools? Why do we see only weapons of steel and electronics and not the weaponized mind? I believe firmly that we underestimate the importance of Applicable Intelligence – the ability to play the cyber engagement in the optimal order. Adversaries are often good observers because they are scouting for our weak spots. I set the stage for the following post in 2034: close enough to be realistic, and far enough out for things to happen, when our adversaries are betting that we rely more on a few minds than we are willing to accept.

Post:  In a not-too-distant future, on the 20th of August 2034, a peer adversary’s first strategic moves are the targeted killings of fewer than twenty individuals as they go about their daily lives:  watching a 3-D printer make a protein sandwich at a breakfast restaurant; stepping out from the downtown Chicago monorail; or taking a taste of a poison-filled retro Jolt Cola. In the gray zone – when the geopolitical temperature increases, but we are not yet at war – our adversary acts quickly and expedites a limited number of targeted killings within the United States of persons who are unknown to the mass media and the general public and have only one thing in common: Applicable Intelligence (AI).

The ability to apply is a far greater asset than the technology itself. Cyber and card games have one thing in common: the order in which you play your cards matters. In cyber, the tools are publicly available – anyone can download them from the Internet and use them – but the weaponization of the tools occurs when they are used by someone who understands how to play them in the optimal order. These minds are different because they see an opportunity to exploit in a digital fog of war where others don’t or can’t see it. They address problems unburdened by traditional thinking, in new and innovative ways, maximizing the dual-purpose nature of digital tools, and can create tangible cyber effects.

It is Applicable Intelligence (AI) that creates the procedures and the application of tools, combining simple digital software into sets that converge into digitally lethal weapons. This AI is the intelligence to mix, match, tweak, and arrange dual-purpose software. In 2034, it is as if you had the supernatural ability to create a thermonuclear bomb from what you can find at Kroger or Albertsons.

Sadly we missed it; we didn’t see it. We never left the 20th century. Our adversary saw it clearly and at the dawn of conflict killed off the weaponized minds, without discretion, and with no concern for international law or morality.

These intellects are weapons of growing strategic magnitude. In 2034, the United States missed the importance of these few intellects. This error left them unprotected.

All of our efforts instead focused on what they delivered – the applications and the technology – which was hidden in secret vaults and only discussed in sensitive compartmented information facilities. We classify to the highest level to ensure the confidentiality and integrity of our cyber capabilities. Meanwhile, on the most critical component, the militarized intellect, we put no value, because it is human. In a society marinated in an engineering mindset, humans are like desk space, electricity, and broadband: a commodity that is an input to the production of the technical machinery. The marveled-at technical machinery is the only thing we care about today, in 2018 – and, as it turned out, in 2034 as well.

We are stuck in how we think, and we are unable to see it coming, but our adversaries see it. At a systematic level, we are unable to see humans as the weapon itself, maybe because we like to see weapons as something tangible, painted black, tan, or green, that can be stored and brought to action when needed. As the armory of the war of 1812, as the stockpile of 1943, and as the launch pad of 2034. Arms are made of steel, or fancier metals, with electronics – we failed in 2034 to see weapons made of corn, steak, and an added combative intellect.

General Nakasone stated in 2017, “Our best ones [coders] are 50 or 100 times better than their peers,” and continued “Is there a sniper or is there a pilot or is there a submarine driver or anyone else in the military 50 times their peer? I would tell you, some coders we have are 50 times their peers.” In reality, the success of cyber and cyber operations is highly dependent not on the tools or toolsets but instead upon the super-empowered individual that General Nakasone calls “the 50-x coder.”

Manhattan Project K-25 Gaseous Diffusion Process Building, Oak Ridge, TN / Source: atomicarchive.com

There were clear signals that we could have noticed before General Nakasone pointed it out so clearly in 2017. The United States’ Manhattan Project during World War II had, at its peak, 125,000 workers on the payroll, but the intellects that drove the project to success and completion were few. The difference between the Manhattan Project and the future of cyber is that we were unable to see the human as a weapon, locked in by our path dependency as an engineering society that hails the technology and forgets the importance of the humans behind it.

J. Robert Oppenheimer – the militarized intellect behind the Manhattan Project / Source: Life Magazine

America’s endless love of technical innovations and advanced machinery reflects in a nation that has celebrated mechanical wonders and engineered solutions since its creation. For America, technical wonders are a sign of prosperity, ability, self-determination, and advancement, a story that started in the early days of the colonies, followed by the intercontinental railroad, the Panama Canal, the manufacturing era, the moon landing, and all the way to the autonomous systems, drones, and robots. In a default mindset, there is always a tool, an automated process, a software, or a set of technical steps that can solve a problem or act.

The same mindset sees humans merely as an input to technology, so humans are interchangeable and replaceable. In 2034 – the era of digital conflicts and wars between algorithms, with engagements occurring at machine speed and no time for leadership or human interaction – it is the intellects that design these systems and understand how to play them that matter. We didn’t see it.

In 2034, with fewer than twenty bodies piled up after targeted killings, resides the Cyber Pearl Harbor. It was not imploding critical infrastructure, a tsunami of cyber attacks, nor hackers flooding our financial systems, but instead traditional lead and gunpowder. The super-empowered individuals are gone, and we are stuck in a digital war at speeds we don’t understand, unable to play it in the right order, and with limited intellectual torque to see through the fog of war provided by an exploding kaleidoscope of nodes and digital engagements.

Source: Shutterstock

If you enjoyed this post, read our Personalized Warfare post.

Dr. Jan Kallberg is currently an Assistant Professor of Political Science with the Department of Social Sciences, United States Military Academy at West Point, and a Research Scientist with the Army Cyber Institute at West Point. He was earlier a researcher with the Cyber Security Research and Education Institute, The University of Texas at Dallas, and is a part-time faculty member at George Washington University. Dr. Kallberg earned his Ph.D. and MA from the University of Texas at Dallas and earned a JD/LL.M. from Juridicum Law School, Stockholm University. Dr. Kallberg is a certified CISSP, ISACA CISM, and serves as the Managing Editor for the Cyber Defense Review. He has authored papers in the Strategic Studies Quarterly, Joint Forces Quarterly, IEEE IT Professional, IEEE Access, IEEE Security and Privacy, and IEEE Technology and Society.

77. “The Tenth Man” — Russia’s Era Military Innovation Technopark

[Editor’s Note: Mad Scientist Laboratory is pleased to publish the second in our series of “The Tenth Man” posts (read the first one here). This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Future Operational Environment.

Today’s post is by guest blogger Mr. Ray Finch addressing Russia’s on-going efforts to develop a military innovation center —  Era Military Innovation Technopark — near the city of Anapa (Krasnodar Region) on the northern coast of the Black Sea.  Per The Operational Environment and the Changing Character of Future Warfare, “Russia can be considered our ‘pacing threat,’ and will be our most capable potential foe for at least the first half of the Era of Accelerated Human Progress [now through 2035]. It will remain a key adversary through the Era of Contested Equality [2035-2050].” So any Russian attempts at innovation to create “A Militarized Silicon Valley in Russia” should be sounding alarms throughout the NATO Alliance, right?  Well, maybe not….]

(Please note that several of Mr. Finch’s embedded links in the post below are best accessed using non-DoD networks.)

Only a Mad Russian Scientist could write the paragraph below:

Russia Resurgent, Source: Bill Butcher, The Economist

If all goes according to plan, in October 2035 the Kremlin will host a gala birthday party to commemorate President Putin’s 83rd birthday. Ever since the Russian leader began receiving special biosynthetic plasma developed by military scientists at the country’s premier Era Technopolis Center in Anapa, the president’s health and overall fitness have come to resemble those of a 45-year-old. This development was just one in a series of innovations which have helped to transform – not just the Kremlin leader – but the entire country.  By focusing its best and brightest on new technologies, Russia has become the global leader in information and telecommunication systems, artificial intelligence, robotic complexes, supercomputers, technical vision and pattern recognition, information security, nanotechnology and nanomaterials, energy tech and the technology life-support cycle, as well as bioengineering, biosynthetic, and biosensor technologies. In many respects, Russia is now the strongest country in the world.

While this certainly echoes the current Kremlin propaganda, a more sober analysis regarding the outcomes of the Era Military Innovation Technopark in Anapa (Krasnodar Region) ought to consider those systemic factors which will likely retard its future development. Below are five reasons why Putin and Russia will likely have less to celebrate in 2035.

President Putin and Defense Minister Shoigu being briefed on Technopark-Era, Kremlin, 23 Feb 2018. Source: http://kremlin.ru/events/president/news/56923, CC BY 4.0.

You can’t have milk without a cow

The primary reason that the Kremlin’s attempt to create breakthrough innovations at the Era Technopark will result in disappointment stems from the lack of a robust social structure to support such innovations. And it’s not simply the absence of good roads or adequate healthcare. As renowned MIT scientist Dr. Loren R. Graham recently pointed out, the Kremlin leadership wants to enjoy the “milk” of technology without worrying about maintaining the “cow” that produces it. Graham elaborates by noting that even though Russian scientists have often been at the forefront of technological innovation, the country’s stifling bureaucracy and broken legal system prevent these discoveries from ever bearing fruit and keep Russian scientists and innovators from profiting from them. This dilemma leads to the second factor.

Brain drain

Despite all of the Kremlin’s patriotic hype over the past several years, many young and talented Russians are voting with their feet and pursuing careers abroad. As senior Russia analyst Dr. Gordon M. Hahn noted, “instead of voting for pro-democratic forces and/or fomenting unrest, Russia’s discontented, highly educated, highly skilled university graduates tend to move abroad to find suitable work.” And even though the US is maligned on a daily basis in the Kremlin-supported Russian media, many of these smart, young Russians are moving to America. Indeed, according to a recent Radio Free Europe/Radio Liberty (RFE/RL) report, “the number of asylum applications by Russian citizens in the United States hit a 24-year high in 2017, jumping nearly 40 percent from the previous year and continuing an upward march that began after Russian President Vladimir Putin returned to the Kremlin in 2012.” These smart, young Russians believe that their country is headed in the wrong direction and are looking for opportunities elsewhere.

Everything turns out to be a Kalashnikov

There’s no doubt that Russian scientists and technicians are capable of creating effective weapon systems. President Putin’s recent display of military muscle was not a mere campaign stratagem, but rather a reminder to his Western “partners” that since Russia remains armed to the teeth, his country deserves respect. And there’s little question that the new Era Technopark will help to create advanced weapon systems of “which there is no analogous version in the world.” But that’s just the point. While Russia is famous for its tanks, artillery, and rocket systems, it has struggled to create anything that might qualify as a technological marvel in the civilian sector. As some Russian observers have put it, “no matter what the state tries to develop, it ends up being a Kalashnikov.”

Soviet AK-47. Type 2 made from 1951 to 1954/55. Source: http://www.dodmedia.osd.mil Public Domain

The Boss knows what’s best

The current Kremlin leadership now parades itself at the forefront of a global conservative and traditionalist movement. In its favorite narrative, the conniving US is forever trying to weaken Russia (and other autocratic countries) by infecting them with a liberal bacillus, often referred to as a “color revolution.” In this rendition, Russia was contaminated by the democratic disease during the 1990s, only to find itself weakened and taken advantage of by America.

Since then, the Kremlin leadership has retained the form of democracy but removed its essence. Elections are held, ballots are cast, but the winner is pre-determined from above. So far, the Russian population has played along with this charade, but at some point, perhaps during an economic crisis, the increasingly plugged-in Russian population might demand a more representative form of government. Regardless, while this top-down, conservative model is ideal for maintaining control and staging major events, it lacks the freedom essential to innovation. Moreover, such a quasi-autocratic system tends to promote Russia’s most serious challenge.

The cancer of corruption

Despite the façade of a uniformed, law-governed state, Russia continues to rank near the bottom of the global corruption index. According to a recent Russian report, “90 percent of entrepreneurs have encountered corruption at least once.” Private Russian companies will likely think twice before deciding to invest in the Era Technopark, unless, of course, the Kremlin makes them an offer they cannot refuse. Moreover, as suggested earlier, the young Era scientists may not be fully committed, understanding that the “milk” of their technological discoveries will likely be expropriated by their uniformed bosses.

The Era Technopark is not scheduled to be fully operational until 2020, and the elevated rhetoric over its innovative mandate will likely prompt concern among some US defense officials. While the center could advance Russian military technology over the next 15-25 years, it is doubtful that Era will usher in a new era for Russia.

If you enjoyed this edition of the “Tenth Man”:

– Learn more about Russia’s Era Military Innovation Technopark in the April 2018 edition of the TRADOC G-2’s Foreign Military Studies Office (FMSO) OE Watch, Volume 8, Issue 4, pages 10-11.

– Read Mad Scientist Sam Bendett‘s guest blog post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward.

Ray Finch works as a Eurasian Analyst at the Foreign Military Studies Office. He’s a former Army officer (Artillery and Russian FAO).


76. “Top Ten” Takeaways from the Learning in 2050 Conference

On 8-9 August 2018, the U.S. Army Training and Doctrine Command (TRADOC) co-hosted the Learning in 2050 Conference with Georgetown University’s Center for Security Studies in Washington, DC.  Leading scientists, innovators, and scholars from academia, industry, and the government gathered to address future learning techniques and technologies that are critical in preparing for Army operations in the mid-21st century against adversaries in rapidly evolving battlespaces.  The new and innovative learning capabilities addressed at this conference will enable our Soldiers and Leaders to act quickly and decisively in a changing Operational Environment (OE) with fleeting windows of opportunity and more advanced and lethal technologies.

We have identified the following “Top 10” takeaways related to Learning in 2050:

1. Many learning technologies built around commercial products are available today (Amazon Alexa, Smart Phones, Immersion tech, Avatar experts) for introduction into our training and educational institutions. Many of these technologies are part of the Army’s concept for a Synthetic Training Environment (STE) and there are nascent manifestations already.  For these technologies to be widely available to the future Army, the Army of today must be prepared to address:

– The collection and exploitation of as much data as possible;

– The policy concerns with security and privacy;

– The cultural challenges associated with changing the dynamic between learners and their instructors, teachers, and coaches; and

– Adequate funding to produce capabilities at scale, so that digital tutors and other technologies (Augmented Reality [AR] / Virtual Reality [VR], etc.), together with the skills required in a dynamic future (such as critical thinking and groupthink mitigation), are widely available or perhaps ubiquitous.

2. Personalization and individualization of learning will be paramount in the future; training that today takes place in physical schools will become the exception, with learning instead occurring at the point of need. This transformation will not be limited to lesson plans or even learning styles:

– Intelligent tutors, Artificial Intelligence (AI)-driven instruction, and targeted mentoring/tutoring;

– Tailored timing and pacing of learning (when, where, and for what duration best suits the individual learner or group of learners?);

– Collaborative learning, with teams partnering to learn;

Targeted Neuroplasticity Training / Source: DARPA

– Various media and technologies that enable enhanced or accelerated learning (Targeted Neuroplasticity Training (TNT), haptic sensors, AR/VR, lifelong personal digital learning partners, pharmaceuticals, etc.) at scale;

– Project-oriented learning: when today’s high school students are building apps, they are asked, “What positive change do you want to have?” One example is an open table for Bully Free Tables. In the future, learners will learn through working on projects;

– Project-oriented learning will lead to a convergence of learning and operations, creating a chicken-and-egg relationship between learning (the chicken) and the mission or project (the egg); and

– Learning must be adapted to consciously address the desired, or extant, culture.

Drones Hanger / Source: Oshanin

3. Some jobs and skill sets have not even been articulated yet. Hobbies and recreational activities engaged in by kids and enthusiasts today could become occupations or Military Occupational Specialties (MOS’s) of the future (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer — think Minecraft and Fortnite with real-world physical implications). Some emerging trends in personalized warfare, big data, and virtual nations could bring about the necessity for more specialists that don’t currently exist (e.g., data protection and/or data erasure specialists).

Mechanical Animal / Source: Pinterest

4. The New Human (who will be born in 2032 and is the recruit of 2050) will be fundamentally different from the Old Human. The Chief of Staff of the Army (CSA) in 2050 is currently a young Captain in our Army today. While we are arguably cyborgs today (with integrated electronics in our pockets and on our wrists), the New Humans will likely be cyborgs in the truest sense of the word, with some having embedded sensors. How will those New Humans learn? What will they need to learn? Why would they want to learn something? These are all critical questions the Army will continue to ask over the next several decades.

Source: iLearn

5. Learning is continuous and self-initiated, while education is a point in time and is “done to you” by someone else. Learning may result in a certificate or degree – similar to education – or can lead to the foundations of a skill or a deeper understanding of operations and activity. How will organizations quantify learning in the future? Will degrees or even certifications still be the benchmark for talent and capability?

Source: The Data Feed Toolbox

6. Learning isn’t slowing down; it’s speeding up. More and more things are becoming instantaneous, and humans have no concept of extreme speed. Tesla cars can update their software overnight, so owners effectively get into a different car each day. What happens to our Soldiers when military vehicles change just as iteratively? This may force a paradigm shift wherein learning means tightening local and global connections (tough to do considering government/military network security, firewalls, vulnerabilities, and constraints); viewing technology as extended brains, all networked together (similar to Dr. Alexander Kott’s look at the Internet of Battlefield Things [IoBT]); and leveraging these capabilities to enable Soldier learning at extremely high speeds.

Source: Connecting Universes

7. While there are a number of emerging concepts and technologies to improve and accelerate learning (TNT, extended reality, personalized learning models, and intelligent tutors), the focus, training stimuli, data sets, and desired outcomes all have to be properly tuned and aligned, or the Learner could end up losing correct behavioral habits (developing maladaptive plasticity), developing incorrect or skewed behaviors (relative to the desired capability), or internalizing cognitive biases.

Source: TechCrunch

8. Geolocation may become increasingly less important when it comes to learning in the future. If Apple required users to go to Silicon Valley to get trained on an iPhone, it would be exponentially less successful. But this is how the Army currently trains. The ubiquity of connectivity, the growth of the Internet of Things (and eventually the Internet of Everything), the introduction of universal interfaces (think one XBOX controller capable of controlling 10 different types of vehicles), major advances in modeling and simulations, and social media innovation all converge to minimize the need for teachers, students, mentors, and learners to be physically collocated.

Transdisciplinarity at Work / Source: https://www.cetl.hku.hk

9. Significant questions have to be asked about the specificity of training children at a young age: we may be overemphasizing STEM early on and not helping them learn across a wider spectrum. We need Transdisciplinarity in the coming generations.

10. 3-D reconstructions of bases, training areas, cities, and military objectives coupled with mixed reality, haptic sensing, and intuitive controls have the potential to dramatically change how Soldiers train and learn when it comes to not only single performance tasks (e.g., marksmanship, vehicle driving, reconnaissance, etc.) but also in dense urban operations, multi-unit maneuver, and command and control.

Heavy Duty by rOEN911 / Source: DeviantArt

During the next two weeks, we will be posting the videos from each of the Learning in 2050 Conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel and the associated slides on our Mad Scientist APAN site — stay connected here at the Mad Scientist Laboratory.

One of the main thrusts in the Mad Scientist lines of effort is harnessing and cultivating the Intellect of the Nation. In this vein, we are asking Learning in 2050 Conference participants (both in person and online) to share their ideas on the presentations and topic. Please consider:

– What topics were most important to you personally and professionally?

– What were your main takeaways from the event?

– What topics did you want the speakers to elaborate on further?

– What were the implications for your given occupation/career field from the findings of the event?

Your input will be of critical importance to our analysis and products that will have significant impact on the future of the force in design, structuring, planning, and training!  Please submit your input to Mad Scientist at: usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.