101. TRADOC 2028

[Editor’s Note:  The U.S. Army Training and Doctrine Command (TRADOC) mission is to recruit, train, and educate the Army, driving constant improvement and change to ensure the Total Army can deter, fight, and win on any battlefield now and into the future. Today’s post addresses how TRADOC will need to transform to ensure that it continues to accomplish this mission with the next generation of Soldiers.]

Per The Army Vision:

“The Army of 2028 will be ready to deploy, fight, and win decisively against any adversary, anytime and anywhere, in a joint, multi-domain, high-intensity conflict, while simultaneously deterring others and maintaining its ability to conduct irregular warfare. The Army will do this through the employment of modern manned and unmanned ground combat vehicles, aircraft, sustainment systems, and weapons, coupled with robust combined arms formations and tactics based on a modern warfighting doctrine and centered on exceptional Leaders and Soldiers of unmatched lethality.” GEN Mark A. Milley, Chief of Staff of the Army, and Dr. Mark T. Esper, Secretary of the Army, June 7, 2018.

In order to achieve this vision, the Army of 2028 needs a TRADOC 2028 that will recruit, organize, and train future Soldiers and Leaders to deploy, fight, and win decisively on any future battlefield. This TRADOC 2028 must account for: 1) the generational differences in learning styles; 2) emerging learning support technologies; and 3) how the Army will need to train and learn to maintain cognitive overmatch on the future battlefield. The Future Operational Environment, characterized by the speeding up of warfare and learning, will challenge the artificial boundaries between institutional and organizational learning and training (e.g., Brigade mobile training teams [MTTs] as a Standard Operating Procedure [SOP]).

Soldiers will be “New Humans” – beyond digital natives, they will embrace embedded and integrated sensors, Artificial Intelligence (AI), mixed reality, and ubiquitous communications. “Old Humans” adapted their learning style to accommodate new technologies (e.g., Classroom XXI). New Humans’ learning style will be a result of these technologies, as they will have been born into a world where they code, hack, rely on intelligent tutors and expert avatars (think the nextgen of Alexa / Siri), and learn increasingly via immersive Augmented / Virtual Reality (AR/VR), gaming, simulations, and YouTube-like tutorials, rather than the desiccated lectures and interminable PowerPoint presentations of yore. TRADOC must ensure that our cadre of instructors know how to use (and more importantly, embrace and effectively incorporate) these new learning technologies into their programs of instruction, until their ranks are filled with “New Humans.”

Delivering training for new, as yet undefined MOSs and skillsets. The Army will have to compete with Industry to recruit the requisite talent for Army 2028. These recruits may enter service with fundamental technical skills and knowledge (e.g., drone creator/maintainer, 3-D printing specialist, digital and cyber fortification construction engineer) that may result in a flattening of the initial learning curve and facilitate more time for training “Green” tradecraft. Cyber recruiting will remain critical, as TRADOC will face an increasingly difficult recruiting environment as the Army competes to recruit new skillsets, from training deep learning tools to robotic repair. Initiatives to appeal to gamers (e.g., the Army’s eSports team) will have to be reflected in new approaches to all TRADOC Lines of Effort. AI may assist in identifying potential recruits with the requisite aptitudes.

“TRADOC in your ruck.” Personal AI assistants will bring Commanders and their staffs all of the collected expertise of today’s institutional force. Conducting machine-speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. AI’s ability to quickly sift through and analyze the plethora of input received from across the battlefield, fused with the lessons learned data from thousands of previous engagements, will lessen the commander’s dependence on having had direct personal combat experience with conditions similar to his current fight when making command decisions.

Learning in the future will be personalized and individualized, with targeted learning at the point of need. Training must be customizable and temporally optimized, delivered in a style that matches the individual learner rather than a one-size-fits-all approach. These learning environments will need to bring gaming and micro-simulations to individual learners so they can experiment. Similar tools could improve tactical war-gaming and support Commanders’ decision making.  This will disrupt the traditional career maps that have defined success for the current generation of Army Leaders.  In the future, courses will be much less defined by the rank/grade of the Soldiers attending them.

Geolocation of Training will lose importance. We must stop building and start connecting. Emerging technologies – many accounted for in the Synthetic Training Environment (STE) – will connect experts and Soldiers, creating a seamless training continuum from the training base to home station to the foxhole. Investment should focus on technologies connecting and delivering expertise to the Soldier rather than on brick-and-mortar infrastructure.  This vision of TRADOC 2028 will require “Big Data” to effectively deliver personalized, immersive training to our Soldiers and Leaders at the point of need, and it comes with associated privacy issues that will have to be addressed.

In conclusion, TRADOC 2028 sets the conditions to win warfare at machine speed. This speeding up of warfare and learning will challenge the artificial boundaries between institutional and organizational learning and training.

If you enjoyed this post, please also see:

– Mr. Elliott Masie’s presentation on Dynamic Readiness from the Learning in 2050 Conference, co-hosted with Georgetown University’s Center for Security Studies in Washington, DC, on 8-9 August 2018.

– “Top Ten” Takeaways from the Learning in 2050 Conference.

97. The Cryptoruble as a Stepping Stone to Digital Sovereignty

“By 2038, there won’t just be one internet — there will be many, split along national lines” — An Xiao Mina, 2038 podcast, Episode 2, New York Magazine Intelligencer, 25 October 2018.

[Editor’s Note:  While the prediction above is drawn from a podcast that posits an emerging tech cold war between China and the U.S., the quest for digital sovereignty and national cryptocurrencies is an emerging global trend that portends the fracturing of the contemporary internet into national intranets.  This trend erodes the prevailing Post-Cold War direction towards globalization.  In today’s post, Mad Scientist Laboratory welcomes back guest blogger Dr. Mica Hall, who addresses Russia’s move to adopt a national cryptocurrency, the cryptoruble, as a means of asserting its digital sovereignty and ensuring national security.  The advent of the cryptoruble will have geopolitical ramifications far beyond Mother Russia’s borders, potentially ushering in an era of economic hegemony over those states that embrace this supranational cryptocurrency. (Note:  Some of the embedded links in this post are best accessed using non-DoD networks.)]

At the nexus of monetary policy, geopolitics, and information control is Russia’s quest to expand its digital sovereignty. At the October 2017 meeting of the Security Council, “the FSB [Federal Security Service] asked the government to develop an independent ‘Internet’ infrastructure for BRICS nations [Brazil, Russia, India, China, South Africa], which would continue to work in the event the global Internet malfunctions.” 1 Security Council members argued the Internet’s threat to national security is due to:

“… the increased capabilities of Western nations to conduct offensive operations in the informational space as well as the increased readiness to exercise these capabilities.”2

This echoes the sentiment of Dmitry Peskov, Putin’s Press Secretary, who stated in 2014,

“We all know who the chief administrator of the global Internet is. And due to its volatility, we have to think about how to ensure our national security.”3

At that time, the Ministry of Communications (MinCom) had just tested a Russian back-up to the Internet to support a national “Intranet,” lest Russia be left vulnerable if the global Domain Name System (DNS) servers are attacked. MinCom conducted “a major exercise in which it simulated ‘switching off’ global Internet services,” and in 2017, the Security Council decided to create just such a backup system “which would not be subject to control by international organizations” for use by the BRICS countries.4

While an Internet alternative (or Alternet) may be sold to the Russian public as a way to combat the West’s purported advantage in the information war, curb excessive dependency on global DNS, and protect the country from the foreign puppet masters of the Internet that “pose a serious threat to Russia’s security,”5 numerous experts doubt Russia’s actual ability to realize the plan, given its track record.

Take the Eurasian Economic Union (EAEU), for example, an international organization comprising Russia, Kazakhstan, Kyrgyzstan, Armenia, and Belarus. Russia should be able to influence the EAEU even more than the BRICS countries, given its leading role in establishing the group. The EAEU was stood up in January 2016, and by December, “MinCom and other government agencies were given the order to develop and confirm a program for the ‘Digital Economy,’ including plans to develop [it in] the EAEU.”6 As Slavin observes, commercial ventures have already naturally evolved to embrace the actual digital economy: “The digital revolution has already occurred, business long ago switched to electronic interactions,”7 while the state has yet to realize its Digital Economy platform.

Changing the way the government does business has proven more difficult than changing the actual economy. According to Slavin, “The fact that Russia still has not developed a system of digital signatures, that there’s no electronic interaction between government and business or between countries of the EAEU, and that agencies’ information systems are not integrated – all of that is a problem for the withered electronic government that just cannot seem to ripen.”8 The bridge between the state and the actual digital economy is still waiting for “legislation to support it and to recognize the full equality of electronic and paper forms.”9 Consequently, while the idea to create a supranational currency to be used in the EAEU has been floated many times, the countries within the organization have not been able to agree on what that currency would be.

The cryptoruble could be used to affect geopolitical relationships. In addition to wielding untraceable resources, Russia could also leverage this technology to join forces with some countries against others. According to the plan President Putin laid out upon announcing the launch of a cryptoruble, Russia would form a “single payment space” for the member states of the EAEU, based on “the use of new financial technologies, including the technology of distributed registries.”10 Notably, three months after the plan to establish a cryptoruble was announced, Russia’s Central Bank stated the value of working on establishing a supranational currency to be used either across the BRICS countries or across the EAEU, or both, instead of establishing a cryptoruble per se.11

This could significantly affect the balance of power not only in the region, but also in the world. Any country participating in such an economic agreement, however, would subject itself to being overrun by a new hegemony, that of the supranational currency.


As long as the state continues to cloak its digital sovereignty efforts in the mantle of national security – via the cryptoruble or the Yarovaya laws, which increase Internet surveillance – it can continue to constrict the flow of information without compunction. As Peskov stated, “It’s not about disconnecting Russia from the World Wide Web,” but about “protecting it from external influence.”12 After Presidents Putin and Trump met at the G20 Summit in July 2017, Communications Minister Nikiforov said the two countries would establish a working group “for the control and security of cyberspace,” which the U.S. Secretary of State said would “develop a framework for cybersecurity and a non-interference agreement.”13 Prime Minister Medvedev, however, said digitizing the economy is both “a matter of Russia’s global competitiveness and national security,”14 thus indicating Russia is focused not solely inward, but on a strategic competitive stance. Minister Nikiforov makes the linkage even clearer, stating, “In developing the economy, we need digital sovereignty,”15 indicating a need to fully control how the country interacts with the rest of the world in the digital age.

The Kremlin’s main proponent for digital sovereignty, Igor Ashmanov, claims, “Digital sovereignty is the right of the government to independently determine what is happening in their digital sphere. And make its own decisions.” He adds, “Only the Americans have complete digital sovereignty. China is growing its sovereignty. We are too.”16 According to Lebedev, “Various incarnations of digital sovereignty are integral to the public discourse in most countries,” and in recent years, “The idea of reining in global information flows and at least partially subjugating them to the control of certain traditional or not-so-traditional jurisdictions (the European Union, the nation-state, municipal administrations) has become more attractive.”17   In the Russian narrative, which portrays every nation as striving to gain the upper hand on the information battlefield, Ashmanov’s fear that, “The introduction of every new technology is another phase in the digital colonization of our country,”18 does not sound too far-fetched.

The conspiracy theorists to the right of the administration suggest the “global world order” represented by the International Monetary Fund intends to leave Russia out of its new replacement reference currency, saying “Big Brother is coming to blockchain.”19 Meanwhile, wikireality.ru reports the Russian government could limit web access in the name of national security, because the Internet “is a CIA project and the U.S. is using information wars to destroy governments,” using its “cybertroops.”20 As the site notes, the fight against terrorism has been invoked as a basis for establishing a black list of websites available within Russia. Just as U.S. citizens have expressed concerns over the level of surveillance made legal by the Patriot Act, so Russian netizens have expressed concerns over the Yarovaya laws and moves the state has made to facilitate information sovereignty.

According to the Financial Times, “This interest in cryptocurrencies shows Russia’s desire to take over an idea originally created without any government influence. It was like that with the Internet, which the Kremlin has recently learned to tame.”21 Meanwhile, a healthy contingent of Russian-language netizens continues to express their lack of faith in the national security argument, preferring to embrace a more classical skepticism, as reflected in comments in response to a 2017 post by msmash called, “From the Never-Say-Never-But-Never Department,” — “In Putin’s Russia, currency encrypts you!”22 To these netizens, the state looks set to continue to ratchet down on Internet traffic: “It’s really descriptive of just how totalitarian the country has become that they’re hard at work out-Chinaing China itself when it comes to control of the Internet,” but “China is actually enforcing those kind of laws against its people. In Russia, on the other hand, the severity of the laws is greatly mitigated by the fact that nobody gives a **** about the law.”23 In addition to suggesting personal security is a fair price to be paid for national security via surveillance and Internet laws, the state appears poised to argue all information about persons in the country, including about their finances, should also be “transparent” to fight terrorism and crime in general.

If you enjoyed reading this post, please also see:

Dr. Mica Hall is a Russian linguist and holds an MA and PhD in Slavic Linguistics and an MPA.

The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of the Army, DoD, or the U.S. Government.


1 Russia to Launch ‘Independent Internet’ for BRICS Nations – Report, 2017, RT.com, https://www.rt.com/politics/411156-russia-to-launch-independent-internet/, 28 November 2017.

2 Russia to Launch.

3 Russia to Launch.

4 Russia to Launch.

5 Russia to Launch.

6 Boris Slavin, 2017, People or Digits: Which One Do We Need More? vedomosti.ru, https://www.vedomosti.ru/opinion/articles/2017/01/17/673248-lyudi-tsifri-nuzhnee, 17 January 2017.

7 Slavin, People or Digits.

8 Slavin, People or Digits.

9 Slavin, People or Digits.

10 Kyree Leary, 2017, Vladimir Putin Just Revealed Russia’s Plans for Cryptocurrencies, futurism.com, https://futurism.com/vladimir-putin-just-revealed-russias-plans-for-cryptocurrencies/, 26 October 2017.

11 CB is Discussing Creating a Supranational Cryptocurrency Together With EAEU and BRICS, 2017, vedomosti.ru, https://www.vedomosti.ru/finance/news/2017/12/28/746856-sozdanie-kriptovalyuti-v-ramkah-eaes-i-briks-bank-rossii-v-2018-g, 28 December 2017.

12 Russia to Launch.

13 Russia and the US to Create a Working Group for the Regulation of Cyberspace, 2017, RIA Novosti, https://ria.ru/world/20170708/1498126496.html?=inj=1, 8 July 2017.

14 MinComSvyazi: We Need Digital Sovereignty to Develop the Economy, 2017, RIA Novosti, https://ria.ru/soceity/20170905/1501809181.html, 5 September 2017.

15 MinComSvyazi: We Need Digital Sovereignty.

16 Irina Besedovala, 2016, The Yarovaya Laws Will Save Us from the CIA, fontanka.ru, http://www.fontanka.ru/2016/10/22/061/, 22 October 2016.

17 Dmitry Lebedev, 2017, Digital Sovereignty à la Russe, opendemocracy.net, https://www.opendemocracy.net/od-russia/dmitry-lebedev/digital-sovereignty-a-la-russe, 3 November 2017.

18 Igor Ashmanov, 2017, The Recipe for Digital Sovereignty, Rossijskoe Agentstvo Novostej, http://www.ru-an.info/, 22 August 2017.

19 Global Elites’ Secret Plan for Cryptocurrencies, 2017, pravosudija.net, http://www.pravdosudija.net/article/sekretynyy-plan-globalnyh-elit-otnositelno-kriptovalyut, 5 September 2017.

20 Information Sovereignty, 2017, wikireality.ru, http://www.wikireality.ru/wiki/Информационный_сувернитет, 28 March 2017.

21 FT: Russia Is Looking For A Way to “Cut Off” Cryptocurrencies, 2018, Russian RT, https://russian.rt.com/inotv/2018-01-02/FT-Rossiya-ishhet-sposob-ukrotit, 2 January 2018.

22 msmash, 2017, We’ll Never Legalize Bitcoin, Says Russian Minister, yro.slashdot.org, https://yro.slashdot.org/story/17/11/22/2111216/well-never-legalize-bitcoin-says-russian-minister, 22 November 2017.

23 We’ll Never Legalize Bitcoin.

96. Weaponizing an Economy: The Cryptoruble and Russia’s Dystopian Future

[Editor’s Note: The U.S. Army Training and Doctrine Command (TRADOC) G-2’s Mad Scientist Initiative tracks a number of emergent disruptive technologies that have the potential to impact the Future Operational Environment.  We have already seen a number of these technologies being applied by regimes as a means for social control and manipulation — China’s use of facial recognition cameras to surveil the Uighur population in Xinjiang province, and social credit scores to control the general population across the width and breadth of the Middle Kingdom (beginning in 2020) are but two examples.  Mad Scientist Laboratory is pleased to publish guest blogger Dr. Mica Hall‘s post addressing the potential societal, economic, and political disruptions posed by Russia’s embrace of cryptocurrency technology.  (Note:  Some of the embedded links in this post are best accessed using non-DoD networks.)]

Cryptocurrencies and Distributed Ledger Technology (DLT), including blockchain, have clear implications for the Future Operational Environment — affecting domestic infrastructure, the race for information sovereignty, domestic politics, and geopolitics. What may appear to be a purely economic factor is being used as a lever to affect state access to citizens’ personal information, control of information flows, and foreign relations both at a regional and global level.

Cryptocurrencies, untethered from traditional economic paradigms, can be used for illicit transactions in support of crime and terrorism, proliferation, countering sanctions, and potential existential economic threats. If money is an idea based on trust, understanding it is an information-related capability. A country’s degree of digital sovereignty can have both foreign policy and military consequences, so the race to control information is a significant effort in hybrid warfare.

Cryptocurrency in its “traditional” definition has three primary characteristics:

1) It is decentralized (i.e., the information is not held by one organization, such as a nation’s central bank or the Federal Reserve) and decisions to approve a payment and move “funds” from one account to another are made by multiple users, commonly known as “miners”;

2) Ownership of funds is anonymous – the system itself often does not require any identification for membership and a user’s identity is not identified in any way in the transfers (although identities could potentially be traced via IP addresses, credit card numbers, and e-mail addresses used); and

3) All transactions are transparent and immutable (unless overwritten by a longer chain) – once completed, everyone can see the accounts involved in the transaction, the amount, and when it was transferred.  Once transferred, there is (typically) no way to block the transfer, even if one party claims that the other did not provide the promised goods or services, or they were not of the commensurate quality, and there is no recourse regarding the seller/provider.
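The three properties above can be made concrete with a minimal, illustrative sketch of a hash-chained public ledger, written in Python. This is a toy model, not any real cryptocurrency's implementation: there is no mining, networking, or consensus voting, and the address strings are invented placeholders. It shows why the ledger is transparent (anyone holding the chain can re-verify every entry), why parties are only pseudonymous (they appear as opaque addresses), and why a completed transaction is effectively immutable (rewriting any past block breaks every subsequent hash link):

```python
import hashlib
import json

def sha256(data: str) -> str:
    return hashlib.sha256(data.encode()).hexdigest()

class Ledger:
    """Toy public ledger: each block stores the hash of its predecessor,
    so altering any past transaction invalidates the rest of the chain."""

    def __init__(self):
        genesis = {"index": 0, "tx": None, "prev": "0" * 64}
        genesis["hash"] = self._block_hash(genesis)
        self.chain = [genesis]

    @staticmethod
    def _block_hash(block) -> str:
        # Canonical serialization of the block's contents (not its own hash).
        payload = json.dumps(
            {k: block[k] for k in ("index", "tx", "prev")}, sort_keys=True)
        return sha256(payload)

    def add(self, sender: str, receiver: str, amount: float):
        # Pseudonymity: parties appear only as opaque address strings.
        block = {"index": len(self.chain),
                 "tx": {"from": sender, "to": receiver, "amount": amount},
                 "prev": self.chain[-1]["hash"]}
        block["hash"] = self._block_hash(block)
        self.chain.append(block)

    def verify(self) -> bool:
        # Transparency: any holder of the chain can re-check every link.
        for prev, cur in zip(self.chain, self.chain[1:]):
            if cur["prev"] != prev["hash"] or cur["hash"] != self._block_hash(cur):
                return False
        return True

ledger = Ledger()
ledger.add("addr_a9f3", "addr_77c1", 5.0)   # hypothetical addresses
ledger.add("addr_77c1", "addr_b042", 2.5)
assert ledger.verify()                       # untampered chain checks out
ledger.chain[1]["tx"]["amount"] = 500.0      # attempt to rewrite history
assert not ledger.verify()                   # tampering is detectable
```

A real network layers proof-of-work and a longest-chain rule on top of this structure, which is what the "unless overwritten by a longer chain" caveat in characteristic 3 refers to.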

President Putin has simultaneously scared the public by talking about the criminal potential of cryptocurrencies and promoted the Digital Economy.  He has announced plans to launch the cryptoruble as part of realizing his Digital Economy platform. If Putin’s administration implements a DLT-based national cryptocurrency and legislates that all Russian citizens convert to the new system by allowing only one way to participate in the economy (e.g., by removing paper rubles from circulation), it will have an open ledger of every citizen’s finances. The state could also use it to exclude state-identified dissidents completely from the economy.1

In a potential nightmare scenario, the elimination of the paper ruble would eliminate any ability for individuals to engage in anonymous transactions or even remain anonymous at all:

“Cash is the most important factor in people’s freedom and independence. If we turn away from cash voluntarily… we’ll become bio-objects who are that much more manipulable. And if you even squeak, you’ll become a pariah in the best case scenario, and homeless in the worst case scenario, with no way to support yourself.”2

This raises the question of whether the freedom of speech netizens currently enjoy might also disappear once each individual is fully trackable.

The beauty of the cryptoruble, from the administration’s standpoint, is that it “bring(s) under its control a technology in complete anarchy,”3 and provides the government access to Russian citizens’ information while doing so in the name of protecting citizens from criminals who use cash rubles to hide crimes, such as money laundering and terrorism.4

This technology would allow the Russian government to have complete control over currency inventory and flow via visibility over all money operations.  “It would be dumb to think that the authorities would pass up these fantastic opportunities.”5 As Polčák and Svantesson suggest,

“Data not only represent an integral part of the identity of a person, they also represent, together with other essentials, an integral part of the identity of a state. Keeping control over such data is equally important for both an individual and for a state to retain their sovereign existence.”6

The cryptoruble is the ultimate foil to any desire by individual citizens to protect their privacy and anonymity, providing for “protection” by the state for the greater good of all its citizens. In this way, President Putin’s Digital Economy project, a political platform, deftly works towards full digital sovereignty and information sovereignty on the foundation of technological sovereignty and in the name of national security.

The opinions expressed in the Russian-language media regarding what future the cryptoruble may portend run the gamut, with both supporters and dissenters agreeing on the significance of this level of government control of the economy.  Cryptoruble skeptics predict a dystopian future, warning of this transparent ledger system, “The President will know everything about everyone in the country – who paid who and how much.”7 In a way strangely similar to the current method of issuing social security numbers in the United States, @dimon777 suggests, “Newborns could be assigned cryptowallets at birth.”8

A state-issued blockchain currency could also bring order – via total control – to all government documentation processes. DLT has already been proposed as a system for recording real estate transactions, the argument being they would be processed faster than paper documentation and are a matter of public record.  While banks may process credit requests faster, a centralized information hub may actually provide all the information the state knows about the applicant at the touch of a button, via an “interagency electronic cooperation system,” with data on marriages and divorces, births, and deaths; “all the data about an applicant’s family situation;” data regarding the Pension Fund of the RF; “about their place of work and payments made into the fund;” and about their immigration status, in addition to their actual credit history.9

Once blockchain-based processes become the norm for doing business in Russia, several sources suggest the next step could be using biometrics to verify identity. Perhaps with the one added benefit of never having to remember a password again, the Russian banking system could soon move to a system of virtual identity verification via biometrics.10

In June 2016, President Putin announced plans to establish a “federal information system for biometric registration that would store data about ‘persons involved in terrorism and extremism'” and since then, the Russian authorities have been “increasingly active in their collection and use of various biometric data (fingerprints, DNA samples, photographs, etc.).”11

The justification provided for this data collection has been national security, yet the scope is broad, including cases covered by legislation on “defense, security, combatting terrorism, transport safety, anti-corruption, investigative activity, civil service, criminal enforcement legislation, requirements for entry and exit from the country, and citizenship,”12 so expanding the system even further is plausible.

One cryptocurrency that could be controlled if needed is Byteball, so called for the shape of its chains. Like “traditional” cryptocurrency payments, Byteball transactions take place cryptowallet to cryptowallet, yet Byteball has a parallel, non-transparent system called “blackbytes,” whose transactions are not visible in a public ledger and are untraceable. These coins could be used when transactions “need to be concealed, for example, in funding secret programs.”13 These are the only conditions under which Russia will embark on a cryptocurrency épopée – if it is fully controlled by the state.

As Telley suggests, “Cryptocurrencies must now be counted as an impactful part of the operational environment.”14 In the case of the cryptoruble, it is the nexus of the political, economic, social, information, and infrastructure effects that may manifest the greatest danger or the greatest change. While the Digital Economy program may resemble a simple slide backwards towards a centrally controlled economy, a DLT-based currency issued by the Russian Central Bank would allow the administration to wield a significant level of access to personal information, in addition to economic control.

For a deeper dive into this topic, go to the TRADOC G-2’s Foreign Military Studies Office (FMSO) OEWatch page and download Volume 8, Issue #1, January 2018, featuring a host of articles on Cryptocurrencies and Blockchains and their impact in nations around the world.

Also see the following guest blog posts addressing other potential disruptors that may affect the Future Operational Environment:

Dr. Mica Hall is a Russian linguist and holds an MA and PhD in Slavic Linguistics and an MPA.

The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of the Army, DoD, or the U.S. Government.


1 nalivaiko43 (2017), It’s Going to End up Being an Electronic Concentration Camp, golos.io, https://golos.io/ru–konczlagerx/@nalivaiko/elektronnyi-konclager-nacionalnykh-kriptovalyut, 16 October 2017.

2 nalivaiko43, It’s Going to End up.

3 Cryptoruble: What is it, Can I Buy it, When Are They Issuing it, and How Can I Use it to Make Money? kripto-rubl.ru, https://kripto-rubl.ru, 24 October 2017.

4 Fyodor Naumov, 2017, Digital Sovereignty: Why the Government Needs the Cryptoruble, Forbes.ru, http://www.forbes.ru/finansy-i-investicii/352381-cifrovoy-suvernitet-zachem-pravitelstvu-ponadobitsya-kriptorubl, 3 November 2017.

5 mr-kryply59, 2017, CryplyNews. Cryptoruble and Cryptoyuan, Two Bitcoin Killers, golos.io, https://golos.io/ru–bitkoin/@mr-kryply/cryplynews-kriptorubl-i-kriptoyuan-srazu-dva-ubiicy-bitkoina, 16 October 2017.

6 Radim Polčák and Dan Jerker Svantesson, 2017, Information Sovereignty: Data Privacy, Sovereign Powers and the Rule of Law, Northampton, MA, Edward Edgar Publishing.

7 @dimon777, 2017, Phantasmagoria about the Cryptoruble, golos.io, https://golos.io/ru–bitkoin/@dimon777/fantasmagoriya-o-kriptoruble, 25 August 2017.

8 @dimon777, Phantasmagoria about the Cryptoruble.

9 Nikolay Alekseenko, 2017, Blockchain without The Middleman: What Developments Does the Digital Economy Hold? realty.rbc.ru, https://realty.rbc.ru/news/59788fab9a7947d94ee1ddcb, 26 July 2017.

10 Alekseenko, Blockchain without the Middleman.

11 Agora International Human Rights Group, 2017, Russia under Surveillance 2017: How The Russian State Is Setting Up A System Of Total Control Over Its Citizens, http://en.agora.legal/articles/Report-of-Agora-International-%E2%80%98Russia-under-surveillance-2017%E2%80%99/6, 1 November 2017.

12 Agora, Russia under Surveillance 2017.

13 freeman39, 2017, The Cryptoruble Already Exists – It’s Called Byteball, golos.io, https://golos.io/ru–kriptorublx/@freeman39/kriptorubl-uzhe-sushestvuet-eto-byteball, 24 October 2017.

14 MAJ Chris Telley, 2018, A Coin for the Tsar: The Two Disruptive Sides of Cryptocurrency, Small Wars Journal, http://smallwarsjournal.com/jrnl/art/coin-tsar-two-disruptive-sides-cryptocurrency, 15 January 2018.

92. Ground Warfare in 2050: How It Might Look

[Editor’s Note: Mad Scientist Laboratory is pleased to review proclaimed Mad Scientist Dr. Alexander Kott’s paper, Ground Warfare in 2050: How It Might Look, published by the US Army Research Laboratory in August 2018. This paper offers readers a technological forecast of autonomous intelligent agents and robots and their potential for employment on future battlefields in the year 2050. In this post, Mad Scientist reviews Dr. Kott’s conclusions and provides links to our previously published posts that support his findings.]

In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future. They are:

•  The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight between autonomous sensors and countermeasures will feature prominently on future battlefields – “a robot-on-robot affair.”

See our related discussions regarding Concealment in the Fundamental Questions Affecting Army Modernization post and Finders vs Hiders in our Timeless Competitions post.

•  The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.

See our descriptions of the future recon / strike complex in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post, and Robotics and Swarms / Semi Autonomous capabilities in our Potential Game Changers post.

These two trends will, in turn, drive the following forecasted developments:

•  Increasing reliance on unmanned systems, “with humans becoming a minority within the overall force, being further dispersed across the battlefield.”

See Mr. Jeff Becker’s post on The Multi-Domain “Dragoon” Squad: A Hyper-enabled Combat System, and Mr. Mike Matson’s Demons in the Tall Grass, both of which envision future tactical units employing greater numbers of autonomous combat systems; as well as Mr. Sam Bendett’s post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward, addressing the contemporary hurdles that one of our strategic competitors must address in operationalizing Unmanned Ground Vehicles.

•  Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”

See our discussion of what warfare at machine-speed looks like in our Advanced Engagement Battlespace and the “Hyperactive Battlefield”.

Source: Fausto De Martini / Kill Command

•  Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.


See our Megacities: Future Challenges and Responses and Integrated Sensors: The Critical Element in Future Complex Environment Warfare posts that address future complex operational environments.

Source: www.defenceimages.mod.uk

•  The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”

See our discussion of future Power capabilities on our Potential Game Changers handout.

•  “To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”

See our discussion of Multi-Domain Swarming in our Black Swans and Pink Flamingos post.

•  All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”

See MAJ Chris Telley’s post addressing Artificial Intelligence (AI) as an Information Operations tool in his Influence at Machine Speed: The Coming of AI-Powered Propaganda.

•  The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.

See Mr. Ian Sullivan’s discussion of AI vs. AI and how the decisive edge accrues to the combatant with more autonomous decision-action concurrency in his Lessons Learned in Assessing the Operational Environment post.

If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.

Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role, he provides leadership in developing ARL’s technical strategy, maintaining the technical quality of ARL research, and representing ARL to the external technical community. He has published over 80 technical papers and has served as the initiator, co-author, and primary editor of over ten books, including most recently Cyber Defense and Situational Awareness (2015), Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).

87. LikeWar — The Weaponization of Social Media

[Editor’s Note: Regular readers will note that one of our enduring themes is the Internet’s emergence as a central disruptive innovation. With the publication of proclaimed Mad Scientist P.W. Singer and co-author Emerson T. Brooking’s LikeWar – The Weaponization of Social Media, Mad Scientist Laboratory addresses what is arguably the most powerful manifestation of the internet — Social Media — and how it is inextricably linked to the future of warfare. Messrs. Singer and Brooking’s new book is essential reading if today’s Leaders (both in and out of uniform) are to understand, defend against, and ultimately wield the non-kinetic, yet violently manipulative effects of Social Media.]

“The modern internet is not just a network, but an ecosystem of 4 billion souls…. Those who can manipulate this swirling tide, steer its direction and flow, can…. accomplish astonishing evil. They can foment violence, stoke hate, sow falsehoods, incite wars, and even erode the pillars of democracy itself.”

As noted in The Operational Environment and the Changing Character of Future Warfare, Social Media and the Internet of Things have spawned a revolution that has connected “all aspects of human engagement where cognition, ideas, and perceptions, are almost instantaneously available.” While this connectivity has been a powerfully beneficial global change agent, it has also amplified human foibles and biases. Authors Singer and Brooking note that humans by nature are social creatures who tend to gravitate into like-minded groups. We “Like” and share things online that resonate with our own beliefs. We also tend to believe what resonates with us and our community of friends.

“Whether the cause is dangerous (support for a terrorist group), mundane (support for a political party), or inane (belief that the earth is flat), social media guarantees that you can find others who share your views and even be steered to them by the platforms’ own algorithms… As groups of like-minded people clump together, they grow to resemble fanatical tribes, trapped in echo chambers of their own design.”

Weaponization of Information

The advent of Social Media less than 20 years ago has changed how we wage war.

“Attacking an adversary’s most important center of gravity — the spirit of its people — no longer requires massive bombing runs or reams of propaganda. All it takes is a smartphone and a few idle seconds. And anyone can do it.”

Nation states and non-state actors alike are leveraging social media to manipulate like-minded populations’ cognitive biases to influence the dynamics of conflict. This continuous on-line fight for your mind represents “not a single information war but thousands and potentially millions of them.”


LikeWar provides a host of examples describing how contemporary belligerents are weaponizing Social Media to augment their operations in the physical domain. Regarding the battle to defeat ISIS and re-take Mosul, authors Singer and Brooking note that:

“Social media had changed not just the message, but the dynamics of conflict. How information was being accessed, manipulated, and spread had taken on new power. Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle — or eliminate the need for battle entirely — what, exactly, could be considered ‘war’ at all?”

Even American gang members are entering the fray as super-empowered individuals, leveraging social media to instigate killings via “Facebook drilling” in Chicago or “wallbanging” in Los Angeles.

And it is only “a handful of Silicon Valley engineers,” with their brother and sister technocrats in Beijing, St. Petersburg, and a few other global hubs of Twenty-first Century innovation that are forging and then unleashing the code that is democratizing this virtual warfare.

Artificial Intelligence (AI)-Enabled Information Operations

Seeing is believing, right? Not anymore! Previously clumsy efforts to photoshop images, fabricate grainy videos, and pass off poorly executed CGI have given way to sophisticated Deepfakes, which use AI algorithms to create nearly undetectable fake images, videos, and audio tracks that then go viral on-line to dupe, deceive, and manipulate. This year, FakeApp was launched as free software, enabling anyone with an artificial neural network and a graphics processor to create and share bogus videos via Social Media. Each Deepfake video that:

“… you watch, like, or share represents a tiny ripple on the information battlefield, privileging one side at the expense of others. Your online attention and actions are thus both targets and ammunition in an unending series of skirmishes.”

Just as AI is facilitating these distortions in reality, the race is on to harness AI to detect and delete these fakes and prevent “the end of truth.”

If you enjoyed this post:

– Listen to the accompanying playlist composed by P.W. Singer while reading LikeWar.

– Watch P.W. Singer’s presentation on Meta Trends – Technology, and a New Kind of Race from Day 2 of the Mad Scientist Strategic Security Environment in 2025 and Beyond Conference at Georgetown University, 9 August 2016.

– Read more about virtual warfare in the following Mad Scientist Laboratory blog posts:

— MAJ Chris Telley’s Influence at Machine Speed: The Coming of AI-Powered Propaganda

— COL(R) Stefan J. Banach’s Virtual War – A Revolution in Human Affairs (Parts I and II)

— Mad Scientist Initiative’s Personalized Warfare

— Ms. Marie Murphy’s Virtual Nations: An Emerging Supranational Cyber Trend

— Lt Col Jennifer Snow’s Alternet: What Happens When the Internet is No Longer Trusted?

85. Benefits, Vulnerabilities, and the Ethics of Soldier Enhancement

[Editor’s Note: The United States Army Training and Doctrine Command (TRADOC) co-hosted the Mad Scientist Bio Convergence and Soldier 2050 Conference with SRI International at their Menlo Park, CA, campus on 8-9 March 2018, where participants discussed the advent of new biotechnologies and the benefits, vulnerabilities, and ethics associated with Soldier enhancement for the Army of the Future. The following post is an excerpt from this conference’s final report.]

Source:  Max Pixel

Advances in synthetic biology will likely enhance future Soldier performance – speed, strength, endurance, and resilience – but will also introduce vulnerabilities, such as genomic targeting, that an adversary can exploit or that can potentially harm the individual undergoing the enhancement.


Emerging synthetic biology tools – e.g., CRISPR, TALEN, and ZFN – present an opportunity to engineer Soldiers’ DNA and enhance their abilities. Bioengineering is becoming easier and cheaper as a bevy of developments reduce biotechnology transaction costs in gene reading, writing, and editing. [1] Due to the ever-increasing speed and lethality of the future battlefield, combatants will need cognitive and physical enhancement to survive and thrive.

Cognitive enhancement could make Soldiers more lethal, more decisive, and perhaps more resilient. Using neurofeedback, a process that allows users to see their brain activity in real time, one can identify ideal brain states and use them to enhance an individual’s mental performance. By mapping identified expert brain states and presenting them to novices, trainees can rapidly improve their acuity after just a few training sessions. [2] Further, studies are exploring the possibility of directly emulating those expert brain states with non-invasive EEG caps that could improve performance almost immediately. [3] Dr. Amy Kruse, the Chief Scientific Officer at the Platypus Institute, referred to this phenomenon as “sitting on a gold mine of brains.”

There is also the potential to change and improve Soldiers’ physical attributes. Scientists can develop drugs and specific dietary plans, and potentially use genetic editing, to improve speed, strength, agility, and endurance.

Source: Andrew Herr, CEO Helicase

In order to fully leverage the capability of human performance enhancement, Andrew Herr, CEO of Helicase and an Adjunct Fellow at CNAS, suggested that human performance R&D be moved out of the medical field and become its own research area due to its differing objectives and the convergence between varying technologies.

Soldiers, Airmen, Marines, and Sailors are already trying to enhance themselves with commercial products – often containing unknown or unsafe ingredients – so it is incumbent on the U.S. military to, at the very least, help those who want to improve.

However, a host of new vulnerabilities at the genetic level accompany this revolutionary leap in human evolution. If adversaries can map the human genome and more thoroughly scan and understand the brain, they can target genomes and brains in those same ways. Soldiers could become incredibly vulnerable at the genomic level, forcing the Army not only to protect Soldiers with body armor and armored vehicles, but also to protect their identities, genomes, and physiologies.

Adversaries will exploit all biological enhancements to gain competitive advantage over U.S. forces. Targeted genome editing technology such as CRISPR will enable adversarial threats to employ super-empowered Soldiers on the battlefield and target specific populations with bioweapons. U.S. adversaries may use technologies recklessly to achieve short term gains with no consideration of long range effects. [4] [5]

Numerous ethical questions come with the enhancement of Soldiers, such as the moral acceptability of the Army making permanent enhancements to Soldiers, the responsibility for returning transitioning Soldiers to a “baseline human,” and how a “baseline human” is legally defined.

Transhumanism H+ symbol by Antonu / Source:  https://commons.wikimedia.org/wiki/File:Transhumanism_h%2B.svg

By altering, enhancing, and augmenting the biology of the human Soldier, the United States Army will potentially enter uncharted ethical territory. Instead of issuing items to Soldiers to complement their physical and cognitive assets, by 2050, the U.S. Army may have the will and the means to issue them increased biological abilities in those areas. The future implications and the limits or thresholds for enhancement have not yet been considered. The military is already willing to correct the vision of certain members – laser eye surgery, for example – a practice that could accurately be referred to as human enhancement, so precisely defining where the threshold lies will be important. It is already known that other countries, and possible adversaries, are willing to cross lines that we are not. Russia, most recently, was banned from competition in the 2018 Winter Olympics for widespread performance-enhancing drug violations that were believed to be supported by the Russian Government. [6] Those drugs violate the spirit of competition in the Olympics, but no such spirit exists in warfare.

Another consideration is whether or not Soldier enhancements are permanent. By enhancing Soldiers’ faculties, the Army is, in fact, enhancing their lethality or their ability to defeat the enemy. What happens with these enhancements when a Soldier leaves the Army, and whether the Army can or should remove them, is an open question. As stated previously, the Army is willing and able to improve eyesight, but does not revert that eyesight back to its original state after the individual has separated. Some possible moral questions surrounding Soldier enhancement include:

• If the Army were to increase a Soldier’s stamina, visual acuity, resistance to disease, and pain tolerance, making them a more lethal warfighter, is it incumbent upon the Army to remove those enhancements?

• If the Soldier later used those enhancements in civilian life for nefarious purposes, would the Army be responsible?

Answers to these legal questions are beyond the scope of this paper, but they can be considered now, before these new technologies become widespread.

Image by Leonardo da Vinci / Source: Flickr

If the Army decides to reverse certain Soldier enhancements, it likely will need to determine the definition of a “baseline human.” This would establish norms for features, traits, and abilities that can be permanently enhanced and which must be removed before leaving service. This would undoubtedly involve both legal and moral challenges.


The complete Mad Scientist Bio Convergence and Soldier 2050 Final Report can be read here.

To learn more about the ramifications of Soldier enhancement, please go to:

– Dr. Amy Kruse’s Human 2.0 podcast, hosted by our colleagues at Modern War Institute.

– The Ethics and the Future of War panel discussion, facilitated by LTG Jim Dubik (USA-Ret.) from Day 2 (26 July 2017) of the Mad Scientist Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University.


[1] Ahmad, Zarah and Stephanie Larson, “The DNA Utility in Military Environments,” slide 5, presented at Mad Scientist Bio Convergence and the Soldier 2050 Conference, 8 March 2018.
[2] Kruse, Amy, “Human 2.0 Upgrading Human Performance,” slide 12, presented at Mad Scientist Bio Convergence and the Soldier 2050 Conference, 8 March 2018.
[3] https://www.frontiersin.org/articles/10.3389/fnhum.2016.00034/full
[4] https://www.technologyreview.com/the-download/610034/china-is-already-gene-editing-a-lot-of-humans/
[5] https://www.c4isrnet.com/unmanned/2018/05/07/russia-confirms-its-armed-robot-tank-was-in-syria/
[6] https://www.washingtonpost.com/sports/russia-banned-from-2018-olympics-following-doping-allegations/2017/12/05/9ab49790-d9d4-11e7-b859-fb0995360725_story.html?noredirect=on&utm_term=.d12db68f42d1

82. Bias and Machine Learning

[Editor’s Note:  Today’s post poses four central questions to our Mad Scientist community of action regarding bias in machine learning and the associated ramifications for artificial intelligence, autonomy, lethality, and decision-making on future warfighting.]

“We thought that we had the answers, it was the questions we had wrong.” – Bono, U2

Source: www.vpnsrus.com via flickr

As machine learning and deep learning algorithms become more commonplace, it is clear that the utopian ideal of a bias-neutral Artificial Intelligence (AI) is exactly that: an ideal. These algorithms have underlying biases embedded in their coding, imparted by their human programmers (either consciously or unconsciously), and they can develop further biases during the machine learning and training process. Dr. Tolga Bolukbasi, Boston University, recently described algorithms as incapable of distinguishing right from wrong, unlike humans, who can judge their actions even when they act against ethical norms. For algorithms, data is the ultimate determining factor.

Realizing that algorithms supporting future Intelligence, Surveillance, and Reconnaissance (ISR) networks and Commander’s decision support aids will have inherent biases — what is the impact on future warfighting? This question is exceptionally relevant as Soldiers and Leaders consider the influence of biases in man-machine relationships, and their potential ramifications on the battlefield, especially with regard to the rules of engagement (i.e., mission execution and combat efficiency versus the proportional use of force and minimizing civilian casualties and collateral damage).
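As a toy illustration of how such data-driven bias arises (a hypothetical sketch; the sector names, labels, and counts are invented and do not describe any fielded system), a classifier that learns only from historical label frequencies will faithfully reproduce whatever skew its training data contains:

```python
# Toy sketch of data-driven bias (all data invented for illustration):
# a classifier that learns only label frequencies per feature value
# will reproduce whatever skew exists in its training set.
from collections import Counter

def train(examples):
    """Count how often each label was assigned to each feature value."""
    model = {}
    for feature, label in examples:
        model.setdefault(feature, Counter())[label] += 1
    return model

def predict(model, feature):
    """Return the label most frequently seen for this feature value."""
    return model[feature].most_common(1)[0][0]

# Skewed history: past analysts marked sightings in "sector A" hostile
# 90% of the time, and sightings in "sector B" hostile only 5% of the time.
history = ([("sector_A", "hostile")] * 90 + [("sector_A", "benign")] * 10
           + [("sector_B", "benign")] * 95 + [("sector_B", "hostile")] * 5)

model = train(history)
print(predict(model, "sector_A"))  # hostile -- the historical skew decides
print(predict(model, "sector_B"))  # benign
```

No line of this code is malicious or "wrong," yet every future sighting in sector A is pre-judged hostile by the data alone, which is precisely the kind of embedded bias the questions below probe.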

“It is difficult to make predictions, particularly about the future.” This quote has been attributed to anyone from Mark Twain to Niels Bohr to Yogi Berra. Point prediction is a sucker’s bet. However, asking the right questions about biases in AI is incredibly important.

The Mad Scientist Initiative has developed a series of questions to help frame the discussion regarding what biases we are willing to accept and in what cases they will be acceptable. Feel free to share your observations and questions in the comments section of this blog post (below) or email them to us at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil.

1) What types of bias are we willing to accept? Will a so-called cognitive bias that forgoes a logical, deliberative process be allowable? What about a programming bias that is discriminative towards any specific gender(s), ethnicity(ies), race(s), or even age(s)?

2) In what types of systems will we accept biases? Will machine learning applications in supposedly non-lethal warfighting functions like sustainment, protection, and intelligence be given more leeway with regard to bias?

3) Will the biases in machine learning programming and algorithms be more apparent and/or outweigh the inherent biases of humans-in-the-loop? How will perceived biases affect trust and reliance on machine learning applications?

4) At what point will the pace of innovation and introduction of this technology on the battlefield by our adversaries cause us to forego concerns of bias and rapidly field systems to gain a decisive Observe, Orient, Decide, and Act (OODA) loop and combat speed advantage on the Hyperactive Battlefield?

For additional information impacting on this important discussion, please see the following:

An Appropriate Level of Trust… blog post

Ethical Dilemmas of Future Warfare blog post

Ethics and the Future of War panel discussion video

81. “Maddest” Guest Blogger!

[Editor’s Note: Since its inception last November, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued almost 60K views by over 30K visitors from around the world!

Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. Almost half (36 out of 81) of the blog posts published have been submitted by guest bloggers. We challenge you to contribute your ideas!

In particular, we would like to recognize Mad Scientist Mr. Sam Bendett by re-posting his submission entitled “Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward,” originally published on 25 June 2018. This post generated a record number of visits and views during the past six-month period. Consequently, we hereby declare Sam to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the latter half of FY18. In recognition of his achievement, Sam will receive much-coveted Mad Scientist swag.

While Sam’s post revealed the many challenges Russia has experienced in combat testing the Uran-9 Unmanned Ground Vehicle (UGV) in Syria, it is important to note that Russia has designed, prototyped, developed, and operationally tested this system in a combat environment, demonstrating a disciplined and proactive approach to innovation. Russia is learning how to integrate robotic lethal ground combat systems….

Enjoy re-visiting Sam’s informative post below, noting that many of the embedded links are best accessed using non-DoD networks.]

Russia’s Forpost UAV (licensed copy of IAI Searcher II) in Khmeimim, Syria; Source: https://t.co/PcNgJ811O8

Russia, like many other nations, is investing in the development of various unmanned military systems. The Russian defense establishment sees such systems as mission multipliers, highlighting two major advantages: saving soldiers’ lives and making military missions more effective. In this context, Russian developments are similar to those taking place around the world. Various militaries are fielding unmanned systems for surveillance, intelligence, logistics, or attack missions to make their forces or campaigns more effective. In fact, the Russian military has been successfully using Unmanned Aerial Vehicles (UAVs) in training and combat since 2013. It has used them with great effect in Syria, where these UAVs flew more mission hours than manned aircraft in various Intelligence, Surveillance, and Reconnaissance (ISR) roles.

Russia is also busy designing and testing many unmanned maritime and ground vehicles for various missions with diverse payloads. To underscore the significance of this emerging technology for the nation’s armed forces, Russian Defense Minister Sergei Shoigu recently stated that the serial production of ground combat robots for the military “may start already this year.”

Uran-9 combat UGV at Victory Day 2018 Parade in Red Square; Source: independent.co.uk

But before we see swarms of ground combat robots with red stars emblazoned on them, the Russian military will put these weapons through rigorous testing in order to determine if they can correspond to battlefield realities. Russian military manufacturers and contractors are not that different from their American counterparts in sometimes talking up the capabilities of their creations, seeking to create demand for their newest achievement before there is proof that such technology can stand up to harsh battlefield conditions. It is for this reason that the Russian Ministry of Defense (MOD) finally established several centers, such as the Main Research and Testing Center of Robotics, tasked with working alongside the defense-industrial sector to create unmanned military technology standards and better communicate warfighters’ needs. The MOD is also running conferences, such as the annual “Robotization of the Armed Forces,” that bring together military and industry decision-makers for a better dialogue on the development, growth, and evolution of the nation’s unmanned military systems.

Uran-9 Combat UGV, Source: nationalinterest.org

This brings us to one of the more interesting developments in Russian UGVs. Then Russian Deputy Defense Minister Borisov recently confirmed that the Uran-9 combat UGV was tested in Syria, which would be the first time this much-discussed system was put into combat. This particular UGV is supposed to operate in teams of three or four and is armed with a 30mm cannon and 7.62 mm machine guns, along with a variety of other weapons.

Just as importantly, it was designed to operate at a distance of up to three kilometers (3000 meters or about two miles) from its operator — a range that could be extended up to six kilometers for a team of these UGVs. This range is absolutely crucial for these machines, which must be operated remotely. Russian designers are developing operational electronics capable of rendering the Uran-9 more autonomous, thereby moving the operators to a safer distance from actual combat engagement. The size of a small tank, the Uran-9 impressed the international military community when first unveiled and it was definitely designed to survive battlefield realities….

Uran-9; Source: Defence-Blog.com

However, just as “no plan survives first contact with the enemy,” the Uran-9, though built to withstand punishment, came up short in its first trial run in Syria. In a candid admission, Andrei P. Anisimov, Senior Research Officer at the 3rd Central Research Institute of the Ministry of Defense, reported on the Uran-9’s critical combat deficiencies during the 10th All-Russian Scientific Conference entitled “Actual Problems of Defense and Security,” held in April 2018. In particular, the following issues came to light during testing:

• Instead of its intended range of several kilometers, the Uran-9 could only be operated at a distance of “300-500 meters among low-rise buildings,” wiping out up to nine-tenths of its total operational range.

• There were “17 cases of short-term (up to one minute) and two cases of long-term (up to 1.5 hours) loss of Uran-9 control” recorded, which rendered this UGV practically useless on the battlefield.

• The UGV’s running gear had problems – there were issues with supporting and guiding rollers, as well as suspension springs.

• The electro-optic stations allowed for reconnaissance and identification of potential targets at a range of no more than two kilometers.

• The OCH-4 optical system did not allow for adequate detection of adversary’s optical and targeting devices and created multiple interferences in the test range’s ground and airspace.

Uran-9 undergoing testing; Source: YouTube

• Unstable operation of the UGV’s 30mm automatic cannon was recorded, with firing delays and failures. Moreover, the UGV could fire only when stationary, which basically defeated its very purpose as a combat vehicle.

• The Uran-9’s combat, ISR, and targeting weapons and mechanisms were also not stabilized.
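The scale of the first deficiency can be sanity-checked with quick arithmetic using only the figures quoted above: a control distance of 300-500 meters against the designed 3,000 meters forfeits 83-90 percent of the control radius, and even more of the reachable area, which scales with the square of the radius:

```python
# Sanity check of the reported Uran-9 range degradation, using the
# designed range (3,000 m) and the observed range (300-500 m) from the text.
DESIGNED_RANGE_M = 3000

for observed_m in (300, 500):
    radius_lost = 1 - observed_m / DESIGNED_RANGE_M       # fraction of control radius lost
    area_lost = 1 - (observed_m / DESIGNED_RANGE_M) ** 2  # fraction of reachable area lost
    print(f"{observed_m} m: {radius_lost:.0%} of radius lost, "
          f"{area_lost:.0%} of reachable area lost")
```

The 90 percent loss in the 300-meter case matches the “nine-tenths” figure reported above.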

On one hand, these many failures are a sign that this much-discussed and much-advertised machine is in need of significant upgrades, testing, and perhaps even a redesign before it gets put into another combat situation. The Russian military did say that it tested nearly 200 types of weapons in Syria, so putting the Uran-9 through its combat paces was a logical step in the long development of this particular UGV. If the Syrian trial was the first of its kind for this UGV, such significant technical glitches would not be surprising.

However, the MOD has been testing the Uran-9 for a while now, showing videos of the machine at a testing range, presumably in Russia. The truly unexpected issue arising during operations in Syria had to do with the failure of the Uran-9 to effectively engage targets with its cannon while in motion (along with a number of other issues). Still, perhaps many observers bought into the idea that this vehicle would perform as built – tracks, weapons, and all. A closer examination of the publicly released testing video probably foretold some of the Syrian glitches – in that video, the Uran-9 is shown firing its machine guns while moving, but its cannon is fired only when the vehicle is stationary. Another aspect that is significant in hindsight is that the testing range in the video was a relatively open space – a large field with a few obstacles around, not the kind of complex terrain and dense urban environment encountered in Syria. While today’s and future battlefields will range greatly from open spaces to megacities, a vehicle like the Uran-9 would probably be expected to perform in all conditions. Unless, of course, the Syrian tests effectively limit its use in future combat.

Russian Soratnik UGV

On the other hand, so many failures at once point to much larger issues with Russian development of combat UGVs, issues that Anisimov also discussed during his presentation. He highlighted the following technological challenges, which are ubiquitous worldwide at this point in the global development of similar unmanned systems:

• Low level of current UGV autonomy;

• Low level of automation of command and control processes of UGV management, including repairs and maintenance;

• Low communication range; and

• Problems associated with “friend or foe” target identification.

Judging from the Uran-9’s Syrian test, Anisimov made the following key conclusions which point to the potential trajectory of Russian combat UGV development – assuming that other unmanned systems may have similar issues when placed in a simulated (or real) combat environment:

• These types of UGVs are equipped with a variety of cameras and sensors — and since the operator is presumably located a safe distance from combat, he may have problems understanding, processing, and effectively responding to what is taking place with the UGV in real time.

• For the next 10-15 years, unmanned military systems will be unable to effectively take part in combat, with Russians proposing to use them in storming stationary and well-defended targets (effectively giving such combat UGVs a kamikaze role).

• One-time and preferably stationary use of these UGVs would be more effective, with maintenance and repair crews close by.

• These UGVs should be used with other military formations in order to target and destroy fortified and firing enemy positions — but never on their own, since their breakdown would negatively impact the military mission.

The presentation proposed that some of the above-mentioned problems could be overcome by domestic developments in the following UGV technology and equipment areas:

• Creating secure communication channels;

• Building miniaturized hi-tech navigation systems with a high degree of autonomy, capable of operating with a loss of satellite navigation systems;

• Developing miniaturized and effective ISR components;

• Integrating automated command and control systems; and

• Developing better optics, electronics, and data processing systems.

According to Anisimov’s report, the overall Russian UGV and unmanned military systems development arc is similar to the one proposed by the United States Army Capabilities Integration Center (ARCIC): the gradual development of systems capable of more autonomy on the battlefield, leading to “smart” robots capable of forming “mobile networks” and operating in swarm configurations. Such systems should be “multifunctional” and capable of being integrated into existing armed forces formations for various combat missions, as well as operating autonomously when needed. Finally, each military robot should be able to function within existing and future military technology and systems.

Source: rusmilitary.wordpress.com

Such a candid review and critique of the Uran-9 in Syria, if true, may point to the Russian Ministry of Defense’s attitude towards its domestic manufacturers. The potential combat effectiveness of this UGV was advertised for the past two years, but its actual performance fell far short of expectations. It is a cautionary sign for developers of other Russian unmanned ground vehicles – like the Soratnik, Vihr, and Nerehta – since it displays the full range of deficiencies that emerge outside the well-managed testing ranges where such vehicles are currently undergoing evaluation. It also brought to light significant problems with ISR equipment – this type of technology is absolutely crucial to any unmanned system’s successful deployment, and its failures during the Uran-9’s tests exposed a serious combat weakness.

It is also a useful lesson for many other designers of domestic combat UGVs who are seeking to introduce similar systems into the existing order of battle. It appears that the Uran-9’s full effectiveness can only be determined at a much later date, once it can perform its mission autonomously in a rapidly changing and complex battlefield environment. Fully autonomous operation so far eludes its Russian developers, who are nonetheless still working towards that goal for their combat UGVs. Moreover, Russian deliberations on using their existing combat UGV platforms in a one-time attack mode against fortified adversary positions or firing points track closely with the ways Western military analysts think such weapons could be used in combat.

Source: Nikolai Novichkov / Orbis Defense

The Uran-9 is still a test bed, and much has to take place before it can be successfully integrated into current Russian concepts of operations. We can expect more eye-opening “lessons learned” from its, and other UGVs’, potential deployment in combat. Given the rapid proliferation of unmanned and autonomous technology, we are already in the midst of a new arms race. Many states are now designing, building, exporting, or importing various technologies for their military and security forces.

To make matters more interesting, the Russians have been public with both their statements about new technology being tested and evaluated, and with the possible use of such weapons in current and future conflicts. There should be no strategic or tactical surprise when military robotics are finally encountered in future combat.

Source: Block13 by djahal / DeviantArt.com

For another perspective on Russian military innovation, please read Mr. Ray Finch’s guest post “The Tenth Man” — Russia’s Era Military Innovation Technopark.

Samuel Bendett is a Research Analyst at the CNA Corporation and a Russia Studies Fellow at the American Foreign Policy Council. He is an official Mad Scientist, having presented and been so proclaimed at a previous Mad Scientist Conference.  The views expressed here are his own.

80. “The Queue”

[Editor’s Note:  Mad Scientist Laboratory is pleased to present our August edition of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the past month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]

Gartner Hype Cycle / Source:  Nicole Saraco Loddo, Gartner

1. “5 Trends Emerge in the Gartner Hype Cycle for Emerging Technologies,” by Kasey Panetta, Gartner, 16 August 2018.

Gartner’s annual hype cycle highlights many of the technologies and trends explored by the Mad Scientist program over the last two years. This year’s cycle added 17 new technologies and organized them into five emerging trends: 1) Democratized Artificial Intelligence (AI), 2) Digitalized Eco-Systems, 3) Do-It-Yourself Bio-Hacking, 4) Transparently Immersive Experiences, and 5) Ubiquitous Infrastructure. Of note, many of these technologies have a 5–10 year horizon until the Plateau of Productivity. If this time horizon is accurate, we believe these emerging technologies and five trends will have a significant role in defining the Character of Future War in 2035 and should have modernization implications for the Army of 2028. For additional information on the disruptive technologies identified between now and 2035, see the Era of Accelerated Human Progress portion of our Potential Game Changers broadsheet.

[Gartner disclaimer:  Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.]

Artificial Intelligence by GLAS-8 / Source: Flickr

2. “Should Evil AI Research Be Published? Five Experts Weigh In,” by Dan Robitzski, Futurism, 27 August 2018.

The following rhetorical (for now) question was posed to the “AI Race and Societal Impacts” panel during last month’s Joint Multi-Conference on Human-Level Artificial Intelligence in Prague, Czech Republic:

“Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?”

The panel’s responses ranged from the controlling – “Don’t publish it!” and treat it like a grenade, “one would not hand it to a small child, but maybe a trained soldier could be trusted with it”; to the altruistic – “publish [it]… immediately” and “there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good”; to the entrepreneurial – “sell the evil AGI to [me]. That way, they wouldn’t have to hold onto the ethical burden of such a powerful and scary AI — instead, you could just pass it to [me and I will] take it from there.”

While no consensus was reached, the panel discussion served as a useful exercise in illustrating how AI differs from previous eras’ game changing technologies. Unlike nuclear, biological, and chemical weapons, no internationally agreed and implemented control protocols can be applied to AI, as there are no analogous gas centrifuges, fissile materials, or triggering mechanisms; no restricted-access pathogens; no proscribed precursor chemicals to control. Rather, when AGI is ultimately achieved, it is likely to be composed of nothing more than diffuse code; a digital will-o’-the-wisp that can permeate across the global net to other nations, non-state actors, and super-empowered individuals, with the potential to facilitate unprecedentedly disruptive Information Operations (IO) campaigns and Virtual Warfare, revolutionizing human affairs. The West would be best served by emulating the PRC, with its Military-Civil Fusion Centers, and integrating the resources of the State with the innovation of industry to achieve its own AGI solutions soonest. The decisive edge will “accrue to the side with more autonomous decision-action concurrency on the Hyperactive Battlefield” — the best defense against a nefarious AGI is a friendly AGI!

Scales Sword Of Justice / Source: https://www.maxpixel.net/

3. “Can Justice be blind when it comes to machine learning? Researchers present findings at ICML 2018,” The Alan Turing Institute, 11 July 2018.

Can justice really be blind? The International Conference on Machine Learning (ICML) was held in Stockholm, Sweden, in July 2018. The conference explored the notion of machine learning fairness and proposed new methods to help regulators provide better oversight and practitioners develop fair and privacy-preserving data analyses. As with the ethical discussions taking place within the DoD, there are rising legal concerns that commercial machine learning systems (e.g., those associated with car insurance pricing) might illegally or unfairly discriminate against certain subgroups of the population. Machine learning will play an important role in assisting battlefield decisions (e.g., the targeting cycle and commanders’ decisions) – especially lethal decisions. There is a common misperception that machines will make unbiased and fair decisions, divorced from human bias. Yet the issue of machine learning bias is significant because humans, with their host of cognitive biases, write the very programming and curate the very data from which machines learn to make decisions. Making the best, unbiased decisions will become critical in AI-assisted warfighting. We must ensure that machine learning outputs are verified and understood to preclude the inadvertent introduction of human biases. Read the full report here.
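The mechanism by which human bias reaches a machine is easy to demonstrate. The following minimal Python sketch uses entirely fabricated data and a toy “model” (per-group approval rates); it is an illustration of the principle, not any system discussed at ICML. Two groups with identical qualifications receive different scores simply because the historical labels the model learned from were biased:

```python
# A minimal, hypothetical sketch of how bias in training data propagates
# into a learned decision rule. The "model" here is just the per-group
# approval rate learned from fabricated historical decisions.
from collections import defaultdict

# Fabricated history: every applicant is equally qualified, but one
# group was historically approved less often (human bias in the labels).
history = [
    ("group_a", "qualified", "approved"),
    ("group_a", "qualified", "approved"),
    ("group_a", "qualified", "approved"),
    ("group_b", "qualified", "approved"),
    ("group_b", "qualified", "denied"),
    ("group_b", "qualified", "denied"),
]

# "Training": tally approvals per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, _, outcome in history:
    counts[group][1] += 1
    if outcome == "approved":
        counts[group][0] += 1

# The learned "model" faithfully reproduces the biased labels.
model = {g: approved / total for g, (approved, total) in counts.items()}

print(model["group_a"])  # 1.0
print(model["group_b"])  # 0.333... despite identical qualifications
```

Real systems use far more complex models, but the propagation path is the same: the model optimizes fidelity to its training labels, so any bias in those labels is reproduced, and often laundered as objective output. This is why verification of machine learning outputs matters.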

Robot PNG / Source: pngimg.com

4. “Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans,” by Katyanna Quach, The Register, 3 August 2018.

In a study published in PLOS ONE, researchers found that a robot’s personality affected a human’s decision-making. In the study, participants were asked to dialogue with a robot that was either sociable (chatty) or functional (focused). At the end of the study, the researchers let the participants know that they could switch the robot off if they wanted to. At that moment, the robot would make an impassioned plea to the participant to resist shutting it down. The participants’ actions were then recorded. Unexpectedly, a large number of participants resisted shutting down the functional robots after they made their plea, as opposed to the sociable ones. This is significant: beyond the unexpected result, it shows that decision-making is affected by robotic personality. Humans will form an emotional connection to artificial entities, despite knowing they are robotic, if those entities mimic and emulate human behavior. If the Army believes its Soldiers will be accompanied and heavily augmented by robots in the near future, it must also understand that human-robot interaction will not be the same as human-computer interaction. The U.S. Army must explore how to attain the appropriate level of trust between Soldiers and their robotic teammates on the future battlefield. Robots must be treated more like partners than tools, with trust, cooperation, and even empathy displayed.

IoT / Source: Pixabay

5. “Spending on Internet of Things May More Than Double to Over Half a Trillion Dollars,” by Aaron Pressman, Fortune, 8 August 2018.

While the advent of the Internet brought computing and communication ever deeper into global households, the smartphone revolution brought about the concept of constant personal interconnectivity. Today and into the future, not only are humans connected to the global commons via their smart devices, but a multitude of devices, vehicles, and accessories are being integrated into the Internet of Things (IoT). We have previously addressed the IoT as a game changing technology. The IoT is composed of trillions of internet-linked items, creating both opportunities and vulnerabilities. There has been explosive growth in low Size, Weight, and Power (SWaP) connected devices (the Internet of Battlefield Things), especially for sensor applications (situational awareness).

Large companies are expected to quickly grow their spending on Internet-connected devices (i.e., appliances, home devices [such as Google Home, Alexa, etc.], various sensors) to approximately $520 billion. This is a massive investment into what will likely become the Internet of Everything (IoE). While growth is focused on known devices, it is likely that it will expand to embedded and wearable sensors – think clothing, accessories, and even sensors and communication devices embedded within the human body. This has two major implications for the Future Operational Environment (FOE):

– The U.S. military is already struggling with the balance between collecting, organizing, and using critical data; allowing service members to use personal devices; and maintaining operations and network security and integrity (see the recent banning of personal fitness trackers). A segment of IoT sensors and devices may be necessary or critical to the function and operation of many U.S. Armed Forces platforms and weapons systems, raising critical questions about supply chain security, system vulnerabilities, and reliance on micro sensors and microelectronics.

– The U.S. Army of the future will likely have to operate in and around dense urban environments, where IoT devices and sensors will be abundant, degrading the blue force’s ability to sense the battlefield and “see” the enemy, thereby creating a veritable needle in a stack of needles.

6. “Battlefield Internet: A Plan for Securing Cyberspace,” by Michèle Flournoy and Michael Sulmeyer, Foreign Affairs, September/October 2018. Review submitted by Ms. Marie Murphy.

With the possibility of a “cyber Pearl Harbor” looming ever larger, intelligence officials warn of the rising danger of cyber attacks, the effects of which have already been felt around the world. Such attacks have the power to break the trust people place in institutions, companies, and governments, as they occur in the undefined gray zone between peace and all-out war. The military implications are quite clear: cyber attacks can cripple the military’s ability to function, from command and control to intelligence communications and materiel and personnel networks. Beyond the military and government, private companies’ use of the internet must be accounted for when discussing cyber security. Some companies have already felt the effects of cyber attacks, while others remain reluctant to invest in cyber protection measures. In this way, civilians become affected by acts of cyber warfare, and attacks on a country may be directed not at the opposing military but at the civilian population of a state, as in the case of the power and utility outages seen in eastern Europe. Any actor with access to the internet can inflict damage, and anyone connected to the internet is vulnerable to attack, so public-private cooperation is necessary to most effectively combat cyber threats.

If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at:  usarmy.jble.tradoc.mbx.army-mad-scientist@mail.mil — we may select it for inclusion in our next edition of “The Queue”!

79. Character vs. Nature of Warfare: What We Can Learn (Again) from Clausewitz

[Editor’s Note: Mad Scientist Laboratory is pleased to present the following post by guest blogger LTC Rob Taber, U.S. Army Training and Doctrine Command (TRADOC) G-2 Futures Directorate, clarifying the often confused character and nature of warfare, and addressing their respective mutability.]

No one is arguing that warfare is not changing. Where people disagree, however, is whether the nature of warfare, the character of warfare, or both are changing.

Source:  Office of the Director of National Intelligence

Take, for example, the National Intelligence Council’s assertion in “Global Trends: Paradox of Progress.” They state, “The nature of conflict is changing. The risk of conflict will increase due to diverging interests among major powers, an expanding terror threat, continued instability in weak states, and the spread of lethal, disruptive technologies. Disrupting societies will become more common, with long-range precision weapons, cyber, and robotic systems to target infrastructure from afar, and more accessible technology to create weapons of mass destruction.”[I]

Additionally, Brad D. Williams, in an introduction to an interview he conducted with Amir Husain, asserts, “Generals and military theorists have sought to characterize the nature of war for millennia, and for long periods of time, warfare doesn’t dramatically change. But, occasionally, new methods for conducting war cause a fundamental reconsideration of its very nature and implications.”[II] Williams then cites “cavalry, the rifled musket and Blitzkrieg as three historical examples”[III] from Husain and General John R. Allen’s (ret.) article, “On Hyperwar.”

Unfortunately, the NIC and Mr. Williams miss the reality that the nature of war is not changing, and it is unlikely to ever change. While these authors may have simply interchanged “nature” when they meant “character,” it is important to be clear on the difference between the two and the implications for the military. To put it more succinctly, words have meaning.

The nature of something is the basic make up of that thing. It is, at core, what that “thing” is. The character of something is the combination of all the different parts and pieces that make up that thing. In the context of warfare, it is useful to ask every doctrine writer’s personal hero, Carl Von Clausewitz, what his views are on the matter.

Source: Tetsell’s Blog. https://tetsell.wordpress.com/2014/10/13/clausewitz/

He argues that war is “subjective,”[IV] “an act of policy,”[V] and “a pulsation of violence.”[VI] Put another way, the nature of war is chaotic, inherently political, and violent. Clausewitz then states that despite war’s “colorful resemblance to a game of chance, all the vicissitudes of its passion, courage, imagination, and enthusiasm it includes are merely its special characteristics.”[VII] In other words, all changes in warfare are those smaller pieces that evolve and interact to make up the character of war.

The argument that artificial intelligence (AI) and other technologies will enable military commanders to have “a qualitatively unsurpassed level of situational awareness and understanding heretofore unavailable to strategic commander[s]”[VIII] is a grand claim, but one that has been made many times in the past, and remains unfulfilled. The chaos of war, its fog, friction, and chance will likely never be deciphered, regardless of what technology we throw at it. While it is certain that AI-enabled technologies will be able to gather, assess, and deliver heretofore unimaginable amounts of data, these technologies will remain vulnerable to age-old practices of denial, deception, and camouflage.

 

The enemy gets a vote, and in this case, the enemy also gets to play with their AI-enabled technologies that are doing their best to provide decision advantage over us. The information sphere in war will be more cluttered and more confusing than ever.

Regardless of the tools of warfare, be they robotic, autonomous, and/or AI-enabled, they remain tools. And while they will be the primary tools of the warfighter, the decision to enable the warfighter to employ those tools will, more often than not, come from political leaders bent on achieving a certain goal with military force.

Drone Wars are Coming / Source: USNI Proceedings, July 2017, Vol. 143 / 7 /  1,373

Finally, the violence of warfare will not change. Certainly robotics and autonomy will enable machines that can think and operate without humans in the loop. Imagine the future in which the unmanned bomber gets blown out of the sky by the AI-enabled directed energy integrated air defense network. That’s still violence. There are still explosions and kinetic energy with the potential for collateral damage to humans, both combatants and civilians.

Source: Lockheed Martin

Not to mention the bomber carried a payload meant to destroy something in the first place. A military force, at its core, will always carry the mission to kill things and break stuff. What will be different is what tools they use to execute that mission.

To learn more about the changing character of warfare:

– Read the TRADOC G-2’s The Operational Environment and the Changing Character of Warfare paper.

– Watch The Changing Character of Future Warfare video.

Additionally, please note that the content from the Mad Scientist Learning in 2050 Conference at Georgetown University, 8-9 August 2018, is now posted and available for your review:

– Read the “Top Ten” Takeaways from the Learning in 2050 Conference.

– Watch videos of each of the conference presentations on the TRADOC G-2 Operational Environment (OE) Enterprise YouTube Channel here.

– Review the conference presentation slides (with links to the associated videos) on the Mad Scientist All Partners Access Network (APAN) site here.

LTC Rob Taber is currently the Deputy Director of the Futures Directorate within the TRADOC G-2. He is an Army Strategic Intelligence Officer and holds a Master of Science of Strategic Intelligence from the National Intelligence University. His operational assignments include 1st Infantry Division, United States European Command, and the Defense Intelligence Agency.

Note:  The featured graphic at the top of this post captures U.S. cavalrymen on General John J. Pershing’s Punitive Expedition into Mexico in 1916.  Less than two years later, the United States would find itself fully engaged in Europe in a mechanized First World War.  (Source:  Tom Laemlein / Armor Plate Press, courtesy of Neil Grant, The Lewis Gun, Osprey Publishing, 2014, page 19)

_______________________________________________________

[I] National Intelligence Council, “Global Trends: Paradox of Progress,” January 2017, https://www.dni.gov/files/documents/nic/GT-Full-Report.pdf, p. 6.
[II] Brad D. Williams, “Emerging ‘Hyperwar’ Signals ‘AI-Fueled, machine waged’ Future of Conflict,” Fifth Domain, August 7, 2017, https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/.
[III] Ibid.
[IV] Carl Von Clausewitz, On War, ed. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976), 85.
[V] Ibid., 87.
[VI] Ibid.
[VII] Ibid., 86.
[VIII] John Allen, Amir Hussain, “On Hyper-War,” Fortuna’s Corner, July 10, 2017, https://fortunascorner.com/2017/07/10/on-hyper-war-by-gen-ret-john-allenusmc-amir-hussain/.