190. Weaponized Information: One Possible Vignette

[Editor’s Note:  The Information Environment (IE) is the point of departure for all events across the Multi-Domain Operations (MDO) spectrum. It’s a unique space that demands our understanding, as the Internet of Things (IoT) and hyper-connectivity have democratized accessibility, extended global reach, and amplified the effects of weaponized information. Our strategic competitors and adversaries have been quick to grasp and employ it to challenge our traditional advantages and exploit our weaknesses.

    • Our near-peers confront us globally, converging IE capabilities with hybrid strategies to expand the battlefield across all domains and create hemispheric threats challenging us from home station installations (i.e., the Strategic Support Area) to the Close Area fight.
    • Democratization of weaponized information empowers regional hegemons and non-state actors, enabling them to target the U.S. and our allies and achieve effects at a fraction of the cost of conventional weapons, without risking armed conflict.
    • The IE enables our adversaries to frame the conditions of future competition and/or escalation to armed conflict on their own terms.

Today’s post imagines one such vignette, with Russia exploiting the IE to successfully out-compete us and accomplish their political objectives, without expending a single bullet!]

Ethnic Russian minorities’ agitation against their respective governments in Estonia, Lithuania, and Latvia spikes. Simultaneously, the Russian Government ratchets up tensions with inflammatory statements of support for these ethnic Russian minorities in the Baltic States; coordinated movements and exercises by Russian ground, naval, and air forces adjacent to the region; and clandestine support to ethnic Russians in these States. The Russian Government also launches a covert campaign to shape perceptions of the threats facing the Russian diaspora: more than 200,000 Twitter accounts send 3.6 million tweets trending #protectRussianseverywhere. This sprawling Russian disinformation campaign is focused on building internal support for the Russian President and a possible military action. The U.S. and NATO respond…

The 2nd Cav Regt is placed on alert; as it prepares to roll out of garrison for Poland, several videos surface across social media, purportedly showing the sexual assault of several underage German nationals by U.S. personnel. These disturbingly graphic deepfakes appear to implicate key Leaders within the Regiment. German political and legal authorities call for an investigation and host nation protests erupt outside the gates of Rose Barracks, Vilseck, disrupting the unit’s deployment.

Simultaneously, in units comprising the initial Force Package earmarked to deploy to Europe, key personnel (and their dependents) are targeted, distracting troops from their deployment preparations and disrupting unit cohesion:

    • Social media accounts are hacked/hijacked, with false threats by dependents to execute mass/school shootings, accusations of sexual abuse, hate speech posts by Leaders about their minority troops, and revelations of adulterous affairs between unit spouses.
    • Bank accounts are hacked: some are credited with excessive amounts of cash, followed by faux “See Something, Say Something” hotline accusations of criminal and espionage activities; others are zeroed out, disrupting families’ abilities to pay bills.

Russia’s GRU (Military Intelligence) employs AI Generative Adversarial Networks (GANs) to create fake persona injects that mimic select U.S. Active Army, ARNG, and USAR commanders making disparaging statements about their confidence in our allies’ forces, the legitimacy of the mission, and their faith in our political leadership. Sowing these injects across unit social media accounts, Russian Information Warfare specialists seed doubt and erode trust in the chain of command amongst a percentage of susceptible Soldiers, creating further friction in deployment preparations.

As these units load at railheads or begin their road march towards their respective ports of embarkation, Supervisory Control and Data Acquisition (SCADA) attacks are launched on critical rail, road, port, and airfield infrastructures, snarling rail lines, switching yards, and crossings; creating bottlenecks at key traffic intersections; and spoofing navigation systems to cause sealift asset collisions and groundings at key maritime chokepoints. The fly-by-wire avionics are hacked on a departing C-17, causing a crash with the loss of all 134 Soldiers onboard. All C-17s are grounded, pending an investigation.

Salvos of personalized, “direct inject” psychological warfare attacks are launched against Soldiers via immersive media (Augmented, Virtual, and Mixed Reality; 360° Video/Gaming), targeting them while they await deployment and are in-transit to Theater. Similarly, attacks are vectored at spouses, parents, and dependents, with horrifying imagery of their loved ones’ torn and maimed bodies on Artificial Intelligence-generated battlefields (based on scraped facial imagery from social media accounts).

Multi-Domain Operations has improved Jointness, but exacerbated problems with “the communications requirements that constitute the nation’s warfighting Achilles heel.” As units arrive in Theater, adversaries exploit the seams within and between the inter-connected, federated U.S. and NATO tactical networks (Intelligence, Surveillance, and Reconnaissance; Fires; Sustainment; and Command and Control) that facilitate partner-to-partner data exchanges, planting specifically targeted false injects that sow doubt and distrust across the alliance in the Multi-Domain Common Operating Picture. Spoofing of these systems leads to accidental air defense engagements, resulting in Blue-on-Blue fratricide or the downing of a commercial airliner, with additional civilian deaths on the ground from spent ordnance, providing more opportunities for Russian Information Operations to spread acrimony within the alliance and create dissent in public opinion back home.

With the flow of U.S. forces into the Baltic Nations, real instances of ethnic Russians’ livelihoods being disrupted (e.g., accidental destruction of livestock and crops, the choking off of main routes to market, and damage to essential services [water, electricity, sewerage]) by maneuver units on exercise are captured on video and enhanced digitally to exacerbate their cumulative effects. Proliferated across the net via bots, these instances further stoke anti-Baltic / anti-U.S. opinion amongst Russian-sympathetic and non-aligned populations alike.

Following years of scraping global social media accounts and building profiles across the full political spectrum, artificial influencers are unleashed on-line that effectively target each of these profiles within the U.S. and allied civilian populations. Ostensibly engaging populations via key “knee-jerk” on-line affinities (e.g., pro-gun, pro-choice, etc.), these artificial influencers, ever so subtly, begin to shift public opinion to embrace a sympathetic position on the rights of the Russian diaspora to greater autonomy in the Baltic States.

The release of deepfake videos showing Baltic security forces massacring ethnic Russians creates further division and causes some NATO partners to hesitate, question, and withhold their support, as required under Article 5. The alliance is rent asunder — Checkmate!

Many of the capabilities described in this vignette are available now. Threats in the IE space will only increase in verisimilitude with augmented reality and multisensory content interaction. Envisioning what this Bot 2.0 Competition will look like is essential to building whole-of-government countermeasures and instilling resiliency in our population and military formations.

The Mad Scientist Initiative will continue to explore the significance of the IE to Competition and Conflict and information weaponization throughout our FY20 events — stay tuned to the MadSci Laboratory for more information. In anticipation of this, we have published The Information Environment:  Competition and Conflict anthology, a collection of previously published blog posts that serves as a primer on this topic and examines the convergence of technologies that facilitates information weaponization — Enjoy!

183. Ethics, Morals, and Legal Implications

[Editor’s Note: The U.S. Army Futures Command (AFC) and Training and Doctrine Command (TRADOC) co-sponsored the Mad Scientist Disruption and the Operational Environment Conference with the Cockrell School of Engineering at The University of Texas at Austin on 24-25 April 2019 in Austin, Texas. Today’s post is excerpted from this conference’s Final Report and addresses how the speed of technological innovation and convergence continues to outpace human governance. The U.S. Army must not only consider how best to employ these advances in modernizing the force, but also the concomitant ethical, moral, and legal implications their use may present in the Operational Environment (see links to the newly published TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, and the complete Mad Scientist Disruption and the Operational Environment Conference Final Report at the bottom of this post).]

Technological advancement and subsequent employment often outpace moral, ethical, and legal standards. Governmental and regulatory bodies are then caught between technological progress and the evolution of social thinking. The Disruption and the Operational Environment Conference uncovered and explored several tension points that may challenge the Army in the future.

Space

Cubesats in LEO / Source: NASA

Space is one of the least explored domains in which the Army will operate; as such, we may encounter a host of associated ethical and legal dilemmas. In the course of warfare, if the Army or an adversary intentionally or inadvertently destroys commercial communications infrastructure – GPS satellites, for example – the ramifications for the economy, transportation, and emergency services would be dire and deadly. The Army will be challenged to consider how and where National Defense measures in space affect non-combatants and American civilians on the ground.

Per proclaimed Mad Scientists Dr. Moriba Jah and Dr. Diane Howard, there are ~500,000 objects orbiting the Earth that pose potential hazards to our space-based services. We are currently able to track less than one percent of them — those the size of a smartphone / softball or larger. / Source: NASA Orbital Debris Office

International governing bodies may have to consider what responsibility space-faring entities – countries, universities, private companies – will have for mitigating orbital congestion caused by excessive launching and the aggressive exploitation of space. If the Army is judicious with its own footprint in space, it could reduce the risk of accidental collisions and unnecessary clutter and congestion. Cleaning up space debris is extremely expensive, and deconflicting active operations is essential. With each entity acting in its own self-interest, and with limited binding law or governance and no enforcement, overuse of space could lead to a “tragedy of the commons” effect.1  The Army has the opportunity to align itself more closely with international partners to develop guidelines and protocols for space operations, both to avoid potential conflicts and to influence and shape future policy. Without this early intervention, the Army may face ethical and moral challenges in the future regarding its addition of orbital objects to an already dangerously cluttered Low Earth Orbit. What will the Army be responsible for in democratized space? Will there be a moral or ethical limit on space launches?

Autonomy in Robotics

AFC’s Future Force Modernization Enterprise of Cross-Functional Teams, Acquisition Programs of Record, and Research and Development centers executed a radio rodeo with Industry throughout June 2019 to inform the Army of the network requirements needed to enable autonomous vehicle support in contested, multi-domain environments. / Source: Army.mil

Robotics have become pervasive and normalized in military operations across the post-9/11 Operational Environment. However, the burgeoning field of autonomy in robotics, with the potential to supplant humans in time-critical decision-making, will bring about significant ethical, moral, and legal challenges that the Army, and the larger DoD, are already beginning to confront. This issue will be exacerbated in the Operational Environment by increased utilization of and reliance on autonomy.

The increasing prevalence of autonomy will raise a number of important questions. At what point is it more ethical to allow a machine to make a decision that may save the lives of either combatants or civilians? Where does fault, responsibility, or attribution lie when an autonomous system takes lives? Will defensive autonomous operations – air defense systems, active protection systems – be more ethically acceptable than offensive autonomy – airstrikes, fire missions? Can Artificial Intelligence/Machine Learning (AI/ML) make decisions in line with Army core values?

Deepfakes and AI-Generated Identities, Personas, and Content

Source: U.S. Air Force

A new era of Information Operations (IO) is emerging due to disruptive technologies such as deepfakes – videos that are constructed to make a person appear to say or do something that they never said or did – and AI Generative Adversarial Networks (GANs) that produce fully original faces, bodies, personas, and robust identities.2  Deepfakes and GANs are alarming to national security experts as they could trigger accidental escalation, undermine trust in authorities, and cause unforeseen havoc. This is amplified by content such as news, sports, and creative writing similarly being generated by AI/ML applications.
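For readers unfamiliar with the mechanics behind such synthetic media, the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to turn random noise into convincing samples, while a discriminator learns to tell real from fake, and each improves by competing with the other. This is a minimal illustration that assumes PyTorch is available; the tiny network sizes and flattened-image data shape are invented for clarity, and production face-generation models are vastly larger but follow the same pattern.

```python
# Minimal GAN training loop (sketch only, assuming PyTorch).
# Network sizes and the flattened-image data shape are illustrative.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784      # e.g., 28x28 grayscale images, flattened

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),          # outputs scaled to [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),              # probability the input is real
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """One adversarial update; real_batch is (batch, data_dim), scaled to [-1, 1]."""
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator: reward it for separating real data from generated data.
    fake_batch = generator(torch.randn(n, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator: reward it for fooling the discriminator.
    g_loss = bce(discriminator(generator(torch.randn(n, latent_dim))), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

The national security concern follows directly from this loop: once trained, the generator alone can produce an effectively unlimited stream of novel, non-attributable synthetic faces and personas on demand.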

This new era of IO has many ethical and moral implications for the Army. In the past, the Army has utilized industrial and early information age IO tools such as leaflets, open-air messaging, and cyber influence mechanisms to shape perceptions around the world. Today and moving forward in the Operational Environment, advances in technology create ethical questions such as: is it ethical or legal to use cyber or digital manipulations against populations of both U.S. allies and strategic competitors? Under what title or authority does the use of deepfakes and AI-generated images fall? How will the Army need to supplement existing policy to include technologies that didn’t exist when it was written?

AI in Formations

With the introduction of decision-making AI, the Army will be faced with questions about trust, man-machine relationships, and transparency. Does AI in cyber require the same moral benchmark as lethal decision-making? Does transparency equal ethical AI? What allowance for error in AI is acceptable compared to humans? Where does the Army allow AI to make decisions – only in non-combat or non-lethal situations?

Commanders, stakeholders, and decision-makers will need to gain a level of comfort and trust with AI entities, exemplifying a true man-machine relationship. The full integration of AI into training and combat exercises provides an opportunity to build trust early in the process, before decision-making becomes critical and life-threatening. AI often includes unintentional or implicit bias in its programming. Is bias-free AI possible? How can bias be checked within the programming? How can bias be managed once it is discovered, and how much will be allowed? Finally, does the bias-checking software itself contain bias? Bias can also be used in a positive way. Through ML – using data from previous exercises, missions, doctrine, and the law of war – the Army could inculcate core values, ethos, and historically successful decision-making into AI.
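As one concrete illustration of what “checking bias within the programming” could look like in practice, the sketch below computes a simple demographic-parity gap, comparing a model’s positive-decision rate across groups. It is a toy example with invented data and an arbitrary review threshold, offered only to make the question tangible; demographic parity is just one of many competing fairness metrics, and nothing here is a proposed Army standard.

```python
# Toy bias check (sketch): compare positive-decision rates across groups.
from collections import defaultdict

def selection_rates(decisions, groups):
    """decisions: 0/1 model outputs; groups: parallel list of group labels."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions, groups):
    """Largest difference in positive-decision rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Invented example: flag the model for human review if the gap exceeds 10%.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = parity_gap(decisions, groups)
print(f"selection-rate gap: {gap:.2f}",
      "-> flag for review" if gap > 0.10 else "-> within tolerance")
```

Even a check this simple surfaces the deeper questions posed above: who sets the threshold, which groups are compared, and whether the checking code itself encodes assumptions.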

If existential threats to the United States increase, so does pressure to use artificial and autonomous systems to gain or maintain overmatch and domain superiority. As the Army explores shifting additional authority to AI and autonomous systems, how will it address the second and third order ethical and legal ramifications? How does the Army rectify its traditional values and ethical norms with disruptive technology that rapidly evolves?

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.
    • Ethics and the Future of War panel, facilitated by LTG Dubik (USA-Ret.) at the Mad Scientist Visualizing Multi Domain Battle 2030-2050 Conference, held at Georgetown University on 25-26 July 2017.

Just Published! TRADOC Pamphlet 525-92, The Operational Environment and the Changing Character of Warfare, 7 October 2019, describes the conditions Army forces will face and establishes two distinct timeframes characterizing near-term advantages adversaries may have, as well as breakthroughs in technology and convergences in capabilities in the far term that will change the character of warfare. This pamphlet describes both timeframes in detail, accounting for all aspects across the Diplomatic, Information, Military, and Economic (DIME) spheres to allow Army forces to train to an accurate and realistic Operational Environment.


1 Munoz-Patchen, Chelsea, “Regulating the Space Commons: Treating Space Debris as Abandoned Property in Violation of the Outer Space Treaty,” Chicago Journal of International Law, Vol. 19, No. 1, Art. 7, 1 Aug. 2018. https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1741&context=cjil

2 Robitzski, Dan, “Amazing AI Generates Entire Bodies of People Who Don’t Exist,” Futurism.com, 30 Apr. 2019. https://futurism.com/ai-generates-entire-bodies-people-dont-exist

182. “Tenth Man” – Challenging our Assumptions about the Operational Environment and Warfare (Part 2)

[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest “Tenth Man” post. This Devil’s Advocate or contrarian approach serves as a form of alternative analysis and is a check against group think and mirror imaging. The Mad Scientist Laboratory offers it as a platform for the contrarians in our network to share their alternative perspectives and analyses regarding the Operational Environment (OE). We continue our series of “Tenth Man” posts examining the foundational assumptions of The Operational Environment and the Changing Character of Future Warfare, challenging them, reviewing the associated implications, and identifying potential signals and/or indicators of change. Enjoy!]

Assumption:  The character of warfare will change but the nature of war will remain human-centric.

The character of warfare will change in the future OE as it inexorably has since the advent of flint hand axes; iron blades; stirrups; longbows; gunpowder; breech-loading, rifled, and automatic guns; mechanized armor; precision-guided munitions; and the Internet of Things. Speed, automation, extended ranges, broad and narrow weapons effects, and increasingly integrated multi-domain conduct, in addition to the complexity of the terrain and social structures in which it occurs, will make mid-Twenty-first Century warfare both familiar and utterly alien.

The nature of warfare, however, is assumed to remain human-centric in the future. While humans will increasingly be removed from processes, cycles, and perhaps even decision-making, nearly all content regarding the future OE assumes that humans will remain central to the rationale for war and its most essential elements of execution. The nature of war has remained relatively constant from Thucydides through Clausewitz, and forward to the present. War is still waged because of fear, honor, and interest, and remains an expression of politics by other means. While machines are becoming ever more prevalent across the battlefield – C5ISR, maneuver, and logistics – we cling to the belief that parties will still go to war over human interests; that war will be decided, executed, and controlled by humans.

Implications:  If these assumptions prove false, then the Army’s fundamental understanding of war in the future may be inherently flawed, calling into question established strategies, force structuring, and decision-making models. A changed or changing nature of war brings about a number of implications:

– Humans may not be aware of the outset of war. As algorithmic warfare evolves, might wars be fought unintentionally, with humans not recognizing what has occurred until effects are felt?

– Wars may be fought due to AI-calculated opportunities or threats – economic, political, or even ideological – that are largely imperceptible to human judgement. Imagine that a machine recognizes a strategic opportunity or impetus to engage a nation-state actor that is conventionally (read: humanly) viewed as weak or in a presumed disadvantaged state. The machine launches offensive operations to achieve a favorable outcome or objective that it deemed too advantageous to pass up.

– Infliction of human loss, suffering, and disruption to induce coercion and influence may not be conducive to victory. Victory may be simply a calculated or algorithmic outcome that causes an adversary’s machine to decide their own victory is unattainable.

– The actor (nation-state or otherwise) with the most robust kairosthenic power and/or most talented humans may not achieve victory. Even powers enjoying the greatest materiel advantages could see this once reliable measure of dominion mitigated. Winning may be achieved by the actor with the best algorithms or machines.

These implications in turn raise several questions for the Army:

– How much human talent should the Army recruit and cultivate, and how, if war is no longer human-centric?

– How should forces be structured – what is the “right” mix of humans to machines if war is no longer human-centric?

– Will current ethical considerations in kinetic operations be weighed more or less heavily if humans are further removed from the equation? And what even constitutes kinetic operations in such a future?

– Should the U.S. military divest from platforms and materiel solutions (hardware) and re-focus on becoming algorithmically and digitally-centric (software)?

 

– What is the role for the armed forces in such a world? Will competition and armed conflict increasingly fall within the sphere of cyber forces in the Departments of the Treasury, State, and other non-DoD organizations?

– Will warfare become the default condition if fewer humans get hurt?

– Could an adversary (human or machine) trick us (or our machines) to miscalculate our response?

Signposts / Indicators of Change:

– Proliferation of AI use in the OE, with increasingly less human involvement in autonomous or semi-autonomous systems’ critical functions and decision-making; the development of human-out-of-the-loop systems

– Technology advances to the point of near or actual machine sentience, with commensurate machine speed accelerating the potential for escalated competition and armed conflict beyond transparency and human comprehension.

– Nation-state governments approve the use of lethal autonomy, and this capability is democratized to non-state actors.

– Cyber operations have the same political and economic effects as traditional kinetic warfare, reducing or eliminating the need for physical combat.

– Smaller, less-capable states or actors begin achieving surprising or unexpected victories in warfare.

– Kinetic war becomes less lethal as robots replace human tasks.

– Other departments or agencies stand up quasi-military capabilities, have more active military-liaison organizations, or begin actively engaging in competition and conflict.

If you enjoyed this post, please see:

    • “Second/Third Order, and Evil Effects” – The Dark Side of Technology (Parts I & II) by Dr. Nick Marsella.

… as well as our previous “Tenth Man” blog posts:

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

175. “I Know the Sound it Makes When It Lies” AI-Powered Tech to Improve Engagement in the Human Domain

[Editor’s Note:  Mad Scientist Laboratory is pleased to publish today’s post by guest bloggers LTC Arnel P. David, LTC (Ret) Patrick James Christian, PhD, and Dr. Aleksandra Nesic, who use storytelling to illustrate how the convergence of Artificial Intelligence (AI), cloud computing, big data, augmented and enhanced reality, and deception detection algorithms could complement decision-making in future specialized engagements.  Enjoy this first in a series of three posts exploring how game changing tech will enhance operations in the Human Domain!]

RAF A400 Atlas / Source:  Flickr, UK MoD, by Andrew Linnett

It is 2028. Lt Col Archie Burton steps off the British A400-M Atlas plane onto the hard pan desert runway of Banku Airfield, Nigeria. This is his third visit to Nigeria, but this time he is the commander of the Engagement Operations Group – Bravo (EOG-B). This group of bespoke, specialized capabilities is the British Army’s agile and highly-trained force for specialized engagement. It operates amongst the people and builds indigenous mass with host nation security forces. Members of this outfit operate in civilian clothes and speak multiple languages with academic degrees ranging from anthropology to computational science.

Source:  Flickr, Com Salud

Archie dons his Viz glasses on the drive to a meeting with local leadership of the town of Banku. Speaking to his AI assistant, “Jarvis,” Archie cycles through past engagement data to prep for the meeting and learn the latest about the local town and its leaders. Jarvis is connected to a cloud-computing environment, referred to as “HDM” for “Human Domain Matrix,” where scientifically collected and curated population data is stored, maintained, and integrated with a host of applications to support operations in the human domain in both training and deployed settings.

Several private organizations that utilize integrated interdisciplinary social science have helped NATO, the U.K. MoD, and the U.S. DoD develop CGI-enabled virtual reality experiences to accelerate learning for operators who work in challenging conflict settings laden with complex psycho-social and emotional dynamics that drive the behaviour and interactions of the populations on the ground. Together with NGOs and civil society groups, they collected ethnographic data and combined it with phenomenological qualitative inquiry using psychology and sociology to curate anthropological stories that reflect specific cultural audiences.

EOG-Bravo’s mission letter from Field Army Headquarters states that they must leverage the extensive and complex human network dynamic to aid in the recovery of 11 females kidnapped by the Islamic Revolutionary Brotherhood (IRB) terrorist group. Two of the females are British citizens, who were supporting a humanitarian mission with the ‘Save the Kids’ NGO prior to being abducted.

At the meeting in Banku, the mayor, police chief, and a representative from Save the Kids were present. Archie was welcomed with handshakes and hugs from the police chief, a former student at Sandhurst who knows Archie from past deployments. The discussion leaped immediately into the kidnapping situation.

“The girls were last seen transiting a jungle area North of Oyero. Our organization is in contact by email with one of the IRB facilitators. He is asking for £2 million and we are ready to make that payment,” said Simon Moore of Save the Kids.

Archie’s Viz glasses scanned the facial expressions of those present and Jarvis cautioned him regarding the behaviour of the police chief whose micro facial expressions and eyes revealed a biological response of excitement at the mention of the £2M.

Archie asks, “Chief Adesola, what do you think? Should we facilitate payment?”

“Hmmm, I’m not sure. We don’t know what the IRB will do. We should definitely consider it though,” said Police Chief Adesola.

The Viz glasses continued to feed the facial expressions into HDM, where the recurrent AI neural network recognition algorithm, HOMINID-AI, detected a lie. The AI system and human analysts at the Land Information Manoeuvre Centre (LIMOC) back in the U.K. estimate with a high level of confidence that Chief Adesola was lying.

At the LIMOC, a 24-hour operation under 77th Brigade, Sgt Richards determines that the Police Chief warrants surveillance by EOG-Alpha, Archie’s sister battlegroup. EOG-Alpha directs local teams in Lagos to deploy unmanned ground sensors and collection assets to monitor the police chief.

Small teams of 3-4 soldiers depart from Lagos in the middle of the night to link up with host nation counterparts. Together, the team of operators and Nigerian national-level security forces deploy sensors to monitor the police chief’s movements and conversations around his office and home.

The next morning, Chief Adesola is picked up by a sensor meeting with an unknown associate. The sensor scanned this associate and the LIMOC processed an immediate hit — he was a leader of the IRB; number three in their chain of command. EOG-A’s operational element is alerted and ordered to work with local security forces to detain this terrorist leader.  Intelligence collected from him and the Chief will hopefully lead them to the missing females…
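The deception-detection step imagined in this story, a stream of micro-expression features scored by a recurrent network, can be sketched in outline. Everything below is notional: the feature count, architecture, and example input are invented for illustration and do not describe HOMINID-AI or any fielded system; PyTorch is assumed.

```python
# Notional recurrent deception classifier (sketch only, assuming PyTorch).
# A sequence of per-frame facial-feature vectors passes through an LSTM,
# and a linear head outputs the probability that the sequence reflects deception.
import torch
import torch.nn as nn

class DeceptionClassifier(nn.Module):
    def __init__(self, n_features=32, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                       # frames: (batch, time, n_features)
        _, (h_last, _) = self.lstm(frames)           # final hidden state summarizes the clip
        return torch.sigmoid(self.head(h_last[-1]))  # estimated probability of deception

model = DeceptionClassifier()
clip = torch.randn(1, 120, 32)      # e.g., 120 frames of 32 invented facial features
p_lie = model(clip).item()
print(f"estimated deception probability: {p_lie:.2f}")
```

As the story itself emphasizes, such a score would be only one input: in the vignette, the LIMOC pairs the algorithm’s estimate with human analysts before anyone acts on it.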

If you enjoyed this post, stay tuned for Part 2 on the Human Domain Matrix, Part 3 on Emotional Warfare in Yemen, and check out the following links to other works by today’s blog post authors:

Operationalizing the Science of the Human Domain by Aleks Nesic and Arnel P. David

A Psycho-Emotional Human Security Analytical Framework by Patrick J. Christian, Aleksandra Nesic, David Sniffen, Tasneem Aljehani, Khaled Al Sumairi, Narayan B. Khadka, Basimah Hallawy, and Binamin Konlan

Military Strategy in the 21st Century:  People, Connectivity, and Competition by Charles T. Cleveland, Benjamin Jensen, Susan Bryant, and Arnel P. David

… and see the following MadSci Lab blog posts on how AI can augment our Leaders’ decision-making on the battlefield:

Takeaways Learned about the Future of the AI Battlefield

The Guy Behind the Guy: AI as the Indispensable Marshal, by Mr. Brady Moore and Mr. Chris Sauceda

LTC Arnel P. David is an Army Strategist serving in the United Kingdom as the U.S. Special Assistant for the Chief of the General Staff. He recently completed an Artificial Intelligence Program from the Saïd Business School at the University of Oxford.

LTC (Ret) Patrick James Christian, PhD is co-founder of Valka-Mir and a Psychoanalytical Anthropologist focused on the psychopathology of violent ethnic and cultural conflict. He is a retired Special Forces officer serving as a social scientist for the Psychological Operations Task Forces in the Arabian Peninsula and Afghanistan, where he constructs psychological profiles of designated target audiences.

Aleksandra Nesic, PhD is co-founder of Valka-Mir and Visiting Faculty for the Countering Violent Extremism and Countering Terrorism Fellowship Program at the Joint Special Operations University (JSOU), USSOCOM. She is also Visiting Faculty, U.S. Army JFK Special Warfare Center and School, and a Co-Founder and Senior Researcher of Complex Communal Conflicts at Valka-Mir Human Security, LLC.

Acknowledgements:  Special Thanks to the British Army Future Force Development Team for their help in creating the British characters depicted in this first story.

Disclaimer:  The views expressed in this blog post do not necessarily reflect those of the Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).

 

 

138. “The Monolith”

The Monolith set from the dawn of man sequence, 2001: A Space Odyssey, Metro-Goldwyn-Mayer (1968) / Source: Wikimedia Commons

[Editor’s Note: Mad Scientist Laboratory is pleased to introduce a new, quarterly feature, entitled “The Monolith.” Arthur C. Clarke and Stanley Kubrick fans alike will recognize and appreciate our allusion to the alien artifact responsible for “uplifting” mankind from primitive, defenseless hominids into tool using killers — destined for the stars — from their respective short story, “The Sentinel,” and movie, “2001: A Space Odyssey.” We hope that you will similarly benefit from this post (although perhaps in not quite so evolutionary a manner!), reflecting the Mad Scientist Teams’ collective book and movie recommendations — Enjoy!]

Originally published by PublicAffairs on 5 October 2017

The Future of War by Sir Lawrence Freedman. The evolution of warfare has taken some turns that were quite unexpected and were heavily influenced by disruptive technologies of the day. Sir Lawrence examines the changing character of warfare over the last several centuries, how it has been influenced by society and technology, the ways in which science fiction got it wrong and right, and how it might take shape in the future. This overarching look at warfare causes one to pause and consider whether we may be asking the right questions about future warfare.

 

Royal Scots Guardsmen engaging the enemy with a Lewis Machine Gun / Source:  Flickr

They Shall Not Grow Old directed by Sir Peter Jackson. This lauded 2018 documentary utilizes original film footage from World War I (much of it unseen for the past century) that has been digitized, colorized, upscaled, and overlaid with audio recordings from British servicemen who fought in the war. The divide between civilians untouched by the war and service members, the destructive impact of new disruptive technologies, and the change they wrought on the character of war resonate to this day and provide an excellent historical analogy from which to explore future warfare.

Gene Simmons plays a nefarious super-empowered individual in Runaway

Runaway directed by Michael Crichton. This film, released in 1984, is set in the near future, where a police officer (Tom Selleck) and his partner (Cynthia Rhodes) specialize in neutralizing malfunctioning robots. A rogue killer robot – programmed to kill by the bad guy (Gene Simmons) – goes on a homicidal rampage. Alas, the savvy officers begin to uncover a wider, nefarious plan to proliferate killer robots. This offbeat Sci-Fi thriller illustrates how dual-use technologies in the hands of super-empowered individuals could be employed innovatively in the Future Operational Environment. Personalized warfare is also featured, as a software developer’s family is targeted by the ‘bad guy,’ using a corrupted version of the very software he helped create. This movie illustrates the potential for everyday commercial products to be adapted maliciously by adversaries, who, unconstrained ethically, can out-innovate us with convergent, game changing technologies (robotics, CRISPR, etc.).

Originally published by Macmillan on 1 May 2018

The Military Science of Star Wars by George Beahm. Storytelling is a powerful tool used to visualize the future, and Science Fiction often offers the best trove of ideas. The Military Science of Star Wars by George Beahm dissects and analyzes the entirety of the Star Wars Universe to mine for information that reflects the real world and the future of armed conflict. Beahm tackles the personnel, weapons, technology, tactics, strategy, resources, and lessons learned from key battles and authoritatively links them to past, current, and future Army challenges. Beahm proves that storytelling, and even fantasy (Star Wars is more a fantasy story than a Science Fiction story), can teach us about the real world and help evolve our thinking to confront problems in new and novel ways. He connects the story to the past, present, and future Army and asks important questions, like “What makes Han Solo a great military Leader?”, “How can a military use robots (Droids) effectively?”, and most importantly, “What, in the universe, qualified Jar Jar Binks to be promoted to Bombad General?”.

Ex Machina, Universal Pictures (2014) / Source: Vimeo

Ex Machina directed by Alex Garland. This film, released in 2014, moves beyond the traditional questions surrounding the feasibility of Artificial Intelligence (AI) and the Turing test to explore the darker side of synthetic beings, knowing that it is achievable and that the test can be passed. The film is a cautionary tale of what might be possible at the extreme edge of AI computing and innovation where control may be fleeting or even an illusion. The Army may never face the same consequences that the characters in the film face, but it can learn from their lessons. AI is a hotly debated topic with some saying it will bring about the end of days, and others saying generalized AI will never exist. With a future this muddy, one must be cautious of exploring new and undefined technology spaces that carry so much risk. As more robotic entities are operationalized, and AI further permeates the battlefield, future Soldiers and Leaders would do well to stay abreast of the potential for volatility in an already chaotic environment. If Military AI progresses substantially, what will happen when we try to turn it off?

Astronaut and Lunar Module pilot Buzz Aldrin is pictured during the Apollo 11 extravehicular activity on the moon / Source: NASA

Apollo 11 directed by Todd Douglas Miller. As the United States prepares to celebrate the fiftieth anniversary of the first manned mission to the lunar surface later this summer, this inspiring documentary reminds audiences of just how audacious an achievement this was. Using restored archival audio recordings and video footage (complemented by simple line animations illustrating each of the spacecrafts’ maneuver sequences), Todd Miller skillfully re-captures the momentousness of this historic event, successfully weaving together a comprehensive point-of-view of the mission. Watching NASA and its legion of aerospace contractors realize the dream envisioned by President Kennedy eight years before serves to remind contemporary America that we once dared and dreamed big, and that we can do so again, harnessing the energy of insightful and focused leadership with the innovation of private enterprise. This uniquely American attribute may well tip the balance in our favor, given current competition and potential future conflicts with our near-peer adversaries in the Future Operational Environment.

Originally published by Penguin Random House on 3 July 2018

Artemis by Andy Weir. In his latest novel, following on the heels of his wildly successful The Martian, Andy Weir envisions an established lunar city in 2080 through the eyes of Jasmine “Jazz” Bashara, one of its citizen-hustlers, who becomes enmeshed in a conspiracy to control the tremendous wealth generated from the space and lunar mineral resources refined in the Moon’s low-G environment. His suspenseful plot, replete with descriptions of the science and technologies necessary to survive (and thrive!) in the hostile lunar environment, posits a late 21st century rush to exploit space commodities. The resultant economic boom has empowered non-state actors as new competitors on the global — er, extraterrestrial stage — from the Kenya Space Corporation (blessed by its equatorial location and reduced earth to orbit launch costs) to the Sanchez Aluminum mining and refining conglomerate, controlled by a Brazilian crime syndicate scheming to take control of the lunar city. Readers are reminded that the economic hegemony currently enjoyed by the U.S., China, and the E.U. may well be eclipsed by visionary non-state actors who dare and dream big enough to exploit the wealth that lies beyond the Earth’s gravity well.

137. What’s in a Touch? Lessons from the Edge of Electronic Interface

[Editor’s Note:  Mad Scientist Laboratory is pleased to present today’s guest blog post by Dr. Brian Holmes, exploring the threats associated with adaptive technologies and how nefarious actors can morph benign technological innovations into new, more sinister applications.  The three technological trends of democratization, convergence, and asymmetrical ethics portend a plethora of dystopian scenarios for the Future Operational Environment.  Dr. Holmes imagines how advances in prosthetic R&D could be manipulated to augment advances in artificial intelligence and robotics, providing a sense of touch to realize more lifelike lethal autonomous weapons systems — Enjoy!]

Somewhere in a near parallel, fictional universe –

Parallel Universes / Source:  Max Pixel

Dr. Sandy Votel is an Associate Professor and researcher at a military defense school in the U.S.  She has a diverse career that includes experience in defense and private laboratories researching bleeding edge biological science. For eight years, she served as an intelligence officer in the military reserves. Ten years ago she decided to join a defense school as a graduate research professor.

Dr. Mark Smith is a new Assistant Professor at her School. He just graduated with his Ph.D. before accepting his academic position. Sandy, Mark’s mentor, is explaining the finer details of her team’s research during Mark’s first week on the job.

Sandy began by explaining to Mark what her post-doc was investigating –

“He’s researching the fundamental materials required for electronic skin,” she said.

“Cyborg” / Source: R.E. Barber Photography via Flickr

After a pause, Sandy followed up by posing this hackneyed question, “Is it wrong that I am helping to create one small slice of a yet to be made front line cyborg, or, a bioengineered replicant spy of the kind played out in popular Hollywood movies?” Her smirk quickly followed. Westerners were practically conditioned to make comments like that.

 

The Modular Prosthetic Limb (MPL) / Source: U.S. Navy via Flickr

Her colleague Mark immediately replied, “It’s more likely this kind of technology could someday help battlefield soldiers or civilians who have lost fingers, toes, or limbs. They might be able to touch or feel again in some new manner through the interface. The material could be embedded into some sort of artificial prosthetic, and electronically connected to receptors feeding the information to and from your brain. Imagine the possibilities! Any interest in collaborating? We should push the boundaries here!

Sandy knew that the early stage research was intended for the most benevolent of reasons – personalized health care and disposable electronic sensors to name a few – but the creative futurist in her, heavily influenced by years evaluating the more disturbing side of humanity as an intelligence officer, suddenly made her pause. After all, she saw the realized threat from adaptive technologies daily when she logged into her computer system each drill weekend.

A drawing of the character Deckard by Canosard, from the film Blade Runner (Warner Bros., 1982) / Source: DeviantArt

She’d also seen wildly creative science fiction writers draft ideas into reality. Sandy loved reading science fiction novels and watched every movie or show that resulted. As a child, she was amazed when Rick Deckard, from the movie Blade Runner, inserted a photograph into a machine that scanned it and allowed him to enhance the resolution enough to observe fine details embedded in thousands of pixels. Like most of the general public, she used to think that was impossible! Oh, how times have changed.

Sandy walked back into her office, scanned her email and focused on an article her department chair had sent to the entire workforce to evaluate. She suddenly stood back in shock, and immediately connected the disturbing news with elements she recalled from history.

Dr. Josef Mengele / Source:  Wikimedia Commons

Decades before Blade Runner came out in the cinema, the modern boundaries of science and human subject experimentation were torn asunder by the likes of Dr. Josef Mengele in the 1940s. The “Angel of Death” was a German anthropologist and medical doctor who researched genetics in school and conducted horrific experiments on humans in Auschwitz as an SS officer.

Dr. He Jiankui / Source:  Wikimedia Commons

According to the article she just read, China’s Dr. He Jiankui, a biophysicist educated in China and the United States, shocked the world by pushing the limits of ethical genetic research by editing the genes of human embryos.

In each case, conflict or culture induced them to perform world changing science, resulting in not only global condemnation, but also the re-birth of knowledge with dual purpose. Sandy knew that history dictates a repetition of bad activities like these, performed in unpredictable scenarios set in a deep, dark, dystopian future.

Sandy’s realization hastened further reflection.

Cyborgs / Source: Pixabay

A significant number of studies have documented the emotional and physical benefits derived from touch. The research suggests that touch is fundamental to human communication, health, and bonding. If this is true, not only will advanced levels of artificial intelligence, or “AI”, require coding that enables learning and empathy, but the bioengineered system the AI is directing will necessitate a sense of touch to mimic a more lifelike cyborg. Passive sensors are only as good as physics allows them to be, or as great as the signal-to-noise levels dictate in a dirty environment. Touch, however, conveys something different… something far more real.

AI mimicking human visage / Source: Max Pixel

Sandy knew that most futuristic battlefield articles now center on today’s technology du jour, artificial intelligence. There’s no question that AI will serve as the brain center for individual or centralized networks of future machines; but to make them more human and adaptable to the battlefield of tomorrow, as indistinguishable soldiers or undetectable HUMINT assets, subtler pieces are required to complete the puzzle.

“Imagine hundreds or thousands of manufactured assets programmed for clandestine military operations, or covert activities that look, act, and feel like us?” she thought.

Weapons can be embedded into robotic systems, and coding and software improved to the point of winning challenging board games, but it’s the bioengineers with duplicitous purposes and far too much imagination who hold the real key to the soldier of the future; specifically, the soldiers that replace, infiltrate, or battle us.

Nefarious actors adapting benign technological innovations into new, more sinister applications…

“It’s happened before, and it will happen again!” she said out loud, accidentally.

Mark, who happened to be walking past her door, asked if everything was alright. Sandy nodded, but finished this thought as soon as he left her view.

“Unfortunately, the key that unlocks the occurrence of these secrets exists in a faraway place, under duress, and without rules. If the military is worried about the Deep Future, we should be analyzing the scenarios that enable these kinds of creative paradigms.”

After all, it’s all in a touch. 

If you enjoyed this post, please:

– Read the Mad Scientist Bio Convergence and Soldier 2050 Conference Final Report.

Review the following blog posts:

Ethical Dilemmas of Future Warfare, and

Envisioning Future Operational Environment Possibilities through Story Telling.

– See our compendium of 23 submissions from the 2017 Mad Scientist Science Fiction Contest at Science Fiction: Visioning the Future of Warfare 2030-2050.

Crank up I Am Robot by The Phenomenauts (who?!?)

Dr. Brian Holmes is the Dean of the Anthony G. Oettinger School of Science and Technology Intelligence at the National Intelligence University in Bethesda, MD.

Disclaimer: The views expressed in this article are Dr. Holmes’ alone and do not imply endorsement by the U.S. Army Training and Doctrine Command, the U.S. Army, the Defense Intelligence Agency, the Department of Defense, its component organizations, or the U.S. Government.  This piece is meant to be thought-provoking and does not reflect the current position of the U.S. Army.

 

133. “Back (and Forward) to the Future IV” and What We Have and Haven’t Learned

[Editor’s Note: Returning guest blogger Frank Prautzsch peers 34 years into the past to explore how the blockbuster film “Back to the Future” and its sequels portrayed a number of fantastic technologies that have since evolved from pure science fiction into reality in 2019; then looks forward a similar number of years to envision future technological possibilities in 2053. Enjoy Mr. Prautzsch’s post and dare to “live outside-of-the-box” and imagine the true edge cases of the possible!]

On 3 July 1985, writer/producers Robert Zemeckis and Bob Gale first brought Marty McFly and Doc Brown to the big screen in the amazing hit “Back to the Future.” Younger generations will need to stream this motion picture for themselves to learn about technological vision in the Reagan era, while taking a glimpse at social norms and life in 1955. With all the thrills of science fiction and time travel, we munched on popcorn, witnessing nothing short of the bizarre in fictional technology and science. This motion picture was such a success that two sequels followed in 1989 and 1990.

Such motion pictures were more than entertainment; they pulled on our technical imagination and eventually on our goals to attain these technologies. As Mad Scientists, we often don’t want to profess a deep or incisive long shot at futuristic technology for fear of ridicule… of being wrong… or of disbelief in ourselves… and we continue to second guess our imagination, rather than offer our vision of the future. Are the visionaries confined to Hollywood? It is important for planners and strategic thinkers alike to not just “think out-of-the-box” but to “LIVE there.” Every Mad Scientist’s artwork should get an “F” for staying inside the lines!

As we look in the rear-view mirror at “Back to the Future” from 2019, 34 years have passed. As we look “Ahead to the Future,” 34 years from now, today’s chronological apex places us at the controls and gadgets of the 2053 warfighter.  2019 is a dividing and divining point between the past and the future. Why all this build-up? Notably, all of the technologies from “Back to the Future” either exist or are in the process of existing… including “time” travel.

Here are some tangible examples today which were irrational in 1985:

a. The Flying DeLorean. While it looks positively nothing like the original, DeLorean Aerospace LLC developed the DLC-7 flying car. At nearly 20 ft. long and 18 ft. wide, this craft has auto-stow wings that allow the car to occupy the family garage.

 

b. The Hoverboard. The Arca Aerospace Corporation‘s ArcaBoard harnesses ducted electric fans generating 272 horsepower to carry a 180 lb. pilot at 12.5 mph.

 

c. Self-Lacing Sneakers. Motivated by both this subject motion picture and the needs of the handicapped, Nike Corporation developed self-lacing sneakers. Albeit pricey, such sneakers were magical 34 years ago, and now they are a commodity.

 

 

d. Time Travel. While the ability to conduct time reversal in nature is still unattained, a team of scientists led by the U.S. Department of Energy’s (DOE) Argonne National Laboratory explored this question in a first-of-its-kind experiment, managing to return a computer briefly to the past. The results, published March 13 of this year in the Nature journal Scientific Reports, suggest new paths for exploring the backward flow of time in quantum systems. They also open new possibilities for quantum computer program testing and error correction. Additional work at IBM verifies that photons in a quantum state can occupy two realities at the same time.

 

e. The Cubs Winning the World Series. After beating the Cleveland Indians 8-7 and winning three straight games, the Chicago Cubs officially put an end to their 108-year title drought during Game 7 of the 2016 MLB World Series.

An independent survey of 2,201 adults, conducted from Nov. 8-11, 2018, found that 71 percent said they’d be likely to watch another outing of Marty McFly and Doc Brown, ahead of other franchises such as Pixar’s Toy Story (69 percent), Lucasfilm’s Indiana Jones (68 percent), and Universal’s Jurassic Park (67 percent).1 There is a visionary technology nerd trapped in all of us. So why not a Tetralogy? With the same angst of the future, the producers and writers of the previous trilogy series don’t desire to mess with success.

While the Mad Scientist community remains visionary regarding warfare and weapons systems, by 2054 virtually any platform or system will be of commercial origin. In his bestselling book, Augustine’s Laws, Norman Augustine (the former President and CEO of Lockheed Martin Corporation) highlights his “Laws” about business management and government procurements. Similar cost growth ramps will likely apply to Army platforms: from the beginnings of tactical aircraft until today, the cost of an aircraft has increased four-fold every 10 years.

Augustine professes that LAW NUMBER XVI applies:

“In the year 2054, the entire defense budget will purchase just one aircraft. The aircraft will have to be shared by the Air Force and Navy. 3 ½ days each per week except for leap year, when it will be made available to the Marines for the extra day.”2

As we stand in 2019 and gaze forward to 2053, the following point technologies may be more than the script for “Back to the Future IV.” Should Robert Zemeckis and Bob Gale elect to change their minds and write “Back to the Future IV…The Tetralogy,” the following are just some (but by no means all) of the key commercial technical attributes of our 2053 world:

a. 8G in-situ, ultra-high speed, real time mobile connectivity and all sensory immersion at the edge.

b. Wireless high capacity, high efficiency, medium and high tension power distribution using Zenneck waves.

c. Green 100 mAh to 5 MW Batteries, energy harvesting, and mass storage that require little or no recharge and last until load device obsolescence.

d. Personal, Service-based, and Business Flying Cars and Jetpacks.

e. “Supersonic-plus” intercontinental flight.

f. Night vision eyeglasses, lasik-like night vision implants and contact lenses.

g. Quantum and organic computer augmentation and Quantum networks for Machine Learning / Artificial Intelligence, Cyber, and Cyborg functions.

h. Robotic Cyber and counter-Cyber Operations.

i. Quantum Entanglement algorithms for prediction, interaction, and discovery management for new materials, chemicals, medicines, sensing, encryption, communications, information teleportation, and hybrid periodic elements.

j. Multi-domain unmanned and collaborative AI systems that fly, loiter, swim, drive, submerge, and multi-sense persistently (with some that do all of these functions).

k. Printed and stem cell vacuum grown replaceable bones, organs, muscles and skin.

l. Tailored immunotherapy pathogen disease treatment and recovery (including cancer).

m. Tailored-dose printed medicines with robotic dose delivery.

n. Ubiquitous Internet of Things (IoT) and sensor environments, with human privacy only achieved through electronic cloaking using e-nanofabrics.

o. Expanded use of graphene and carbon for light and resilient structural and micro-electronic/quantum markets.

p. Expanded use of nuclear, hydrogen, and fusion-based power to combat runaway climate change and end oil-dependence.

q. A major pep rally for the Cleveland Indians who, after a 104-year drought, win the World Series.

While there are millions of other technical discoveries that have yet to occur, “living out-of-the-box” requires Mad Scientists to accept a risky vision, open the lid on the top of the military’s reality box, and wave to all the inventors and innovators that are inside looking at you.

If you enjoyed this post, please also:

Read Frank Prautzsch’s previous MadSci blog posts: Auto Immune Disease Treatment in a New Age of Bio Convergence and Our Arctic—The World’s Pink Flamingo and Black Swan Bird Sanctuary; as well as his Speaker Series presentation on Advancing Armor on our APAN site.

See similar posts assessing future disruptive technological trends: Potential Game Changers, Black Swans and Pink Flamingos, and Emergent Global Trends Impacting on the Future Operational Environment; and

Crank up Huey Lewis and the News’ hit The Power of Love from Back to the Future!

In his current role as President of Velocity Technology Partners LLC, Mr. Frank Prautzsch (LTC, Ret. Signal Corps) is recognized as a technology and business leader supporting the government and is known for exposing or crafting innovative technology solutions for the DoD, SOF, DHS and Intelligence community. He also provides consultation to the MEDSTAR Institute for Innovation. His focus is upon innovation and not invention. Mr. Prautzsch holds a Bachelor of Science in Engineering from the United States Military Academy at West Point and is a distinguished graduate of the Marine Corps Signal Advanced Course, Army Airborne School, Ranger School, and Command and General Staff College. He also holds a Master of Science Degree in Systems Technology (C3) and Space from the Naval Postgraduate School in Monterey, California.


1 “Which Movie Franchise Should Return? ‘Back to the Future’ Tops New Poll” (The Hollywood Reporter, Nov 20, 2018) pg. 1.

2 Norman R. Augustine, Augustine’s Laws (American Institute of Aeronautics and Astronautics, Inc., 1986.) pg. 106-7.

131. Omega

[Editor’s Note:  Storytelling is a powerful tool that allows us to envision how innovative and potentially disruptive technologies could be employed and operationalized in the Future Operational Environment. In today’s guest blog post, proclaimed Mad Scientist Mr. August Cole and Mr. Amir Husain use storytelling to effectively:

  • Describe what the future might look like if our adversaries out-innovate us using Artificial Intelligence and cheap robotics;
  • Address how the U.S. might miss a strategic breakthrough due to backward-looking analytical mindsets; and
  • Imagine an unconventional Allied response in Europe to an emboldened near-peer conflict.

Enjoy reading how the NATO Alliance could react to Omega — “a Russian autonomous joint force in a … ready-to-deploy box… [with an] area-denial bubble projected by their new S-600s extend[ing] all the way to the exo-sphere, … cover[ing] the entirety of the ground, sea and cyber domains” — on the cusp of a fictional not-so-distant future near-peer conflict!]

Omega

22 KILOMETERS NORTH OF KYIV / UKRAINE

“Incoming!” shouted Piotr Nowak, a master sergeant in Poland’s Jednostka Wojskowa Komandosów special operations unit. Dropping to the ground, he clawed aside a veil of brittle green moss to wedge himself into a gap beneath a downed tree. He hoped the five other members of his military advisory team, crouched around the fist-shaped rock formation behind him, heard his shouts. To further reinforce Ukraine’s armed forces against increasingly brazen Russian military support for separatists in the eastern part of the country, Poland’s government had been quietly supplying military trainers. A pro-Russian military coup in Belarus two weeks earlier only served to raise tensions in the region – and the stakes for the JWK on the ground.

An instant later incoming Russian Grad rocket artillery announced itself with a shrill shriek. Then a rapid succession of sharp explosive pops as the dozen rockets burst overhead. Nowak quickly realized these weren’t ordinary fires.

Russian 9a52-4 MLRS conducting a fire mission / Source: The National Interest

There was no spray of airburst shrapnel or the lung-busting concussion of a thermobaric munition. Instead, it sounded like summer fireworks – the explosive separation of the 122mm rocket artillery shell’s casing. Once split open, each weapon’s payload deployed an air brake to slow its approach.

During that momentary silence, Nowak edged out slightly from under the log to look up at the sky. He saw the drifting circular payload extend four arms and then, suddenly, it came to life as it sprang free of its parachute harness. With a whine from its electric motors, the quadcopter darted out of sight.

That sound built and built over the next minute as eleven more of these Russian autonomous drones darted menacingly in a loose formation through the forest above the Polish special operations commandos. Nowak cursed the low-profile nature of their mission: The Polish soldiers had not yet received the latest compact American counter-UAS electronic-warfare systems that could actually fit in their civilian Skoda Kodiaq SUVs.

Nowak held his airplane-mode mobile phone out from under the log to film the drones, using his arm like a selfie-stick. He needed to report what he was seeing – this was proof that Russian forces had brought their new AI battle management system online inside Ukraine. But he also knew that doing so would be a death sentence, whether he texted the video over the country’s abominably slow mobile networks or used his secure NATO comms. These Russian drones could detect either type of transmission in an instant. Once the drones cued to his transmission, he would be targeted either by their own onboard anti-personnel munitions or by a follow-on strike from conventional artillery.

This was no mere variation on the practice of using Leer-3 drones for electronic warfare and to spot for Russian artillery. It marked the first-ever deployment of an entirely new Russian AI battle system complex, Omega. Nowak had heard of the Russians firing entire drone swarms from inexpensive Grad rocket-artillery rounds only once before, in Syria, while deployed with a US task force. But they had never done so in Ukraine, at least not that he knew about. Most observers wrote off Russia’s Syrian experimentation with battlefield robots and drone swarms as clumsy failures. Clearly something had changed.

With his phone, Nowak recorded how the drones appeared to be coordinating their search activities as if they were a single hive intelligence. They divided the dense forest into cells they searched cooperatively. Within seconds, they climbed and dove from treetop height looking for anyone or anything hiding below.

At that very instant, the drones’ computer vision algorithms detected Nowak’s team. Each and every one of them. Within seconds, six of the aggressively maneuvering drones revealed themselves in a disjointed dive down from the treetops and zoomed in on the JWK fighters’ positions.

Nobody needed to be told what to do. The team raised their weapons and fired short bursts at the Russian drones. One shattered like a clay pigeon. But two more buzzed into view to take its place. Another drone went down to a shotgun-fired SkyNet round. Then the entire drone formation shifted its flight patterns, dodging and maneuvering even more erratically, making it nearly impossible to shoot the rest down. The machines learned from their own losses, Nowak realized. Would his superiors do the same for him?

Nowak emptied his magazine with a series of quick bursts, but rather than reload he put his weapon aside and rolled out from under the log. Fully exposed and clutching the phone with shaking hands, he hastily removed one of his gloves with his teeth. Then he switched the device on. Network connected. He scrolled to the video of the drones. Send! Send! Send!

Eleven seconds later, Nowak’s entire Polish JWK special forces team lay dead on the forest floor.

Jednostka Wojskowa Komandosow (JWK) / Source: Wikimedia Commons

________________________________

Omega is not any one specific weapon; rather, it is made up of a menagerie of Russian weapons, large and small. It’s as if you fused information warfare, SAMs, fires, drones, tactical autonomous bots… There’s everything from S-600 batteries to cheap Katyusha-style rocket artillery to Uran-9 and -13 tanks. But it is what controls the hardware that makes Omega truly unique: AI. At its core, it’s an artificial intelligence system fusing data from thousands of sensors into processed information and finding patterns that human eyes and minds cannot fathom. The system’s AI is not only developing a comprehensive real-time picture, it’s also developing probabilities and possible enemy courses of action. It can coordinate thousands of “shooters,” from surface-to-air missiles, to specialized rocket artillery deploying autonomous tactical drones like the ones that killed the JWK team, to UGVs like the latest Uran-13 autonomous tracked units.

The developers of the Omega system incorporated technologies such as software-defined radio, which uses universal receivers that can listen in on a broad array of frequencies. Thousands of these bands are monitored with machine learning algorithms to spot insurgent radio stations, spy on the locations of Ukrainian military and police, and even determine whether a certain frequency is being used to remotely control explosives or other military equipment. When a threat is discovered, the system dispatches drones to observe the triangulated location of the source. If the threat needs to be neutralized, a variety of kinetic systems – from guided artillery shells to loitering munitions and autonomous drones – can be dispatched for the kill.
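
For readers who prefer pseudocode to prose, the fictional sense-classify-task chain described above can be sketched in a few lines. This is a purely notional illustration of the story’s architecture, assuming a band scan, an ML classifier, and simple tasking calls; none of the names, labels, or functions below correspond to a real system.

```python
"""Notional sketch of the fictional Omega detect-to-engage chain.

Everything here is invented for illustration: the band scan, the
classifier labels, and the tasking calls are stand-ins for the
capabilities described in the story, not a real system.
"""
import random
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Emission:
    freq_mhz: float      # center frequency picked up by the SDR sweep
    bearing_deg: float   # line of bearing from the receiver
    label: str           # classifier output, e.g. "insurgent_radio"
    confidence: float    # classifier confidence, 0..1


def scan_bands(bands_mhz: Iterable[float]) -> list[Emission]:
    """Stand-in for a software-defined-radio sweep plus ML classification."""
    labels = ["background", "insurgent_radio", "mil_comms", "remote_detonator"]
    hits = []
    for f in bands_mhz:
        label = random.choice(labels)
        if label != "background":
            hits.append(Emission(f, random.uniform(0, 360), label,
                                 random.uniform(0.5, 0.99)))
    return hits


def triangulate(emission: Emission) -> tuple[float, float]:
    """Stand-in for multi-receiver triangulation; returns a notional grid point."""
    return (round(emission.bearing_deg, 1), round(emission.freq_mhz % 100.0, 1))


def task_observation_drone(location: tuple[float, float]) -> bool:
    """Dispatch a drone to observe the emitter; True means threat confirmed."""
    print(f"  tasking quadcopter to observe {location}")
    return random.random() > 0.4


def select_effector(emission: Emission) -> str:
    """Pick a notional kinetic option based on the classified threat."""
    return "loitering_munition" if emission.label == "remote_detonator" else "guided_artillery"


if __name__ == "__main__":
    for hit in scan_bands([30.0, 87.5, 243.0, 433.9]):
        loc = triangulate(hit)
        print(f"classified {hit.label} at {hit.freq_mhz} MHz (conf {hit.confidence:.2f})")
        if hit.confidence > 0.7 and task_observation_drone(loc):
            print(f"  engaging with {select_effector(hit)}")
```

The point of the sketch is the hand-off structure the story describes: passive spectrum monitoring feeds classification, classification feeds a cheap confirmation step, and only then does the system commit a shooter.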

________________________________

If you enjoyed this excerpt, please:

Read the complete Omega short story, hosted by our colleagues at the Atlantic Council NATOSource blog,

Learn how the U.S. Joint Force and our partners are preparing to prevail in competition with our strategic adversaries and, when necessary, penetrate and dis-integrate their anti-access and area denial systems and exploit the resultant freedom of maneuver to achieve strategic objectives (win) and force a return to competition on favorable terms in The U.S. Army in Multi-Domain Operations 2028 Executive Summary, and

See one prescription for precluding the strategic surprise that is the fictional Omega in The Importance of Integrative Science/Technology Intelligence (InS/TINT) to the Prediction of Future Vistas of Emerging Threats, by Dr. James Giordano, CAPT (USN – Ret.) L. R. Bremseth, and Mr. Joseph DeFranco.

Reminder: You only have 1 week left to enter your submissions for the Mad Scientist Science Fiction Writing Contest 2019.  Click here for more information about the contest and how to submit your short story(ies) for consideration by our 1 April 2019 deadline!

Mr. August Cole is a proclaimed Mad Scientist, author, and futurist focusing on national security issues. He is a non-resident senior fellow at the Art of the Future Project at the Atlantic Council. He also works on creative foresight at SparkCognition, an artificial intelligence company, and is a senior advisor at Avascent, a consulting firm. His novel with fellow proclaimed Mad Scientist P.W. Singer, entitled Ghost Fleet: A Novel of the Next World War, explores the future of great power conflict and disruptive technologies in wartime.

Mr. Amir Husain is the founder and CEO of SparkCognition, a company envisioned to be at the forefront of the “AI 3.0” revolution. He serves as advisor and board member to several major institutions, including IBM Watson, University of Texas Department of Computer Science, Makerarm, ClearCube Technology, uStudio and others; and his work has been published in leading tech journals, including Network World, IT Today, and Computer World. In 2015, Amir was named Austin’s Top Technology Entrepreneur of the Year.

Disclaimer: This publication is a work of fiction by Messrs. August Cole and Amir Husain, neither of whom have any affiliation with U.S. Army Training and Doctrine Command, the U.S. Army, or the U.S. Government. This piece is meant to be thought-provoking and entertaining, and does not reflect the current position of the U.S. Army.

127. “Maddest” Guest Blogger!

[Editor’s Note: Since its inception in November 2017, the Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies and their individual and convergent impacts on the future of warfare. For perspective, our blog has accrued 106K views by over 57K visitors from around the world!

Our Mad Scientist Community of Action continues to grow — in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. To date, 53% of the blog posts published have been submitted by guest bloggers! We challenge you all to contribute your ideas about warfare and the Future Operational Environment!

In particular, we would like to recognize proclaimed Mad Scientist Dr. Alexander Kott by re-posting our review of his paper, Ground Warfare in 2050: How It Might Look, originally published by the US Army Research Laboratory in August 2018. This paper provides a technological forecast of autonomous intelligent agents and robots and their potential for employment on future battlefields in the year 2050.

Our review of Dr. Kott’s paper generated a record number of visits and views during the past six month period. Consequently, we hereby declare Dr. Kott to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the first and second quarters of FY19. In recognition of this achievement, Dr. Kott will receive much coveted Mad Scientist swag!

Enjoy today’s post as we revisit Dr. Kott’s conclusions with links to our previously published posts supporting his findings.]

Ground Warfare in 2050:  How It Might Look

In his paper, Dr. Kott addresses two major trends (currently under way) that will continue to affect combat operations for the foreseeable future. They are:

The employment of small aerial drones for Intelligence, Surveillance, and Reconnaissance (ISR) will continue, making concealment difficult and eliminating distance from opposing forces as a means of counter-detection. This will require the development and use of decoy capabilities (also intelligent robotic devices). This counter-reconnaissance fight between autonomous sensors and countermeasures will feature prominently on future battlefields – “a robot-on-robot affair.”

See our related discussions regarding Concealment in the Fundamental Questions Affecting Army Modernization post and Finders vs Hiders in our Timeless Competitions post.

The continued proliferation of intelligent munitions, operating at greater distances, collaborating in teams to seek out and destroy designated targets, and able to defeat armored and other hardened targets, as well as defiladed and entrenched targets.

See our descriptions of the future recon / strike complex in our Advanced Engagement Battlespace and the “Hyperactive Battlefield” post, and Robotics and Swarms / Semi Autonomous capabilities in our Potential Game Changers post.

These two trends will, in turn, drive the following forecasted developments:

Increasing reliance on unmanned systems, “with humans becoming a minority within the overall force, being further dispersed across the battlefield.”

See Mr. Jeff Becker’s post on The Multi-Domain “Dragoon” Squad: A Hyper-enabled Combat System, and Mr. Mike Matson’s Demons in the Tall Grass, both of which envision future tactical units employing greater numbers of autonomous combat systems; as well as Mr. Sam Bendett’s post on Russian Ground Battlefield Robots: A Candid Evaluation and Ways Forward, addressing the contemporary hurdles that one of our strategic competitors must address in operationalizing Unmanned Ground Vehicles.

Intelligent munitions will be neutralized “primarily by missiles and only secondarily by armor and entrenchments. Specialized autonomous protection vehicles will be required that will use their extensive load of antimissiles to defeat the incoming intelligent munitions.”

See our discussion of what warfare at machine-speed looks like in our Advanced Engagement Battlespace and the “Hyperactive Battlefield”.

Source: Fausto De Martini / Kill Command

Forces will exploit “very complex terrain, such as dense forest and urban environments” for cover and concealment, requiring the development of highly mobile “ground robots with legs and limbs,” able to negotiate this congested landscape.

 

See our Megacities: Future Challenges and Responses and Integrated Sensors: The Critical Element in Future Complex Environment Warfare posts that address future complex operational environments.

Source: www.defenceimages.mod.uk

The proliferation of autonomous combat systems on the battlefield will generate an additional required capability — “a significant number of specialized robotic vehicles that will serve as mobile power generation plants and charging stations.”

See our discussion of future Power capabilities on our Potential Game Changers handout.

“To gain protection from intelligent munitions, extended subterranean tunnels and facilities will become important. This in turn will necessitate the tunnel-digging robotic machines, suitably equipped for battlefield mobility.”

See our discussion of Multi-Domain Swarming in our Black Swans and Pink Flamingos post.

All of these autonomous, yet simultaneously integrated and networked battlefield systems will be vulnerable to Cyber-Electromagnetic Activities (CEMA). Consequently, the battle within the Cyber domain will “be fought largely by various autonomous cyber agents that will attack, defend, and manage the overall network of exceptional complexity and dynamics.”

See MAJ Chris Telley’s post addressing Artificial Intelligence (AI) as an Information Operations tool in his Influence at Machine Speed: The Coming of AI-Powered Propaganda.

The “high volume and velocity of information produced and demanded by the robot-intensive force” will require an increasingly autonomous Command and Control (C2) system, with humans increasingly being on, rather than in, the loop.

See Mr. Ian Sullivan’s discussion of AI vs. AI and how the decisive edge accrues to the combatant with more autonomous decision-action concurrency in his Lessons Learned in Assessing the Operational Environment post.
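
Dr. Kott’s distinction between humans in the loop and humans on the loop can be made concrete with a small sketch. The example below is generic and uses invented action names, timings, and veto rules: in the first mode nothing executes without explicit approval, while in the second the system runs at machine tempo and the supervising human only intervenes to veto.

```python
"""Toy contrast of human-in-the-loop vs. human-on-the-loop control.

Generic illustration only; the action names, timings, and veto rules
are invented for this sketch.
"""
import time
from typing import Callable, Iterable


def in_the_loop(actions: Iterable[str], approve: Callable[[str], bool]) -> None:
    """Nothing executes until the human explicitly approves each action."""
    for action in actions:
        if approve(action):
            print(f"executing {action}")
        else:
            print(f"holding   {action}")


def on_the_loop(actions: Iterable[str], veto: Callable[[str], bool],
                veto_window_s: float = 0.5) -> None:
    """Actions proceed automatically; the supervising human may veto in time."""
    for action in actions:
        print(f"proposed  {action} (veto window {veto_window_s}s)")
        time.sleep(veto_window_s)      # the machine keeps its own tempo
        if veto(action):
            print(f"vetoed    {action}")
        else:
            print(f"executing {action}")


if __name__ == "__main__":
    proposed = ["retask sensor", "reposition relay", "cue counter-battery fire"]
    # In-the-loop: the human is the rate limiter for every single decision.
    in_the_loop(proposed, approve=lambda a: a != "cue counter-battery fire")
    # On-the-loop: the machine runs at its own speed; the human only intervenes.
    on_the_loop(proposed, veto=lambda a: a == "cue counter-battery fire")
```

The contrast is the rate limiter: in-the-loop control caps decision speed at human speed, while on-the-loop control preserves machine tempo at the cost of thinner human oversight.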

If you enjoyed reading this post, please watch Dr. Alexander Kott’s presentation, “The Network is the Robot,” from the Mad Scientist Robotics, Artificial Intelligence, and Autonomy: Visioning Multi-Domain Warfare in 2030-2050 Conference, co-sponsored by the Georgia Tech Research Institute (GTRI), in Atlanta, Georgia, 7-8 March 2017.

… and crank up Mr. Roboto by Styx!

Dr. Alexander Kott serves as the ARL’s Chief Scientist. In this role he provides leadership in the development of ARL’s technical strategy, maintains the technical quality of ARL research, and represents ARL to the external technical community. He has published over 80 technical papers and served as the initiator, co-author, and primary editor of over ten books, including, most recently, Cyber Defense and Situational Awareness (2015), Cyber Security of SCADA and other Industrial Control Systems (2016), and the forthcoming Cyber Resilience of Systems and Networks (2019).

 

122. The Guy Behind the Guy: AI as the Indispensable Marshal

[Editor’s Note: Mad Scientist Laboratory is pleased to present today’s guest blog post by Mr. Brady Moore and Mr. Chris Sauceda, addressing how Artificial Intelligence (AI) systems and entities conducting machine speed collection, collation, and analysis of battlefield information will free up warfighters and commanders to do what they do best — fight and make decisions, respectively. This Augmented Intelligence will enable commanders to focus on the battle with coup d’œil, or the “stroke of an eye,” maintaining situational awareness on future fights at machine speed, without losing precious time crunching data.]

Jon Favreau’s Mike character (left) is the “guy behind the guy” to Vince Vaughn’s Trent character (right) in Swingers, directed by Doug Liman, Miramax (1996) / Source: Pinterest

In the 1996 film Swingers, the characters Trent (played by Vince Vaughn) and Mike (played by Jon Favreau) star as a couple of young guys trying to make it in Hollywood. On a trip to Las Vegas, Trent introduces Mike as “the guy behind the guy” – implying that Mike’s value is that he has the know-how to get things done, acts quickly, and therefore is indispensable to a leading figure. Yes, I’m talking about Artificial Intelligence for Decision-Making on the future battlefield – and “the guy behind the guy” sums up how AI will provide a decisive advantage in Multi-Domain Operations (MDO).

Some of the problems commanders will have on future battlefields will be the same ones they have today and the same ones they had 200 years ago: the friction and fog of war. The rise of information availability and connectivity brings today’s challenges – of which most of us are aware. Advanced adversary technologies will bring future challenges for intelligence gathering, command, communication, mobility, and dispersion. Future commanders and their staffs must be able to deal with both perennial and novel challenges faster than their adversaries, in disadvantageous circumstances we can’t control. “The guy behind the guy” will need to be conversant in vast amounts of information and quick to act.

Louis-Alexandre Berthier was a French Marshal and Vice-Constable of the Empire, and Chief of Staff under Napoleon / oil portrait by Jacques Augustin Catherine Pajou (1766–1828), Source: Wikimedia Commons

In western warfare, the original “guy behind the guy” wasn’t Mike – it was this stunning figure. Marshal Louis-Alexandre Berthier was Napoleon Bonaparte’s Chief of Staff from the start of his first Italian campaign in 1796 until his first abdication in 1814. Berthier was famous for rarely sleeping while on campaign, and Paul Thiebault said of him in 1796:

“Quite apart from his specialist training as a topographical engineer, he had knowledge and experience of staff work and furthermore a remarkable grasp of everything to do with war. He had also, above all else, the gift of writing a complete order and transmitting it with the utmost speed and clarity…No one could have better suited General Bonaparte, who wanted a man capable of relieving him of all detailed work, to understand him instantly and to foresee what he would need.”

Bonaparte’s military record, his genius for war, and his skill as a leader are undisputed, but Berthier so enhanced his capabilities that even Napoleon himself admitted of his absence at Waterloo, “If Berthier had been there, I would not have met this misfortune.”

Augmented Intelligence, where intelligent systems enhance human capabilities (rather than systems that aspire to replicate the full scope of human intelligence), has the potential to act as a digital Chief of Staff to a battlefield commander. Just like Berthier, AI for decision-making would free up leaders to clearly consider more factors and make better decisions – allowing them to command more, and research and analyze less. AI should allow humans to do what they do best in combat – be imaginative, compel others, and act with an inherent intuition, while the AI tool finds, processes, and presents the needed information in time.

So Augmented Intelligence would filter today’s information overload down to only the most relevant and timely information and help quickly communicate intent – but what about yesterday’s friction and fog, and tomorrow’s adversary technology? The future battlefield seems like one where U.S. commanders will be starved for the kind of Intelligence, Surveillance, and Reconnaissance (ISR) and communication we are so used to today – a battlefield with a contested Electromagnetic Spectrum (EMS) and active cyber effects, whether known or unknown. How can commanders and their staffs begin to overcome challenges they have not yet faced in war?

Average is Over: Powering America Beyond the Age of the Great Stagnation, by Tyler Cowen / Dutton, The Penguin Group, published in 2013

In his 2013 book Average is Over, economist Tyler Cowen examines the way freestyle chess players (who are free to use computers when playing the game) use AI tools to compete and win, and makes some interesting observations that are absolutely applicable to the future of warfare at every level. He finds that competitors have to play against foes who have AI tools themselves, and that AI tools make chess-move decisions that can be recognized (by people) and countered. The most successful freestyle chess players combine their own knowledge of the game with these tools, picking and choosing the times and situations in which to use different kinds of AI throughout a game. Their opponents then have to consider not only which AI is being used against them, but also the human operator’s overall strategy. This combination of human knowledge and intuition with AI tools will likely result in a powerful equilibrium of human and machine perception and analysis, and ultimately in enhanced complex decision-making.
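
A rough way to picture that freestyle-chess pattern in code is a human deciding, situation by situation, which of several automated tools to consult, and occasionally overriding them all. The sketch below is a generic illustration; the engine names, the selection heuristic, and the override mechanism are invented for this post, not drawn from Cowen’s book.

```python
"""Sketch of freestyle-chess-style augmented decision-making.

Illustrative only: the engines, the selection heuristic, and the
override mechanism are invented stand-ins.
"""
from typing import Callable, Dict, Optional

# Each "engine" is just a function from a position description to a recommendation.
Engine = Callable[[str], str]

ENGINES: Dict[str, Engine] = {
    "tactical":   lambda pos: f"forcing line in {pos}",
    "positional": lambda pos: f"slow improvement in {pos}",
    "endgame":    lambda pos: f"simplification in {pos}",
}


def human_pick_engine(position: str) -> str:
    """The human's judgment call: which tool suits this situation right now?"""
    if "few pieces" in position:
        return "endgame"
    if "open king" in position:
        return "tactical"
    return "positional"


def freestyle_move(position: str, override: Optional[str] = None) -> str:
    """Combine human judgment with a chosen engine; intuition may override every tool."""
    if override is not None:
        print(f"[human override] {override}")
        return override
    chosen = human_pick_engine(position)
    move = ENGINES[chosen](position)
    print(f"[{chosen} engine] {move}")
    return move


if __name__ == "__main__":
    freestyle_move("middlegame, open king on the h-file")
    freestyle_move("rook ending, few pieces left")
    freestyle_move("quiet closed center", override="prophylactic pawn move")
```

An opponent watching those choices has to model both the tools and the human doing the choosing, which is exactly the layered unpredictability described above.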

With a well-trained and versatile “guy behind the guy,” a commander and staff could employ different aspects of Augmented Intelligence at different times, based on need or appropriateness. Consider a company commander in a dense urban fight, equipped with an appropriate AI tool – a “guy behind the guy” that helps him make sense of the battlefield. What could that commander accomplish with his company? He could employ the tool to notice things humans don’t – or at least notice them faster and alert him. Changes in historic traffic patterns or electronic signals in an area could indicate an upcoming attack or a fleeing enemy, or the system could let the commander know that just a little more specific data could help establish a pattern where enemy data was scarce. And if the commander were presented with the very complex and large problems that characterize modern dense urban combat, the system could help shrink and sequence those problems to make them more solvable – for instance, finding a good subset of information to experiment with and helping prove a hypothesis before trying out a solution in the real world, risking bandwidth instead of blood.
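
One concrete, if deliberately simplified, way such a tool could “notice things humans don’t” is baseline-and-deviation monitoring: compare observed activity against a historical pattern and alert only on sharp departures. The snippet below is a hypothetical example under that assumption, with invented data and thresholds, not a description of any fielded capability.

```python
"""Minimal sketch of baseline-vs-deviation alerting on pattern-of-life data.

Notional only: the data, the z-score rule, and the threshold are
illustrative assumptions.
"""
from statistics import mean, stdev
from typing import Sequence


def flag_anomalies(history: Sequence[float], today: Sequence[float],
                   z_threshold: float = 3.0) -> list[int]:
    """Return the hours whose observed counts deviate sharply from the baseline."""
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for hour, count in enumerate(today):
        z = (count - mu) / sigma if sigma else 0.0
        if abs(z) >= z_threshold:
            flagged.append(hour)
    return flagged


if __name__ == "__main__":
    # Hourly vehicle counts at a key intersection over past weeks (a roughly flat baseline)...
    baseline = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46, 40, 42]
    # ...versus today's counts: the sudden early-morning collapse in traffic is the kind
    # of quiet change that might precede an attack or indicate a fleeing population.
    today = [41, 44, 6, 4, 43, 40, 39, 45, 42, 38, 44, 41]
    for hour in flag_anomalies(baseline, today):
        print(f"ALERT: hour {hour:02d} traffic far outside historical pattern")
```

A real decision-support tool would need far richer models, but the underlying pattern of maintaining a baseline and surfacing only the deviations is what frees the commander to spend attention on judgment rather than data crunching.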

The U.S. strategy for MDO has already identified the critical need to observe, orient, decide, and act faster than our adversaries – multiple AI tools that hold all the necessary information and can present it and act on it quickly will certainly be indispensable to leaders on the battlefield. An AI “guy behind the guy” continuously sizing up the situation, finding the right information, and enabling better, faster decisions in difficult situations is how Augmented Intelligence will best serve leaders in combat and provide battlefield advantage.

If you enjoyed this post, please also:

… watch Juliane Gallina‘s Arsenal of the Mind presentation at the Mad Scientist Robotics, AI, & Autonomy Visioning Multi Domain Battle in 2030-2050 Conference at Georgia Tech Research Institute, Atlanta, Georgia, on 7-8 March 2017

… and learn more about potential AI battlefield applications in our Crowdsourcing the Future of the AI Battlefield information paper.

Brady Moore is a Senior Enterprise Client Executive at Neudesic in New York City. A graduate of The Citadel, he is a former U.S. Army Infantry and Special Forces officer with service as a leader, planner, and advisor across Iraq, Afghanistan, Africa, and South Asia. After leaving the Army in 2011, he obtained an MBA at Penn State and worked as an IBM Cognitive Solutions Leader covering analytics, AI, and Machine Learning in National Security. He’s the Junior Vice Commander of VFW Post 2906 in Pompton Lakes, NJ, and Cofounder of the Special Forces Association Chapter 58 in New York City. He also works with Elite Meet as often as he can.

Chris Sauceda is an account manager on the IBM U.S. Army Defense and Intel account, covering Command and Control, Cyber, and Advanced Analytics/Artificial Intelligence. Chris served on active duty and deployed in support of Operation Iraqi Freedom, and has been in the Defense contracting business for over 13 years. Focused on driving cutting-edge technologies to the warfighter, he also currently serves as a Signal Officer in the Texas Military Department.