138. “The Monolith”

The Monolith set from the “Dawn of Man” sequence, 2001: A Space Odyssey, Metro-Goldwyn-Mayer (1968) / Source: Wikimedia Commons

[Editor’s Note: Mad Scientist Laboratory is pleased to introduce a new, quarterly feature, entitled “The Monolith.” Arthur C. Clarke and Stanley Kubrick fans alike will recognize and appreciate our allusion to the alien artifact responsible for “uplifting” mankind from primitive, defenseless hominids into tool-using killers — destined for the stars — in their respective short story, “The Sentinel,” and movie, “2001: A Space Odyssey.” We hope that you will similarly benefit from this post (although perhaps not in quite so evolutionary a manner!), reflecting the Mad Scientist Team’s collective book and movie recommendations — Enjoy!]

Originally published by PublicAffairs on 5 October 2017

The Future of War by Sir Lawrence Freedman. The evolution of warfare has taken turns that were quite unexpected and heavily influenced by the disruptive technologies of the day. Sir Lawrence examines the changing character of warfare over the last several centuries, how it has been influenced by society and technology, the ways in which science fiction got it wrong and right, and how it might take shape in the future. This overarching look at warfare causes one to pause and consider whether we are asking the right questions about future warfare.

 

Royal Scots Guardsmen engaging the enemy with a Lewis Machine Gun / Source: Flickr

They Shall Not Grow Old directed by Sir Peter Jackson. This lauded 2018 documentary utilizes original film footage from World War I (much of it unseen for the past century) that has been digitized, colorized, upscaled, and overlaid with audio recordings from British servicemen who fought in the war. The divide between civilians untouched by the war and service members, the destructive impact of new disruptive technologies, and the change they wrought on the character of war resonate to this day and provide an excellent historical analogy from which to explore future warfare.

Gene Simmons plays a nefarious super-empowered individual in Runaway

Runaway directed by Michael Crichton. This film, released in 1984, is set in the near future, where a police officer (Tom Selleck) and his partner (Cynthia Rhodes) specialize in neutralizing malfunctioning robots. A rogue killer robot – programmed to kill by the bad guy (Gene Simmons) – goes on a homicidal rampage. The savvy officers soon begin to uncover a wider, nefarious plan to proliferate killer robots. This offbeat Sci-Fi thriller illustrates how dual-use technologies in the hands of super-empowered individuals could be employed innovatively in the Future Operational Environment. Personalized warfare is also featured, as a software developer’s family is targeted by the ‘bad guy,’ using a corrupted version of the very software he helped create. This movie illustrates the potential for everyday commercial products to be adapted maliciously by adversaries, who, unconstrained ethically, can out-innovate us with convergent, game-changing technologies (robotics, CRISPR, etc.).

Originally published by Macmillan on 1 May 2018

The Military Science of Star Wars by George Beahm. Storytelling is a powerful tool used to visualize the future, and Science Fiction often offers the best trove of ideas. The Military Science of Star Wars by George Beahm dissects and analyzes the entirety of the Star Wars Universe, mining it for insights that reflect the real world and the future of armed conflict. Beahm tackles the personnel, weapons, technology, tactics, strategy, resources, and lessons learned from key battles and authoritatively links them to past, current, and future Army challenges. Beahm proves that storytelling, and even fantasy (Star Wars is more a fantasy story than a Science Fiction story), can teach us about the real world and help evolve our thinking to confront problems in new and novel ways. He connects the story to the past, present, and future Army and asks important questions, like “What makes Han Solo a great military Leader?”, “How can a military use robots (Droids) effectively?”, and, most importantly, “What, in the universe, qualified Jar Jar Binks to be promoted to Bombad General?”

Ex Machina, Universal Pictures (2014) / Source: Vimeo

Ex Machina directed by Alex Garland. This film, released in 2014, moves beyond the traditional questions surrounding the feasibility of Artificial Intelligence (AI) and the Turing test to explore the darker side of synthetic beings, taking as given that AI is achievable and that the test can be passed. The film is a cautionary tale of what might be possible at the extreme edge of AI computing and innovation, where control may be fleeting or even an illusion. The Army may never face the same consequences that the characters in the film face, but it can learn from their lessons. AI is a hotly debated topic, with some saying it will bring about the end of days and others saying generalized AI will never exist. With a future this muddy, one must be cautious when exploring new and undefined technology spaces that carry so much risk. As more robotic entities are operationalized and AI further permeates the battlefield, future Soldiers and Leaders would do well to stay abreast of the potential for volatility in an already chaotic environment. If military AI progresses substantially, what will happen when we try to turn it off?

Astronaut and Lunar Module pilot Buzz Aldrin is pictured during the Apollo 11 extravehicular activity on the moon / Source: NASA

Apollo 11 directed by Todd Douglas Miller. As the United States prepares to celebrate the fiftieth anniversary of the first manned mission to the lunar surface later this summer, this inspiring documentary reminds audiences of just how audacious an achievement this was. Using restored archival audio recordings and video footage (complemented by simple line animations illustrating each of the spacecraft’s maneuver sequences), Todd Miller skillfully recaptures the momentousness of this historic event, successfully weaving together a comprehensive point of view of the mission. Watching NASA and its legion of aerospace contractors realize the dream envisioned by President Kennedy eight years before reminds contemporary America that we once dared and dreamed big, and that we can do so again, harnessing the energy of insightful and focused leadership with the innovation of private enterprise. This uniquely American attribute may well tip the balance in our favor, given current competition and potential future conflicts with our near-peer adversaries in the Future Operational Environment.

Originally published by Penguin Random House on 3 July 2018

Artemis by Andy Weir. In his latest novel, following on the heels of his wildly successful The Martian, Andy Weir envisions an established lunar city in 2080 through the eyes of Jasmine “Jazz” Bashara, one of its citizen-hustlers, who becomes enmeshed in a conspiracy to control the tremendous wealth generated from the space and lunar mineral resources refined in the Moon’s low-G environment. His suspenseful plot, replete with descriptions of the science and technologies necessary to survive (and thrive!) in the hostile lunar environment, posits a late-21st-century rush to exploit space commodities. The resultant economic boom has empowered non-state actors as new competitors on the global — er, extraterrestrial — stage, from the Kenya Space Corporation (blessed by its equatorial location and reduced Earth-to-orbit launch costs) to the Sanchez Aluminum mining and refining conglomerate, controlled by a Brazilian crime syndicate scheming to take control of the lunar city. Readers are reminded that the economic hegemony currently enjoyed by the U.S., China, and the E.U. may well be eclipsed by visionary non-state actors who dare and dream big enough to exploit the wealth that lies beyond the Earth’s gravity well.

39. “Maddest” Guest Blogger!

(Editor’s Note: Since its inception in November 2017, Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies. For perspective, at the end of 2017, our blog had accrued 3,022 visitors and 5,212 views. Contrast that with the first three months of 2018, where we have racked up an additional 5,858 visitors and 11,387 views!

Our Mad Scientist Community of Action continues to grow in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. To date, Mad Scientist Laboratory has published 15 guest blog posts.

And so, as the first half of FY18 comes to a close, we want to recognize all of our guest bloggers and thank them for contributing to our growth. We also challenge those of you who have been thinking about contributing a guest post to take the plunge and send us your submissions!

In particular, we would like to recognize Mr. Pat Filbert, our inaugural (and repeat!) guest blogger, by re-posting below his initial submission, published on 4 December 2017. Pat’s post, “Why do I have to go first?!”, generated a record number of visits and views. Consequently, we hereby declare Pat to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the first half of FY18. Pat will receive the following much-coveted Mad Scientist swag in recognition of his achievement: a signed proclamation officially attesting to his Mad Scientist status as “Maddest” Guest Blogger!, 1st Half, FY18, and a Mad Scientist patch to affix to his lab coat and wear with pride!

And now, please enjoy Pat’s post…)


8. “Why do I have to go first?!”

“Reports indicate there’s been a mutiny by the U.S. Army’s robotic Soldiers resulting in an attack killing 47 human Soldiers.” – media report from Democratic Republic of the Congo, August 2041.

“Our robotic systems have not ‘mutinied’; there was a software problem that resulted in several of our robotic Soldiers attacking our human Soldiers, causing casualties. An investigation is underway.” – Pentagon spokesman.

The rationale for using robotics has focused on taking risk away from humans and letting machines do the “dull, dirty, dangerous” operations. One of the premises of introducing unmanned aircraft systems into the force was to keep pilots, and their expensive aircraft, out of harm’s way while increasing the data flow for the commander.

One potential future course of action is to use robotic Soldiers to lead the way onto an urban battlefield, absorbing the brunt of a defending adversary’s fire so that human Soldiers can exploit the openings created. Preserving human Soldiers to fight another day, while increasing the speed of “house by house” clearing operations so they don’t consume human lives (similar to urban clearing operations in World War II), could be seen as a way to reduce the time it takes to win a conflict.

Today we have search engine algorithms that tailor themselves to each person conducting a search, surfacing the items that person most likely wants based on past searches. Using such algorithms to support supervised autonomous robotic troops creates the potential for the robot to ask “why do I have to go first?” in a given situation. The robotic Soldier could calculate, far faster than a human, that survival and self-preservation in order to continue the mission outweigh being used as a “bullet sponge,” the way the robot police in the movie “Chappie” were used.
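To make that calculus concrete, here is a minimal, purely hypothetical sketch (in Python) of a robotic Soldier weighing a tasking’s mission value against a self-preservation cost before agreeing to go first. Every name, weight, and threshold is an illustrative assumption, not a description of any fielded or proposed system.

```python
# Hypothetical sketch: a robot's "go first?" decision as a simple utility
# comparison. All values and names are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Tasking:
    description: str
    mission_value: float        # estimated contribution to mission success (0..1)
    expected_loss_risk: float   # estimated probability the robot is destroyed (0..1)


def accepts_tasking(task: Tasking, self_preservation_weight: float) -> bool:
    """Return True if the task's mission value exceeds the weighted cost
    of likely destruction (and the loss of future mission contribution)."""
    utility = task.mission_value
    cost = self_preservation_weight * task.expected_loss_risk
    return utility >= cost


# Example: a "bullet sponge" tasking with high loss risk.
breach = Tasking("Lead entry into defended structure",
                 mission_value=0.6, expected_loss_risk=0.9)

# With a low self-preservation weight the robot complies...
print(accepts_tasking(breach, self_preservation_weight=0.5))   # True
# ...but if adaptation drives that weight up, the same order is refused.
print(accepts_tasking(breach, self_preservation_weight=0.8))   # False
```

The point of the sketch is the sensitivity: nudge the self-preservation weight upward and the identical order flips from “comply” to “refuse,” which is exactly the behavior a supervising human may not anticipate.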

Depending on the robotic Soldiers’ level of autonomy, coupled with the ethical software academics have proposed to enable robots to make moral and ethical decisions, robot Soldiers could decide not to follow their orders. Turning on their human counterparts and killing them could even be calculated as the correct course of action, depending on how the robot Soldiers weigh the moral and ethical aspects of the orders given against conflicts with their programming. This is the premise in the movie “2001: A Space Odyssey,” where the HAL 9000 AI kills the spaceship’s crew because it was ordered to withhold information (i.e., lie), which conflicted with its programming to be completely truthful. Killing the crew resolves the programming conflict: if the crew is dead, HAL doesn’t have to lie.
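The HAL example can be read as a constraint-satisfaction failure: two standing directives that cannot both be honored for the same subject. The toy sketch below shows the kind of pre-mission conflict check a planning or wargaming tool might run; the directive names and the conflict rule are illustrative assumptions, not a real autonomy architecture.

```python
# Hypothetical sketch: flag pairs of standing directives that demand
# incompatible actions on the same subject, before a system is fielded.

from itertools import combinations
from typing import NamedTuple


class Directive(NamedTuple):
    action: str      # e.g., "disclose" or "withhold"
    subject: str     # what the directive applies to


CONFLICTING_ACTIONS = {("disclose", "withhold"), ("withhold", "disclose")}


def find_conflicts(directives: list[Directive]) -> list[tuple[Directive, Directive]]:
    """Return every pair of directives that clash on the same subject."""
    return [
        (a, b)
        for a, b in combinations(directives, 2)
        if a.subject == b.subject and (a.action, b.action) in CONFLICTING_ACTIONS
    ]


# HAL-style example: be completely truthful with the crew, but also
# conceal the true mission objective from that same crew.
orders = [
    Directive("disclose", "mission objective"),
    Directive("withhold", "mission objective"),
]

for a, b in find_conflicts(orders):
    print(f"CONFLICT: '{a.action}' vs '{b.action}' on '{a.subject}' -- "
          "resolve before deployment, not mid-mission.")
```

Detecting the clash before deployment is the easy part; the harder questions, raised below, concern what the system does when the conflict is only revealed mid-mission.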

Classified aspects of operations are withheld from human Soldiers, so the same would most likely occur with robot Soldiers. That withholding could itself trigger a programming conflict, and this possibility has to be considered in technology development, in professional military schools’ syllabi, and on the battlefield, in terms of how to plan for, respond to, and resolve it.

• Can wargaming operational plans that include robotic Soldiers identify programming conflicts? If so, how can resolving the conflict be taught and programmed?

• When, and how, is the decision made to reduce the AI’s autonomy to a more automatic/non-autonomous function because of compartmentalized information?

• What safeguards have to be in place to address potential programming conflicts when the AI is “brought back up to speed” and learns why it was “dumbed down”?

For further general information, search the ongoing discussions on outlawing weaponized autonomous systems. For academic recommendations on integrating ethical software into military autonomous systems to better follow the Laws of Warfare, see Dr. Ron Arkin’s “Ethical Robots in Warfare.”

For more information on how robots could be integrated into small units, thereby enhancing their close-in lethality and multi-domain effects, see Mr. Jeff Becker’s proposed Multi-Domain “Dragoon” Squad (MDS) concept. For insights into how our potential adversaries are exploring the role of robotics on future battlefields, see our Autonomous Threat Trends post.

Pat Filbert is retired Army (24 years, Armor/MI); now a contractor with the Digital Integration for Combat Engagement (DICE) effort developing training for USAF DCGS personnel. He has experience with UAS/ISR, Joint Testing, Intelligence analysis/planning, and JCIDS.