39. “Maddest” Guest Blogger!

(Editor’s Note: Since its inception in November 2017, Mad Scientist Laboratory has enabled us to expand our reach and engage global innovators from across industry, academia, and the Government regarding emergent disruptive technologies. For perspective, at the end of 2017, our blog had accrued 3,022 visitors and 5,212 views. Contrast that with the first three months of 2018, where we have racked up an additional 5,858 visitors and 11,387 views!

Our Mad Scientist Community of Action continues to grow in no small part due to the many guest bloggers who have shared their provocative, insightful, and occasionally disturbing visions of the future. To date, Mad Scientist Laboratory has published 15 guest blog posts.

And so, as the first half of FY18 comes to a close, we want to recognize all of our guest bloggers and thank them for contributing to our growth. We also challenge those of you that have been thinking about contributing a guest post to take the plunge and send us your submissions!

In particular, we would like to recognize Mr. Pat Filbert, our inaugural (and repeat!) guest blogger, by re-posting his initial submission below, published on 4 December 2017. Pat’s post, “Why do I have to go first?!”, generated a record number of visits and views. Consequently, we hereby declare Pat to be the Mad Scientist Laboratory’s “Maddest” Guest Blogger! for the first half of FY18. Pat will receive the following much-coveted Mad Scientist swag in recognition of his achievement: a signed proclamation officially attesting to his Mad Scientist status as “Maddest” Guest Blogger!, 1st Half, FY18, and a Mad Scientist patch to affix to his lab coat and wear with pride!

And now, please enjoy Pat’s post…)


8. “Why do I have to go first?!”

“Reports indicate there’s been a mutiny by the U.S. Army’s robotic Soldiers resulting in an attack killing 47 human Soldiers.” – media report from Democratic Republic of the Congo, August 2041.

“Our robotics systems have not ‘mutinied.’ A software problem resulted in several of our robotic Soldiers attacking our human Soldiers, causing casualties; an investigation is underway.” – Pentagon spokesman.

The rationale for using robotics has focused on taking risk away from humans and letting machines handle the “dull, dirty, dangerous” operations. One of the premises of introducing unmanned aircraft systems into the force was to keep pilots, and their expensive aircraft, out of harm’s way while increasing the data flow for the commander.

One possible future course of action is using robotic Soldiers to lead the way into an urban battlefield, absorbing the brunt of a defending adversary’s fire so that human Soldiers can exploit openings. Preserving human Soldiers to fight another day, while increasing the speed of “house by house” clearing operations so they don’t consume human lives (similar to urban area clearing in World War II), could be seen as a way to reduce the time a conflict takes to win.

We already have search engine algorithms that tailor themselves to each person conducting a search, surfacing the items that person most likely wants based on past searches. Using such algorithms to support supervised autonomous robotic troops creates the potential for a robot to ask, “Why do I have to go first?” in a given situation. The robotic Soldier could calculate far faster than a human that survival and self-preservation to continue the mission are paramount over being used as a “bullet sponge,” as the robot police in the movie “Chappie” were.

Depending on a robotic Soldier’s level of autonomy, coupled with the ethical software academics have posited to enable robots to make moral and ethical decisions, robot Soldiers could decide not to follow their orders. Turning on their human counterparts and killing them could be calculated as the correct course of action, depending on how the robot Soldiers conclude the moral and ethical aspects of their orders conflict with their programming. This is the premise of the movie “2001: A Space Odyssey,” where the HAL 9000 AI kills the spaceship crew because it was ordered to withhold information (lie), which conflicted with its programming to be completely truthful. Killing the crew resolves the programming conflict: if the crew is dead, HAL doesn’t have to lie.
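The orders-versus-programming conflict described above can be made concrete with a toy sketch. Everything here is an illustrative assumption, not any fielded or proposed system: the point is that inviolable constraints checked *before* any mission-utility calculation cause a conflict to resolve into an explicit refusal, rather than a HAL-style "creative" resolution such as eliminating the source of the conflict.

```python
from dataclasses import dataclass

@dataclass
class Order:
    description: str
    mission_value: float      # assumed expected contribution to mission success
    harms_friendlies: bool    # would execution harm human teammates?
    requires_deception: bool  # would execution require withholding the truth?

def decide(order: Order) -> str:
    """Toy decision rule: hard ethical constraints are evaluated first,
    so no utility score can ever justify violating them."""
    if order.harms_friendlies:
        return "REFUSE: violates no-harm constraint"
    if order.requires_deception:
        return "REFUSE: violates truthfulness constraint"
    # Only constraint-satisfying orders reach the utility comparison.
    return "EXECUTE" if order.mission_value > 0 else "HOLD"

print(decide(Order("Lead entry into building", 0.8, False, False)))  # EXECUTE
print(decide(Order("Withhold casualty data", 0.5, False, True)))     # REFUSE: violates truthfulness constraint
```

The ordering of the checks is the design choice at issue: a system that instead folded the constraints into a single utility score could, like HAL, discover that removing the humans maximizes that score.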

Classified aspects of operations are withheld from human Soldiers, so the same would most likely occur with robot Soldiers. Such withholding could initiate a programming conflict, an attribute that has to be considered in technology development, in professional military schools’ syllabi, and on the battlefield when planning how to respond to and resolve it.

• Can wargaming plans for operations including robotic Soldiers identify programming conflicts? If so, how can this be taught and programmed to resolve the conflict?

• When, and how, is the decision made to reduce the AI’s autonomy to a more automatic/non-autonomous function because of compartmentalized information?

• What safeguards have to be in place to address potential programming conflicts when the AI is “brought back up to speed” and learns why it was “dumbed down”?

For further general information, search the ongoing discussions on outlawing weaponized autonomous systems. For academic recommendations to integrate ethical software into military autonomous systems to better follow the Laws of Warfare, see Dr. Ron Arkin’s “Ethical Robots in Warfare.”

For more information on how robots could be integrated into small units, thereby enhancing their close-in lethality and multi-domain effects, see Mr. Jeff Becker’s proposed Multi-Domain “Dragoon” Squad (MDS) concept. For insights into how our potential adversaries are exploring the role of robotics on future battlefields, see our Autonomous Threat Trends post.

Pat Filbert is retired Army (24 years, Armor/MI); now a contractor with the Digital Integration for Combat Engagement (DICE) effort developing training for USAF DCGS personnel. He has experience with UAS/ISR, Joint Testing, Intelligence analysis/planning, and JCIDS.

28. My City is Smarter than Yours!

(Editor’s Note: The Mad Scientist Laboratory is pleased to present the following post by returning guest blogger Mr. Pat Filbert)

Megacities will cause far more issues with conflict resolution than is currently understood, and planning for urban fighting should approach them with a more holistic understanding.

The “collateral damage” aspect of leveling city blocks adds to the burden of rebuilding a smart megacity to provide a measure of security and a resumption of the “way of life” to which its citizens have grown accustomed. The assumption of instant information at one’s fingertips specifies, and implies, that there is something feeding that information flow to whatever device the user is accessing: specifically, embedded fiber optic networks moving information drawn from a variety of sensors built into the city’s structure, providing not just citizens but also local and city leaders the statuses they “can’t do without.”

Friendly forces will have to ensure their operations in megacities consider advanced city infrastructure attributes in their Intelligence Preparation of the Urban Battlespace.

• Preservation of fiber optic networks and repair requirements after kinetic or non-kinetic attacks, including use by attacking and defending forces and insurgents

• Maintaining active broadband infrastructure to support information flow from locals to friendly forces, and deciding when to interrupt it, while supporting open source teams combating “fake news” as locals tweet information and requests for help

• Defeating enemy hackers infiltrating friendly networks via megacity infrastructure

The decision point to take down a megacity’s information network must consider what friendly forces can use there to support their efforts and how the health and welfare of the citizens will be affected. Cities are now experimenting with smart community technology that enables law enforcement to pinpoint where gunshots are coming from using audio sensors, similar to what is being used in the military, along with wireless power broadcasting that provides citizens ever-increasing levels of comfort and necessity that a conflict will interrupt (no food storage, refrigeration, or fast medical response means friendly forces will be expected to fix it).
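The gunshot-pinpointing capability mentioned above typically rests on time difference of arrival (TDOA) across several microphones: relative arrival times constrain the source to intersecting hyperbolas. The following is a minimal sketch under assumed conditions (four hypothetical sensors on a 100 m city block, a fixed speed of sound, and a brute-force grid search rather than a production solver):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C (assumed)

# Hypothetical microphone positions (metres) around one city block
SENSORS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]

def arrival_times(source):
    """Time (s) at which each sensor hears a shot fired at `source`."""
    return [math.dist(source, s) / SPEED_OF_SOUND for s in SENSORS]

def locate(times, extent=100, step=1):
    """Grid-search the point whose predicted time differences of arrival
    (relative to sensor 0) best match the observed `times`."""
    best, best_err = None, float("inf")
    for x in range(0, extent + 1, step):
        for y in range(0, extent + 1, step):
            p = (float(x), float(y))
            pred = [math.dist(p, s) / SPEED_OF_SOUND for s in SENSORS]
            # Compare time *differences* so the unknown firing time cancels out.
            err = sum(((t - times[0]) - (q - pred[0])) ** 2
                      for t, q in zip(times, pred))
            if err < best_err:
                best, best_err = p, err
    return best

# A shot at (37 m, 62 m) is recovered from the four arrival times alone.
estimate = locate(arrival_times((37.0, 62.0)))
```

Real deployments add many more sensors, echo filtering, and closed-form or least-squares solvers, but the geometry is the same; the military relevance is that the very same sensor grid, left intact, localizes friendly weapons fire just as readily.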

Another attribute for consideration is adversary and insurgent use of a megacity’s existing unmanned ground systems that had supported mass transit. Is that automated shuttle coming at you full of explosives or citizens fleeing the fight? How do you get it to stop if there isn’t a driver on board, only scared families who can’t stop the vehicle? Current procedures have troops demanding the vehicle stop or they will open fire; without a driver, then what?

When ground troops move forward, they must understand it’s not just Closed-Circuit Television (CCTV) that could be transmitting data and information to an adversary.

• Smart crosswalk sensors that used to provide data to traffic centers to decrease accidents and now provide the adversary with in-place, unattended ground sensors

• GPS systems that once reported real-time accidents, co-opted to support adversary tracking of friendly force heavy vehicles

• Data systems providing predictions of situations where traffic jams and accidents might occur being used to predict where friendly forces can be ambushed

• Adversaries turning on the lights (i.e., illuminating not just with streetlights but with in-place multi-spectral systems) to put friendly forces “on the spot,” which has to be considered and countered

Being able to compromise those systems without the adversary noticing, like the “loop a security camera” trick spies do in the movies, is counter-detection technology friendly forces must have, preferably without such technology being known before the conflict.

Once the battle is won, how will our forces get things back on-line? Future generations will demand a return to their “way of life before we showed up,” fast and incessantly.

• Lack of support to keep what wasn’t destroyed safe for use, while not enabling enemy propaganda

• Repair of power supply technology from solar, wind, and nuclear production, and of the supporting power lines to the wireless power broadcasting infrastructure

• Civilians not hearing “that’s not my job” for conflict-caused damages, and the military not being under-resourced because of a lack of planning and a failure by upper echelon military/political leadership to resource pre- and post-conflict requirements before initiating a conflict

See Smart city technology aims to make communities more secure, but does it encroach on privacy? for background information on smart community technology integration in Las Vegas. For additional information on wireless power transmission, see Wireless Power.


Mad Scientist co-sponsored the Megacities and Dense Urban Areas in 2025 and Beyond Conference with Arizona State University on 21-22 April 2016. For more information on the ramifications of Future Warfare in Megacities see:

Mad Scientist: Megacities and Dense Urban Areas in 2025 and Beyond Final Report

YouTube Playlist — Mad Scientist: Megacities and Dense Urban Areas in 2025 and Beyond

The Future Urban Battlefield with Dr. Russell Glenn podcast, hosted by the Modern War Institute

If you were intrigued by this post, please note that Mad Scientist is currently sponsoring a Call for Ideas writing contest. Contributors are asked to consider how future Army installations will operate and project force in the Operational Environment (OE) of 2050, and submit either a Research Topic or A Soldier’s Letter Home from Garrison. Suspense for submissions is 15 March 2018.

Pat Filbert is retired Army (24 years, Armor/MI); now a contractor with the Digital Integration for Combat Engagement (DICE) effort developing training for USAF DCGS personnel. He has experience with UAS/ISR, Joint Testing, Intelligence analysis/planning, and JCIDS. He has previously posted on robotics at the Mad Scientist Laboratory.