[Editor’s Note: Mad Scientist is pleased to announce our latest episode of “The Convergence” podcast, featuring Messrs. Michael Meier, Special Assistant to the Judge Advocate General for Law of War Matters, United States Army, and proclaimed Mad Scientist Shawn Steene, Senior Force Developer for Emerging Technologies, Office of the Under Secretary of Defense for Policy. Join us as they discuss the ground truth on regulations and directives regarding lethal autonomy and what the future of autonomy might mean in a complex threat environment. Please note that this podcast and several of the embedded links below are best accessed via a non-DoD network due to network priorities for teleworking — Enjoy!]
Michael Meier is the Special Assistant to the Judge Advocate General (JAG) for Law of War Matters at Headquarters, Department of the Army. As such, Mr. Meier serves as the law of war subject matter expert for the U.S. Army JAG Corps, advising on policy issues involving the law of war. Mr. Meier also reviews all proposed new U.S. Army weapons and weapons systems to ensure they are consistent with U.S. international law obligations. Additionally, he is an Adjunct Professor at Georgetown University Law Center, instructing courses on the Law of Armed Conflict. Mr. Meier is a retired JAG officer, having served in the U.S. Army for 23 years.
Shawn Steene is the Senior Force Developer for Emerging Technologies, Office of the Under Secretary of Defense for Policy, where his portfolio includes Emerging Technologies and S&T, including Autonomous Weapon Systems policy and Directed Energy Weapons policy. Prior to joining OSD Strategy & Force Development, Mr. Steene worked in OSD Space Policy, where his portfolio included Space Support (launch, satellite control, orbital debris mitigation, and rendezvous and proximity operations), as well as strategic stability and all space-related issuances (Directives, Instructions, DTMs, etc.). He is a proclaimed Mad Scientist, having presented and served as a discussion panelist in our Frameworks (Ethics & Policy) for Autonomy on the Future Battlefield, the final webinar in our Mad Scientist Robotics and Autonomy series of virtual events.
In today’s podcast, Messrs. Meier and Steene discuss the ground truth on regulations and directives regarding lethal autonomy and what the future of autonomy might mean in a complex threat environment. The following bullet points highlight key insights from our interview with them:
- Current law and policy do not specifically prohibit or restrict the use of autonomous weapons. However, these systems will need to operate within the law of armed conflict and Department of Defense (DoD) directives. These restrictions mean that autonomous systems must be able to distinguish between appropriate targets and non-combatants, maintain proportionality in attacks, and take feasible precautions to reduce risk to civilians and protected objects.
- Ultimately, operators and human supervisors will be held responsible under the law of armed conflict and U.S. policy. Thus, appropriate safeguards will need to be adopted to ensure adequate human oversight of autonomous systems. DoD directives establish guidelines for this supervision and provide for case-by-case reviews of systems with autonomous capabilities.
- Artificial intelligence (AI) and autonomy are not interchangeable. While some autonomous systems use AI, this is not always the case.
- The United States recognizes and is actively addressing the ethical dimensions of autonomous systems. DoD directives and AI ethical principles are examples of ethics in action, establishing an acceptable code of conduct for lethal autonomous weapons.
- Importantly, DoD directives on autonomous systems are deliberately not threat-informed, in an effort to avoid a "race to the bottom." This allows definitions, such as "appropriate level of human judgment," to evolve with technology development without sacrificing core ethical principles.
- Initially, lethal autonomous systems will need to perform well above the "better than human" standard. Over time, as trust in these platforms increases, this standard may be re-evaluated. As younger generations, who may be more inclined to trust advanced AI-enabled machinery, enter the force, trust in and use of autonomous weapons are likely to grow.
- Human-machine teaming is a priority in weapons development, indicating that semi-autonomous systems are currently preferred over fully autonomous weapons. Development of semi-autonomous platforms may pave the way for increased trust in fully autonomous platforms.
Stay tuned to the Mad Scientist Laboratory for our next episode of “The Convergence,” featuring COL Scott Shaw, Commander, U.S. Army Asymmetric Warfare Group, discussing the future of ground warfare, including lessons learned from the Nagorno-Karabakh Conflict in 2020 and the realities of combat for tomorrow’s Soldiers, on 4 March 2021!
If you enjoyed this post, check out our Insights from the Robotics and Autonomy Series of Virtual Events, all of the associated webinar content (presenter biographies, slide decks, and notes), and watch the associated videos — including the Frameworks (Ethics & Policy) for Autonomy on the Future Battlefield — via a non-DoD network.
… see The Ethics and the Future of War panel discussion [via a non-DoD network], facilitated by LTG Jim Dubik (USA-Ret.) from our Visualizing Multi Domain Battle in 2030-2050 Conference at Georgetown University on 26 July 2017
… and read the following posts:
Ethics, Morals, and Legal Implications
An Appropriate Level of Trust…
Integrating Artificial Intelligence into Military Operations, by Dr. James Mancillas
>>> REMINDER 1: Mad Scientist is pleased to announce Competition and Conflict in the Next Decade, the next webinar in our continuing series of monthly virtual events – Are We Doing Enough, Fast Enough? – exploring our adversaries’ views on Competition, Crisis, Conflict, and Change next Tuesday, 23 February 2021 (starting at 1030 EST). Join our panelists: Dr. George Friedman, Founder and Chairman of Geopolitical Futures; John Edwards, U.S. Secret Service’s Deputy Special Agent in Charge, Office of Strategic Planning and Policy; Dr. Eleonora Mattiacci, Assistant Professor for Political Science, Amherst College; Dr. Zack Cooper, Research Fellow, American Enterprise Institute, Lecturer, Princeton University, and Adjunct Assistant Professor, Georgetown University; and Collin Meisel, Program Lead, Diplometrics, Frederick S. Pardee Center for International Futures, University of Denver, as they present their diverse perspectives on this vital topic and then take questions from registered participants.
Register here [via a non-DoD network] to participate in this informative event!
>>> REMINDER 2: Our Mad Scientist Writing Contest on Competition, Crisis, Conflict, and Change seeks to crowdsource the intellect of the Nation (i.e., You!) regarding:
How will our competitors deny the U.S. Joint Force’s tactical, operational, and strategic advantages to achieve their objectives (i.e., win without fighting) in the Competition and Crisis Phases?
How will our adversaries seek to overmatch or counter U.S. Joint Force strengths in future Large Scale Combat Operations?
Review the submission guidelines on our contest flyer, then get cracking on brainstorming and crafting your innovative and insightful visions. You've got less than four weeks left: the deadline for submission is 15 March 2021!