At the Visualizing Multi Domain Battle 2030-2050 Conference, Georgetown University, 25-26 July 2017, Mad Scientists examined the requirement for United States policymakers and warfighters to confront the ethical dilemmas arising from the ever-increasing convergence of Artificial Intelligence (AI) and smart technologies, both in battlefield systems and embedded within individual Soldiers. While these disruptive technologies have the potential to lessen the burden of many military tasks, they may carry associated ethical costs. The Army must be prepared to enter new ethical territory and make difficult decisions about the creation and employment of these combat multipliers.
Human Enhancement:
“Human enhancement will undoubtedly afford the Soldier a litany of increased capabilities on the battlefield. Augmenting a human with embedded communication technology, sensors, and muscular-skeletal support platforms will allow the Soldier to offload many physical, mundane, or repetitive tasks, but it will also continue to blur the line between human and machine. Among the many ethical and legal questions this poses: At what point does a Soldier become more machine than human, and how will that Soldier be treated and recognized by law? At what point does a person lose their legal personhood? If a person’s nervous system is intact but other organs and systems are replaced by machines, is he/she still a person? These questions have no concrete answers at present; more importantly, no policy even begins to address them. The Army must take these implications seriously and draft policy that addresses these issues now, before these technologies become commonplace. Doing so will guide the development and employment of these technologies, ensure they are administered properly, and protect Soldiers’ rights.”
Fully Autonomous Weapons:
“Fully autonomous weapons with no human in the loop will be employed on the battlefield in the near future. The United States may not be the one to employ them, but they will be present on the battlefield by 2050. This technology presents two distinct dilemmas. The first is determining responsibility when an autonomous weapon does not act in a manner consistent with our expectations. For a traditional weapon, the decision to fire always comes back to a human counterpart; for an autonomous weapon, that may not be the case. Does the responsibility then lie with the human who programmed the machine? Should we treat the programmer the same as we treat the human who physically pulled the trigger? Current U.S. policy does not allow a weapon to be fired without a human in the loop, which alleviates this problem by keeping responsibility with the human. The second dilemma is whether this is the best use of automated systems and, more importantly, whether our adversaries will adhere to the same policy. It is almost assured that the answer to both questions is no. There is little reason to believe that our adversaries will hold themselves to the same high level of ethics as the Army. This means Soldiers will likely encounter autonomous weapons on the future battlefield that can target, slew, and fire on their own. The human Soldiers facing them will be slower, less accurate, and therefore less lethal. The Army is thus at a crossroads: it must decide whether employing automated weapons aligns with its ethical principles or compromises them. It must also be prepared for a future battlefield where it is at a distinct disadvantage, as its adversaries fire with speed and accuracy unmatched by humans. Policy must address these dilemmas, and discussion must be framed around a battlefield where autonomous weapons operating at machine speed are the norm.”
Given the inexorable advance and implementation of the aforementioned technologies, how will U.S. policymakers and warfighters tackle the following concomitant ethical dilemmas:
• How do these technologies affect U.S. research and development, rules of engagement, and in general, the way we conduct war?
• Must the United States cede some of its moral obligations and ethical standards in order to gain/retain relative military advantage?
• At what point does the efficacy of AI-enabled targeting and decision-making render it unethical to maintain a human in the loop?
For additional insights regarding these dilemmas, watch the Ethics and the Future of War panel discussion from this Georgetown conference, facilitated by LTG Dubik (USA-Ret.).