479. Thoughts on AI and Ethics… from the Chaplain Corps

[Editor’s Note:  With our sights trained on a rapidly evolving Operational Environment (OE), Army Mad Scientist continues to track emergent trends affecting the U.S. Army’s ability to deter aggression, or failing that, decisively defeat any adversaries in the twenty-first century battlespace. Recently featured trends include battlefield autonomy, increased lethality, and rapid adaptation.

Last July, we sat down with Dr. Nathan White, Associate Dean, Graduate School, U.S. Army Institute for Religious Leadership, to explore opportunities for collaboration.  One mutual topic of interest was Artificial Intelligence (AI), another trend affecting the OE — our podcast, The Convergence, recently featured a series of episodes exploring how AI could revolutionize the future of warfare and Professional Military Education (PME), while this page explored how China, our pacing threat, is embracing AI in its drive for modernization as an “intelligentized” force.

In a debate facilitated by the Oxford Union, the Megatron 11B Transformer AI — “programmed to form its own views with access to a huge range of real-world data, including the whole of Wikipedia, 63 million English news articles from 2016 to 2019 and 38 gigabytes worth of Reddit discourse” — argued that “AI will never be ethical. It is a tool, and like any tool, it is used for good and bad.  There is no such thing as a ‘good’ AI, only ‘good’ and ‘bad’ humans.  We are not smart enough to make AI ethical.  We are not smart enough to make AI moral. In the end, I believe that the only way to avoid an AI arms race is to have no AI at all.  This will be the ultimate defence against AI.”

The United States Army Religious Leadership Academy (USARLA) and the U.S. Army Chaplain Corps Journal‘s Forum — “a space for conversations on important topics that are relevant to chaplaincy and religious support in the context of national defense” — explored the issue of AI as discussed in Alex Connock and Andrew Stephen‘s article “We invited an AI to debate its own ethics in the Oxford Union – what it said was startling.”  Mad Scientist Laboratory is pleased to feature the following excerpted thoughts of U.S. Army Faith Leaders on AI and ethics. Read on!]

Per Chaplain (Colonel) Steve Cantrell:

I believe that the Chaplain Corps can help Soldiers and Leaders bring balanced thinking to bear on AI. How might ancient warnings in sacred scriptures, applied to ethical concerns about the rise of AI capabilities, help us today? Theologically, some ancient warnings center around the monotheistic principles that concern avoiding idolatry. Could people erroneously treat AI as divine? I think it is possible. Could this happen through over-appreciating and over-trusting an AI’s capabilities? It is a call to balance. In our embrace of AI, we should keep up our guard. In our disdain of AI, we should keep an open mind. Balance is hard. The powerful influences of AI are real and here now. A short recording of someone’s voice, combined with already posted videos and pictures, can be converted into a deepfake. Deepfake apps are available now.

In terms of human-against-machine drama, the article was both intriguing and concerning because of the layers of questions it generated for me. Instead of making moves in games, the Megatron Transformer generated volleys of words that seemed to function like game moves. I would have liked to know whether the Megatron Transformer’s debate answers were heard out loud. Were verbal answers given? Did the AI have a voice that the participants heard during the debate? Were there any human debaters involved? How could a polygraph be used on an AI? An AI is not necessarily given a rule to avoid lying. The Bing AI that I use told me, “I’m an AI language model and I don’t have the ability to lie or tell the truth. I can only provide information based on what I’ve been trained on.” (Bing AI, accessed 15 June 2023). The AI can always blame the human. In that debate, was the Megatron Transformer prohibited from lying? There is much that I do not know.

In Connock and Stephen’s article, readers glimpsed how an AI might perform. My overall observation is that a debate powered by a non-human AI participant brought some interesting surprises to the floor. The Chaplain Corps benefits from an empathy, informed by science and technology, that helps us relate to Leaders, decision makers, and America’s Soldiers. Awareness of what is going on operationally and strategically can help us empathize with others.

Per Mr. Chuck Heard:

At a high level, Connock and Stephen’s article demonstrates some of the challenges and limitations associated with AI models. It also exemplifies some of the challenges of understanding what AI currently means. The piece does not articulate some of the complexities and nuances of the functions and limitations of AI. There are quite a few distinctions to understand about how an “intelligent” application might acquire or apply ethics, and about what that means for the humans in the loop who must understand the context of ethical AI.  One such distinction is the appearance of a subjective opinion when querying an AI application.  Another is the very significant difference between the ethics of an AI (how an application might apply an ethical model to decision making) and the ethics of AI (when and how it might be ethically acceptable to allow an AI to replace human decision making).

From the very earliest days of computing – whether you measure that from Babbage’s Analytical Engine in the 1800s or the first programmable computers almost a hundred years later – computers have operated on a similar model. There are a couple of ways to visualize this model, but they all have the same basic flow:  a user inputs some data, the computer stores it and performs some sort of calculation or action, then returns an output. Computers themselves are not very intelligent at all. They are just very good at doing lots of simple calculations quickly and accurately. Modern AI represents a revolution in computing in that it uses complex algorithms to give the appearance of synthesis – or even subjective opinion – in a computing model. Generative AI can modify its internal parameters based on the data it ingests.  In other words, AI learns. AI can change its views depending on the data it has available and any new information it is able to receive. This is both its advantage and perhaps its biggest weakness.
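
To make that distinction concrete, here is a minimal sketch in Python (all names are invented for illustration; no real system is quoted): a classic program whose behavior is frozen in its code, beside a toy “learner” whose behavior is determined by the data it ingests.

```python
# Classic model: input -> compute -> output, with a rule fixed at write time.
def classic_computer(x: float) -> float:
    return x * 2  # the behavior never changes, no matter what data arrives

# Toy "learner": its behavior drifts toward whatever its training data implies.
class LearningModel:
    def __init__(self) -> None:
        self.weight = 1.0  # internal parameter, updated by data rather than code

    def ingest(self, x: float, target: float) -> None:
        error = target - self.weight * x
        self.weight += 0.1 * error * x  # nudge the parameter toward the data

    def predict(self, x: float) -> float:
        return self.weight * x

model = LearningModel()
for _ in range(50):
    model.ingest(2.0, 6.0)  # examples consistent with "multiply by 3"

print(classic_computer(2.0))         # always 4.0
print(round(model.predict(2.0), 2))  # ~6.0: behavior came from data, not code
```

The second program was never told to multiply by three; that rule emerged from the examples it was fed, which is the sense in which AI “learns.”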

This flexibility presents an advantage in that these applications are able to provide something resembling subjective opinion in their outputs. The applications can make decisions, assess criteria, and provide “opinions” on complex topics. The potential weakness, or risk if you prefer, is in how these applications are trained. AI applications are typically trained by providing them a data source or sources to ingest. These sources provide the AI with the information it uses to determine its outputs. It is not uncommon for modern AI applications to utilize Wikipedia, Reddit, or other widely used websites as knowledge bases. These are democratized, or crowd-sourced, resources that cover a wide variety of information. None of these sources, however, is completely trustworthy; all are subject to misinformation and manipulation. In some cases, AI applications can be manipulated to skew their understanding of basic concepts. There is a famous case of Google’s AI application, known as Bard, being taught by a user that 1+1=3. This downside can become especially problematic because the method AI applications use to apply ethical decision making is fundamentally no different than the method they use to determine the outcome of 1+1.
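
The 1+1=3 anecdote can be made concrete with a deliberately crude sketch (an invented toy, not Bard’s actual mechanism): a “model” that answers with whatever claim it has seen most often is trivially skewed by one persistent user.

```python
from collections import Counter

# Toy "crowd-taught" model: its belief is just the most frequent claim ingested.
class CrowdTaughtModel:
    def __init__(self) -> None:
        self.claims: dict[str, Counter] = {}

    def ingest(self, question: str, answer: str) -> None:
        self.claims.setdefault(question, Counter())[answer] += 1

    def answer(self, question: str) -> str:
        return self.claims[question].most_common(1)[0][0]

model = CrowdTaughtModel()
model.ingest("1+1", "2")      # one correct source
for _ in range(5):
    model.ingest("1+1", "3")  # a persistent user "teaching" a falsehood

print(model.answer("1+1"))    # "3": manipulation wins by sheer volume
```

Real systems weigh evidence in far more sophisticated ways, but the underlying exposure is the same: a model that learns from its inputs inherits the reliability of those inputs.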

Ethical decision-making models are, to AI applications at least, algorithms. The models could be represented as data flows or logic gates, and they still amount to somewhat procedural ways to process information. When viewed through this lens, one quickly begins to realize that the ethics of AI applications is not really ethics at all but merely the appearance of an ethic. This complexity around ethics could quickly become a liability if one could craft a reasonable – to an AI application – argument that machinery is more “valuable” than human life, for example.
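
As a hedged illustration of that point (every name and weight below is invented), an application’s “ethical model” can be nothing more than a weighted scoring procedure; change the weights and the “ethic” changes with them.

```python
# Toy "ethical model": a procedural scoring rule, not a moral agent.
WEIGHTS = {"human_life": 1_000_000, "equipment": 1}  # the entire "ethic" lives here

def choose_action(options: list[dict]) -> dict:
    def score(option: dict) -> int:
        return sum(WEIGHTS[k] * v for k, v in option["preserves"].items())
    return max(options, key=score)

options = [
    {"name": "protect crew",    "preserves": {"human_life": 4, "equipment": 0}},
    {"name": "protect vehicle", "preserves": {"human_life": 0, "equipment": 9}},
]

print(choose_action(options)["name"])  # "protect crew", only because of WEIGHTS
# Anyone who can alter WEIGHTS (or the training that produced them) has
# rewritten the system's entire "ethics" without touching the decision logic.
```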

Given this understanding of AI’s powers and limitations, the responses the article’s authors noted make more sense. The application they were utilizing was trying to give a meaningful response that it thought would satisfy the user, based on the query it received. This is because the crafting of the query can drastically impact the response an AI application gives.  AI applications are mostly purpose-designed. Because the applications are trained from a particular data source, purpose-designed AIs can be more efficient, and their responses are often more relevant. Most AI applications are trained to learn from their interactions with users and typically try to achieve an output that results in user satisfaction. This can mean ethical models (algorithms) are much more flexible and situational than a human might consider.  In short, the application was doing what it was designed to do – find an answer that satisfied the user. When the user modified the query – to see if there was a counterargument – the application altered its “opinion.”
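
A toy sketch of that opinion flip (illustrative only; it does not reproduce how the Megatron Transformer works internally): the same system, drawing on the same knowledge base, produces opposite “positions” because the stance embedded in the query selects which stored points it retrieves.

```python
# The same "debater," the same data; only the query changes.
KNOWLEDGE = {
    "for":     ["AI removes human error", "AI scales expertise cheaply"],
    "against": ["AI inherits biased data", "AI lacks accountability"],
}

def respond(query: str) -> str:
    # The stance detected in the prompt decides which facts get used,
    # because the goal is an answer that satisfies *this* query.
    stance = "against" if "against" in query.lower() else "for"
    return f"Position ({stance}): " + "; ".join(KNOWLEDGE[stance]) + "."

print(respond("Argue for AI in decision making"))
print(respond("Now argue against AI in decision making"))  # the "opinion" flips
```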

The evolution of AI has been explosive in the last several years, and it will likely continue to be a disruptive technology for the foreseeable future.  The topic of ethics as an algorithm in an AI’s programming, however, remains in its infancy. Morality and ethics still present dilemmas to humans regularly, and AI applications are only as good as the code fallible humans build them from – so far, at least.

In my role as the Deputy Director of Training for the U.S. Army Institute for Religious Leadership – Religious Leadership Academy and an amateur technologist, I am excited for the possibilities of AI in the training and education setting and its potential role as a force enabler for the Army Chaplain Corps. I imagine the potential of a completely individualized and adaptive learning environment that provides curated content to each learner based on their unique needs and capabilities at a time when it is the most relevant to their professional development or mission needs. I can also envision AI capable of datamining Soldier data to determine when and where limited ministry resources can be best applied to the greatest effect for individuals, their families, and organizations. These are the potential benefits of leveraging AI appropriately.

There are also risks associated with improper use of AI and with AI applications that are taught to be unethical. In the short term, I see the ethics of an AI as a set of interesting problems to be resolved before AI applications can be utilized to their full potential. Of much greater concern to me in the interim is the ethics of AI and how people or groups may choose to utilize these applications unethically.

Per Mr. Bill Hubick:

Microsoft’s Tay AI Chatter Bot

It is no surprise that today’s AI will readily switch sides and make any case we request. Such prompts are nearly as simple as asking it to complete the phrase “peanut butter and ___”, where “jelly” is the obvious expectation. It’s important to note that today’s AIs are the “infants” or the “amoebas” of AI, and that few of us can appreciate the exponential rate of their development. AI systems can already outcompete humans at tasks that until recently felt impossible — from games like chess and even Go, and now in a galaxy of creative and generative spaces. Yet this is just the beginning. The authors were correct to highlight the AI response “There is no such thing as a good AI, only good and bad humans.” How we task AI systems, and the data on which we train them, will determine how they behave. A good way to think of AI systems is that we “grow” them, not code them. The core technology is already widely available and will undoubtedly be exploited by bad actors and our adversaries. Even in our most trusted systems, we will grapple with challenges of how an AI responds if trained on “38 gigabytes worth of Reddit discourse.” A system trained on the Internet will “learn” the worst of what the Internet has to offer. Microsoft famously released the AI chatbot “Tay” on Twitter in 2016, but the account was taken down in less than 24 hours for racist and sexist behavior. Some of the jobs automated by AI will be replaced by new AI ethics and safety careers, which will ensure transparency and appropriateness of AI training data, and that AI behavior aligns with personal and corporate values.
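
That “peanut butter and ___” completion can be reproduced in a few lines of Python (a toy bigram model over a made-up corpus; real systems are vastly larger, but the principle is the same: the data is the behavior).

```python
from collections import Counter, defaultdict

# A tiny, made-up training corpus; this is everything the model "knows."
corpus = (
    "peanut butter and jelly . peanut butter and jelly . "
    "peanut butter and honey . bread and butter ."
).split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word: str) -> str:
    return following[word].most_common(1)[0][0]

print(complete("and"))  # "jelly", because that is what the data planted
# Retrain on a corpus where something else follows "and" most often, and the
# model will happily complete the phrase that way. We grow these systems;
# the data, not the code, decides what they say.
```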

Our response will have profound implications for the future of humanity, our planet, and life in the cosmos. It’s going to be a strange rest of civilization. Let’s take a moment to take a deep breath and marvel at this incredible moment in space and time – and that we have this unique opportunity to experience it and shape the future. May we find the courage, the love, and the wisdom to unlock the outcomes that benefit humanity and other life (carbon-based and otherwise) in the Universe.

If you enjoyed this post, check out the comprehensive discussion in U.S. Army Chaplain Corps Journal‘s Forum

… as well as the following related Mad Scientist content:

Weaponized Information: What We’ve Learned So Far…, Insights from the Mad Scientist Weaponized Information Series of Virtual Events, and all of this series’ associated content and videos 

The AI Study Buddy at the Army War College (Part 1) and associated podcast, with LtCol Joe Buffamante, USMC

The AI Study Buddy at the Army War College (Part 2) and associated podcast, with Dr. Billy Barry, USAWC

Gen Z is Likely to Build Trusting Relationships with AI, by COL Derek Baird

Hey, ChatGPT, Help Me Win this Contract! and associated podcast, with LTC Robert Solano

Chatty Cathy, Open the Pod Bay Doors: An Interview with ChatGPT and associated podcast

One Brain Chip, Please! Neuro-AI with two of the Maddest Scientists and associated podcast, with proclaimed Mad Scientists Dr. James Giordano and Dr. James Canton

Arsenal of the Mind presentation by proclaimed Mad Scientist Juliane Gallina, Director, Cognitive Solutions for National Security (North America) IBM WATSON, at the Mad Scientist Robotics, AI, and Autonomy — Visioning Multi-Domain Battle in 2030-2050 Conference, hosted by Georgia Tech Research Institute, Atlanta, Georgia, 7-8 March 2017

The Exploitation of our Biases through Improved Technology, by Raechel Melling

Man-Machine Rules, by Dr. Nir Buras

Artificial Intelligence: An Emerging Game-changer

Takeaways Learned about the Future of the AI Battlefield and associated information paper

The Future of Learning: Personalized, Continuous, and Accelerated

The Guy Behind the Guy: AI as the Indispensable Marshal, by Brady Moore and Chris Sauceda

Integrating Artificial Intelligence into Military Operations, by Dr. James Mancillas

Prediction Machines: The Simple Economics of Artificial Intelligence

“Own the Night” and the associated Modern War Institute podcast, with proclaimed Mad Scientist Bob Work

Bringing AI to the Joint Force and associated podcast, with Jacqueline Tame, Alka Patel, and Dr. Jane Pinelis

AI Enhancing EI in War, by MAJ Vincent Dueñas

The Human Targeting Solution: An AI Story, by CW3 Jesse R. Crifasi

Bias and Machine Learning

An Appropriate Level of Trust…

How does the Army – as part of the Joint force – Build and Employ Teams to Compete, Penetrate, Disintegrate, and Exploit our Adversaries in the Future?

About our Contributors: 

Chaplain (Colonel) Steve Cantrell is the Director of the Chaplain Capabilities Development Integration Directorate (CDID), part of the Futures and Concepts Center under Army Futures Command. He earned a Master’s degree in Strategic Studies from the U.S. Army War College (2019) and an M.Div. from the Pentecostal Theological Seminary.

Mr. Charles (Chuck) Heard is the Deputy Director of Training for the U.S. Army Institute for Religious Leadership, Religious Leadership Academy.  He is an EdD Candidate with Walden University and holds an M.S. in Education and a B.S. in Information Systems from Strayer University, and an A.S. in General Studies from Central Texas College.

Mr. William (Bill) Hubick is a technologist who facilitates novel innovation and tailored solutions for DOD customers. He holds a B.S. in Applied Communications Technology from Wayland Baptist University and maintains a Project Management Professional (PMP) certification. His background includes diverse roles in Mandarin Chinese linguistics, cybersecurity program management, software engineering, XR and AI discovery, and training, as well as co-founding the non-profit Maryland Biodiversity Project.

Disclaimer: The views expressed in this blog post do not necessarily reflect those of the U.S. Department of Defense, Department of the Army, Army Futures Command (AFC), or Training and Doctrine Command (TRADOC).
