[Editor’s Note: Mad Scientist is pleased to announce our latest episode of “The Convergence” podcast, featuring three panelists from the Joint Artificial Intelligence Center (JAIC) — Ms. Jacqueline Tame, Ms. Alka Patel, and Dr. Jane Pinelis — as they discuss the ethics of Artificial Intelligence (AI) and Machine Learning (ML), integrating these emerging technologies into the Joint Force, and the future of talent management. Please note that this podcast and several of the embedded links below are best accessed via a non-DoD network due to network priorities for teleworking — Enjoy!]
The Joint Artificial Intelligence Center (JAIC) is the Department of Defense’s (DoD) Artificial Intelligence (AI) Center of Excellence, providing a critical mass of expertise to help the Department harness the game-changing power of AI. To operationally prepare the Department for AI, the JAIC integrates technology development with the requisite policies, knowledge, processes, and relationships to ensure long-term success and scalability.
The mission of the JAIC is to transform the DoD by accelerating the delivery and adoption of AI to achieve mission impact at scale. The goal is to use AI to solve large and complex problem sets that span multiple services, then ensure the Services and Components have real-time access to ever-improving libraries of data sets and tools.
In this episode of “The Convergence” we discuss how the JAIC is bringing AI to the Joint Force (and the associated challenges!) with the following panel members:
- Ms. Jacqueline Tame, Acting Deputy Director, Chief Performance Officer
- Ms. Alka Patel, Head of AI Ethics Policy
- Dr. Jane Pinelis, Chief, Testing and Evaluation, Artificial Intelligence/Machine Learning (AI/ML).
The following bullet points highlight the key insights from our interview:
- We have not seen a reorganization of the DoD since the Goldwater–Nichols Act in 1986. AI offers a catalyst for what is next.
- The DoD has a temporal split in how to integrate AI. AI is ready now to tackle Phase I objectives that alleviate redundant and repetitive work, but legacy processes and cultural barriers remain obstacles to starting this work.
- Phase II objectives of integrating AI on the battlefield present additional, measurable obstacles. Becoming AI-ready requires greater open-mindedness at the individual level about what is possible and a willingness to accept risk, along with improved data readiness, modernized information technology, the requisite talent, and the necessary policies.
- Phase II represents AI integration at a level that could redefine what it means to be Joint: moving from doctrinal definitions and enormous effort to operate jointly to the technical ability to do so at speed and scale.
- Implementing AI at scale requires a diverse workforce and a change in culture.
- We need to broaden our aperture and consider adding psychologists, cognitive behavioral scientists, education and learning experts, and more analytical thinkers generally to our AI workforce.
- Changing our culture and messaging matters more than compensation when attracting this type of workforce. Retention requires a culture that encourages professional development and work on side projects (akin to Google’s 20% time).
- Integrating AI on the battlefield will require some level of run-time monitoring to identify emergent negative behaviors. This idea is not new: humans are the biggest autonomous systems on the battlefield, and they sometimes act unethically.
- What keeps these experts up at night?
1. The failure of the DoD to recognize the potential of distributed ledger technology, which could solve many current challenges.
2. Adversarial AI: it goes beyond an adversary turning “a panda into a toaster.” Visually tricking AI may be a popular discussion point, but it is a niche problem that diverts valuable R&D resources from easier and more problematic attack vectors, such as patching and data poisoning. We need to dedicate our resources to building resiliency into our AI-enabled systems, or risk being vulnerable or too late.
3. How do we ensure that the tools we develop are used by moral agents? Having the right humans operating these systems will become even more important.
Stay tuned to the Mad Scientist Laboratory for our next episode of “The Convergence: Reading and Leading in the Future,” featuring LTC Joe Byerly discussing reading and its implications on leadership and forecasting, the future of command selection, and cultivating effective communicators and thinkers in the future force on 17 December 2020!
If you enjoyed this post, check out the following JAIC resources on-line:
AI in Defense: DoD’s Artificial Intelligence Blog
Ethical Principles for Artificial Intelligence
… read these related Mad Scientist Laboratory blog posts and listen to the associated “The Convergence” podcasts:
The Convergence: The Language of AI with Michael Kanaan and associated podcast
The Convergence: AI Across the Enterprise with Rob Albritton and associated podcast
The Convergence: The Future of Talent and Soldiers with MAJ Delaney Brown, CPT Jay Long, and 1LT Richard Kuzma and associated podcast
… and check out the following:
Artificial Intelligence: An Emerging Game-changer
Integrating Artificial Intelligence into Military Operations, by Dr. James Mancillas
“Own the Night” and the associated Modern War Institute podcast with Mr. Bob Work