Mad Scientist Laboratory is pleased to announce that Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies this week (Wednesday and Thursday, 8-9 August 2018) in Washington, DC.
Future learning techniques and technologies are critical to the Army’s operations in the 21st century against adversaries in rapidly evolving battlespaces. The ability to effectively respond to a changing Operational Environment (OE) with fleeting windows of opportunity is paramount, and Leaders must act quickly to adjust to different OEs and more advanced and lethal technologies. Learning technologies must enable Soldiers to learn, think, and adapt using innovative synthetic environments to accelerate learning and attain expertise more quickly. Looking to 2050, learning enablers will become far more mobile and on-demand.
Looking at Learning in 2050, topics of interest include, but are not limited to: Virtual, Augmented, and Mixed Realities (VR/AR/MR); interactive, autonomous, accelerated, and augmented learning technologies; gamification; skills needed for Soldiers and Leaders in 2050; synthetic training environments; virtual mentors; and intelligent artificial tutors. Advanced learning capabilities present the opportunity for Soldiers and Leaders to prepare for operations and operate in multiple domains while mitigating current cognitive load limitations.
Plan to join us virtually at the conference as leading scientists, innovators, and scholars from academia, industry, and government gather to discuss:
1) How will emerging technologies improve learning or augment intelligence in professional military education, at home station, while deployed, and on the battlefield?
2) How can the Army accelerate learning to improve Soldier and unit agility in rapidly changing OEs?
3) What new skills will Soldiers and Leaders require to fight and win in 2050?
– Read our Learning in 2050 Call for Ideas finalists’ submissions here, graciously hosted by our colleagues at Small Wars Journal.
– Starting Tuesday, 7 August 2018, see the conference agenda’s list of presentations and the associated world-class speakers’ biographies here.
Join us at the conference on-line here via live-streaming audio and video, beginning at 0840 EDT on Wednesday, 08 Aug 2018; submit your questions to each of the presenters via the moderated interactive chat room; and tag your comments @TRADOC on Twitter with #Learningin2050.
[Editor’s Note: The U.S. Army Training and Doctrine Command (TRADOC) G-2 is co-hosting the Mad Scientist Learning in 2050 Conference with Georgetown University’s Center for Security Studies on 8-9 August 2018 in Washington, DC. In advance of this conference, Mad Scientist Laboratory is pleased to present today’s post addressing what is necessary to truly transform Learning in 2050 by returning guest blogger Mr. Nick Marsella. Read Mr. Marsella’s previous two posts addressing Futures Work at Part I and Part II]
Only a handful of years ago, a conference on the topic of learning in 2050 would spur discussions on needed changes in the way we formally educate and train people to live successful lives and be productive citizens.[I] Advocates in K-12 would probably argue for increasing investment in schools, better technology, and increased STEM education. Higher educators would raise many of the same concerns, pointing to the value of “the academy” and its universities as integral to the nation’s economic, security, and social well-being by preparing the nation’s future leaders, innovators, and scientists.
Yet, times have changed. “Learning in 2050” could easily address how education and training must meet the immediate learning needs of the individual and support “lifelong learning” in a rapidly changing and competitive world.[II] The conference could also address how new discoveries in learning and the cognitive sciences will inform the education and training fields, and potentially enhance individual abilities to learn and think.[III] “Learning in 2050” could also focus on how organizational learning will be even more important than today – spelling the difference between bankruptcy and irrelevancy – or, for military forces, victory or defeat. We must also address how to teach people to learn and organize themselves for learning.[IV]
Lastly, a “Learning in 2050” conference could also focus on machine learning and how artificial intelligence will transform not only the workplace, but have a major impact on national security.[V] Aside from understanding the potential and limitations of this transformative technology, we must increasingly train and educate people on how to use it to their advantage and understand its limitations for effective “human – machine teaming.” We must also provide opportunities to use newly fielded technologies and for individuals to learn when and how to trust them.[VI]
All of these areas would provide rich discussions and perhaps new insights. But just as LTG (ret.) H.R. McMaster warned us about thinking about the challenges of future warfare, we must first acknowledge the continuities in this broad topic of “Learning in 2050” and its implications for the U.S. Army.[VII] Until the Army is replaced by robots, or knowledge and skills are uploaded directly into the brain as shown in “The Matrix,” learning involves humans – the Army’s Soldiers and its civilian workforce – and the learning process [not discounting organizational or machine learning].
While much may change in the way the individual will learn, we must recognize that the focus of “Learning in 2050” is on the learner and the systems, programs/schools, or technologies adopted in the future must support the learner. As Herbert Simon, one of the founders of cognitive science and a Nobel laureate noted: “Learning results from what the student does and thinks and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.”[VIII] To the Army’s credit, the U.S. Army Learning Concept for Training and Education 2020-2040 vision supports this approach by immersing “Soldiers and Army civilians in a progressive, continuous, learner-centric, competency-based learning environment,” but the danger is we will be captured by technology, procedures, and discussions about the utility and need for “brick and mortar schools.”[IX]
Learning results from what the student does and thinks and only from what the student does and thinks.
Learning is a process that involves changing knowledge, belief, behavior, and attitudes and is entirely dependent on the learner as he/she interprets and responds to the learning experience – in and out of the classroom.[X] Our ideas, concepts, or recommendations to improve the future of learning in 2050 must either: improve student learning outcomes, improve student learning efficiency by accelerating learning, or improve the student’s motivation and engagement to learn.
“Learning in 2050” must identify external environmental factors which will affect what the student may need to learn to respond to the future, and also recognize that the generation of 2050 will be different from today’s student in values, beliefs, attitudes, and acceptance of technology.[XI] Changes in the learning system must be ethical, affordable, and feasible. To support effective student learning, learning outcomes must be clearly defined – whether a student is participating in a yearlong professional education program or a five-day field training exercise – and must be understood by the learner.[XII]
We must think big. For example, Professor of Cognition and Education at Harvard’s Graduate School of Education, Howard Gardner postulated that to be successful in the 21st Century requires the development of the “disciplined mind, the synthesizing mind, the creative mind, the respectful mind, and the ethical mind.”[XIII]
Approaches, processes, and organization, along with the use of technology and other cognitive science tools, must focus on the learning process. Illustrated below is the typical officer career timeline with formal educational opportunities sprinkled throughout the years.[XIV] While some form of formal education in “brick and mortar” schools will continue, one wonders if we will turn this model on its head – with more upfront education; shorter focused professional education; more blended programs combining resident/non-resident instruction; and continual access to experts, courses, and knowledge selected by the individual for “on demand” learning. Today, we often use education as a reward for performance (i.e., resident PME); in the future, education must be a “right of the Profession,” equally provided to all (to include Army civilians) – necessary for performance as a member of the profession of arms.
The role of the teacher will change. Instructors will become “learning coaches” who help the learner identify gaps and needs in meaningful and dynamic individual learning plans. Like the Army’s Master Fitness Trainer who advises and monitors a unit’s physical readiness, we must create in our units “Master Learning Coaches,” not simply training specialists who manage the schedule and records. One can imagine technology evolving to do some of this as the Alexas and Siris of today become the AI tutors and mentors of the future. We must also remember that any system or process for learning in 2050 must fit the needs of multiple communities: Active Army, Army National Guard, and Army Reserve forces, as well as Army civilians.
Just as the delivery of instruction will change, the assessment of learning will change as well. Simulations and gaming should aim to provide an “Ender’s Game” experience, where reality and simulation are indistinguishable. Training systems should enable individuals to practice repeatedly, and as Vince Lombardi noted – “Practice does not make perfect. Perfect practice makes perfect.” Experiential learning will reinforce classroom or on-line instruction, or short intensive courses/seminars, through the linkage of “classroom seat time” and “field time” at the Combat Training Centers, Warfighter, or other exercises or experiences.
Tell me and I forget; teach me and I may remember; involve me and I learn. – Benjamin Franklin[XV]
Of course, much will have to change in terms of policies and the way we think about education, training, and learning. If one moves back in time the same number of years that we are looking to the future – it is the year 1984. How much has changed since then?
In some ways technology has transformed the learning process – e.g., typewriters to laptops; card catalogues to instant on-line access to the world’s literature from anywhere; and classes at brick and mortar schools to Massive Open Online Courses (MOOCs) and blended and on-line learning with Blackboard. Yet, as Mark Twain reportedly noted – “if history doesn’t repeat itself – it rhymes” – and some things look the same as they did in 1984, with lectures and passive learning in large lecture halls, just as PowerPoint lectures continue today for some passively undergoing PME.
If “Learning in 2050” is to be truly transformative – we must think differently. We must move beyond the industrial age approach of mass education with its caste systems and allocation of seats. To be successful in the future, we must recognize that our efforts must center on the learner to provide immediate access to knowledge to learn in time to be of value.
Nick Marsella is a retired Army Colonel and is currently a Department of the Army civilian serving as the Devil’s Advocate/Red Team for Training and Doctrine Command. ___________________________________________________________________
[I] While the terms “education” and “training” are often used interchangeably, I will use the oft-quoted rule: training is about skills in order to do a job or perform a task, while education is broader – instilling general competencies and preparing one to deal with the unexpected.
[II] The noted futurist Alvin Toffler is often quoted noting: “The illiterate of the 21st Century are not those who cannot read and write but those who cannot learn, unlearn, and relearn.”
[III] Sheftick, G. (2018, May 18). Army researchers look to neurostimulation to enhance, accelerate Soldier’s abilities. Retrieved from: https://www.army.mil/article/206197/army_researchers_looking_to_neurostimulation_to_enhance_accelerate_soldiers_abilities
[IV] This will become increasingly important as the useful shelf life of knowledge shortens. See Zao-Sanders, M. (2017). A 2x2 matrix to help you prioritize the skills to learn right now. Harvard Business Review. Retrieved from: https://hbr.org/2017/09/a-2x2-matrix-to-help-you-prioritize-the-skills-to-learn-right-now — so much to learn, so little time.
[V] Much has been written on AI and its implications. One of the most recent and interesting papers was recently released by the Center for New American Security in June 2018. See: Scharre, P. & Horowitz, M.C. (2018). Artificial Intelligence: What every policymaker needs to know. Retrieved from: https://www.cnas.org/publications/reports/artificial-intelligence-what-every-policymaker-needs-to-know
For those wanting further details and potential insights see: Executive Office of the President, National Science and Technology Council, Committee on Technology Report, Preparing for the Future of Artificial Intelligence, October 2016.
[VI] Based on my anecdotal experiences, complicated systems, such as those found in command and control, have been fielded to units without sufficient training. Even when fielded with training, unless in combat, proficiency using the systems quickly lapses. See: Mission Command Digital Master Gunner, May 17, 2016, retrieved from https://www.army.mil/standto/archive_2016-05-17. See Freedberg, S. Jr. Artificial Stupidity: Fumbling the Handoff from AI to Human Control. Breaking Defense. Retrieved from: https://breakingdefense.com/2017/06/artificial-stupidity-fumbling-the-handoff/
[VII] McMaster, H.R. (LTG) (2015). Continuity and Change: The Army Operating Concept and Clear Thinking about Future War. Military Review.
[VIII] Ambrose, S.A., Bridges, M.W., DiPietro, M., Lovett, M.C. & Norman, M. K. (2010). How learning works: 7 research-based principles for smart teaching. San Francisco, CA: Jossey-Bass, p. 1.
[IX] U.S. Army Training and Doctrine Command. TRADOC Pamphlet 525-8-2. The U.S. Army Learning Concept for Training and Education 2020-2040.
[XI] For example, should machine language be learned as a foreign language in lieu of a traditional foreign language (e.g., Spanish) – given the development of automated machine language translators (AKA = the Universal Translator)?
[XII] The point here is we must clearly understand what we want the learner to learn, adequately define it, and ensure the learner knows what the outcomes are. For example, we continually espouse that we want leaders to be critical thinkers, but I challenge the reader to find the definitive definition and expected attributes of a critical thinker, given that ADRP 6-22, Army Leadership; FM 6-22, Army Leadership; and ADRP 5 and 6 each describe it differently. At a recent higher education conference of leaders, administrators, and selected faculty, one member succinctly highlighted the importance of students’ understanding expected learning outcomes this way: “Teaching students without providing them with learning outcomes is like giving them a 500-piece puzzle without an image of what they’re assembling.”
[XIII] Gardner, H. (2008). Five Minds for the Future. Boston, MA: Harvard Business Press. For application of Gardner’s premise see Marsella, N.R. (2017). Reframing the Human Dimension: Gardner’s “Five Minds for the Future.” Journal of Military Learning. Retrieved from: https://www.armyupress.army.mil/Journals/Journal-of-Military-Learning/Journal-of-Military-Learning-Archives/April-2017-Edition/Reframing-the-Human-Dimension/
[XIV] Officer education may differ due to a variety of factors, but the normal progression for Professional Military Education includes: Basic Officer Leader Course (BOLC B, to include ROTC/USMA/OCS, which is BOLC A); Captains Career Course; Intermediate Level Education (ILE); and Senior Service College, as well as specialty training (e.g., language school), graduate school, and Joint schools. Extracted from the previous edition of DA Pam 600-3, Commissioned Officer Professional Development and Career Management, December 2014, p. 27, which is now obsolete; the graphic is provided as an example. For current policy, see DA PAM 600-3, dated 26 June 2017.
[XV] See https://blogs.darden.virginia.edu/brunerblog/
[Editor’s Note: Mad Scientist Laboratory is pleased to publish our latest iteration of “The Queue” – a monthly post listing the most compelling articles, books, podcasts, videos, and/or movies that the U.S. Army’s Training and Doctrine Command (TRADOC) Mad Scientist Initiative has come across during the previous month. In this anthology, we address how each of these works either informs or challenges our understanding of the Future Operational Environment. We hope that you will add “The Queue” to your essential reading, listening, or watching each month!]
There are no facts about the future and the future is not a linear extrapolation from the present. We inherently understand this about the future, but Leaders oftentimes seek to quantify the unquantifiable. Eliot Peper opens his Harvard Business Review article with a story about one of the biggest urban problems in New York City at the end of the 19th century – it stank! Horses were producing 45,000 tons of manure a month. The urban planners of 1898 convened a conference to address this issue, but the experts failed to find a solution. More importantly, they could not envision a future 14 years hence, when cars would outnumber horses. The urban problem of the future was not horse manure, but motor vehicle-generated pollution and road infrastructure. All quantifiable data available to the 1898 urban planners only extrapolated to more humans, horses, and manure. It is likely that any expert sharing an assumption about cars over horses would have been laughed out of the conference hall. Flash forward a century and the number one observation from the 9/11 Commission was that the Leaders and experts responsible for preventing such an attack lacked imagination. Story telling and the science fiction genre allow Leaders to imagine beyond the numbers and broaden the assumptions needed to envision possible futures. Story telling also helps Leaders and futurists to envision the human context around emerging technologies. For more on Science Fiction and futuring, watch Dr. David Brin’s Mad Scientist presentation.
2. “Automated Valor,” by August Cole, Proceedings Magazine, U.S. Naval Institute, May 2018.
Fellow Mad Scientist August Cole’s short story, commissioned by the British Army Concepts Branch, explores the future of urban warfare from a refreshingly new, non-US perspective. Sparking debate about force development and military operations in the 2030s, this story portrays a vivid combat scenario in a world where autonomous weapons have proliferated. Mr. Cole’s story embraces a number of Future Operational Environment themes familiar to Mad Scientists, including combat leadership and team identity (Soldier and machine), human trust of AI decision-making, virtual and earned citizenship, deep fakes, small unit tactical operations, and multi-national Joint operations against an expansionist Chinese super power. Visualizing the future fight from this British Commonwealth perspective provides a new twist in story telling, describing what it will mean to be a Soldier on the battlefield in 2039, depending on machine teammates in the close fight.
3. Altered Carbon, Netflix series, 2018 (based upon a 2002 novel by Richard K. Morgan) — submitted by Mad Scientist Pat Filbert.
Set 300+ years in the future on Earth, the show’s main character – or more to the point, his “cortical stack” (alien technology, reverse-engineered for human use, that records the sum total of an individual’s consciousness) – has been “imprisoned” for 250 years and is “released” back into the general population to solve a mysterious murder. At this time, AI exists in and fully interacts with both the physical and cyber domains. The show incorporates a number of aspects related to trust in AI and technology. Such aspects enable a future where combat is fought by “stored soldiers” on distant worlds using advanced technological capabilities. Some humans have accepted AI projections as near-peers, so the trust factor comes up repeatedly between the humans who accept and embrace this technology and those who remain skeptical, like Will Smith’s character in I, Robot. The implications of AI becoming sentient and capable of violence are at the core of the morality argument against AI technology. The popular acceptance of AI possessing human-like qualities would definitely be a “leap forward” in more than just technology. For additional insights on this topic, watch Mad Scientist Linda MacDonald Glenn’s presentation.
4. “SOCOM’s Top 10 Technologies“ Podcast, National Defense Magazine, National Defense Industry Association, 3 May 2018 — submitted by Marie Murphy.
This podcast provides a summary of some of the primary emerging technologies that the United States Special Operations Command (SOCOM) and the Department of Defense are developing for military application. Highlighted in the list are exoskeletons and commercial drone use in the immediate future, and quantum computing and China’s rise to dominate the microelectronics market by 2030 in the deep future. Stew Magnuson, Editor-in-Chief of National Defense Magazine, states that technology is nearing the end of the applicability of Moore’s Law. Because of this, private, profit-driven industry becomes a major driver of new scientific and technical advancements, and it will certainly be responsible for future cutting-edge technologies. Given that many innovations the military uses or seeks to apply now stem from private sector innovation, what happens when Moore’s Law expires and technology moves too quickly for military research and adaptation?
Researchers analyzed the decision-making habits of gamers who play League of Legends in order to identify and build mental models. Identifying these models will help understand how they are built and, more importantly, how they change over time as players gain proficiency from novice to expert. The researchers analyzed survey responses based on the game and compared the differences between novices, journeymen, and experts. There were clear differences in the way the mental models were organized based on experience, with experts making abstract connections and even showing signs of subnetworks. The researchers plan to use this information for better game design and the development / tailoring of training programs. The Army could leverage the potential of these mental models with neural feedback to accelerate Soldier learning, breaking the tyranny of the 10,000-hour rule of expertise. That said, this information could also prove to be a weapon in the hands of an adversary. What happens to game theory if the adversary knows how your mind works, what your proclivities are, and what courses of action you are likely to favor? What happens if the adversary can identify, based on your actions, who in your unit is a novice and who is an expert, and targets them accordingly (i.e., focusing on defeating the experts first, while leaving the less experienced)? Accessing this information could provide an adversary with an advantage that may prove the difference between success and defeat. Learn more about cognitive enhancement in fellow Mad Scientist Dr. Amy Kruse’s podcast, Human 2.0, hosted by our colleagues at Modern War Institute.
Researchers at the University of California, Berkeley, have exploited mainstream commercial Artificial Intelligence (AI) assistants (e.g., Siri, Alexa, Google Assistant) in order to secretly send commands. The researchers were able to send the devices secret messages, embedded in an existing audio track and undetectable to the human ear. The track could be played and the AI could be told to do any number of things, from transferring money, to adding an item to a shopping list, to opening a malicious website. The adversarial applications of this are immense and abundant. A nefarious actor could surreptitiously activate a device, mute it, and then send and receive information stored on it, or even use it to unlock doors, start cars, or call other devices. As the Army becomes more reliant on AI and automation, its vulnerability to Personalized Warfare attacks via these axes will increase. Will the Army ever be able to use voice activated devices that can be so easily compromised by an undetectable source?
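The core intuition behind such hidden commands can be illustrated with a toy sketch. This is not the Berkeley team’s method, which optimizes perturbations against a specific speech-recognition model; every signal parameter below (frequencies, amplitudes, sample rate) is invented for illustration. The point is simply that a perturbation can ride inside a much louder carrier while changing each audio sample by well under one percent:

```python
# Toy illustration (NOT the Berkeley attack): hiding a low-amplitude
# "command" signal inside a louder audio carrier. The real attack
# optimizes perturbations against a speech-recognition model; here we
# only show how small a hidden perturbation can be relative to the carrier.
import math

SAMPLE_RATE = 16_000          # samples per second (assumed)
DURATION = 0.5                # seconds of audio

def tone(freq_hz, amplitude, n_samples, rate):
    """Generate a sine tone as a list of float samples."""
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / rate)
            for i in range(n_samples)]

n = int(SAMPLE_RATE * DURATION)
carrier = tone(440.0, 0.8, n, SAMPLE_RATE)       # stand-in for music/speech
command = tone(3_800.0, 0.005, n, SAMPLE_RATE)   # hidden perturbation

# The broadcast track is simply the sum of the two signals.
mixed = [c + p for c, p in zip(carrier, command)]

# Each sample moves by at most the perturbation's amplitude -- roughly
# -44 dB relative to the carrier's peak, far below what a casual
# listener would notice, yet still present for a machine to decode.
max_delta = max(abs(m - c) for m, c in zip(mixed, carrier))
ratio_db = 20 * math.log10(0.005 / 0.8)
print(max_delta, ratio_db)
```

In the published attacks the perturbation is not a simple tone but a waveform crafted so the target model transcribes it as a command; the scale relationship sketched here is why humans in the room hear nothing unusual.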
At a recent workshop, the Mad Scientist community was informed of the constraints associated with neural embedded man-machine interfaces – namely, conventional electrode materials will degrade relatively quickly via corrosion brought on by the human brain’s inflammatory immune system response. This challenge may have been overcome by researchers at Carnegie Mellon University, funded by the Defense Advanced Research Projects Agency (DARPA), who have developed a “flexible, squishy silicon-based hydrogel that sticks to neural tissue, bringing non-invasive electrodes to the brain’s surface.” As a tissue analog, this hydrogel is less likely to trigger the brain’s natural defensive response, thus potentially revolutionizing the integration of prosthetics and medical devices with patients’ brains. As with most disruptive technologies, preliminary niche applications (in this case, medical) may jump initially to the edge, then possibly ripple throughout society. The advent of hydrogel-based electrodes has the potential to accelerate the current transhumanism movement and facilitate direct brain-machine interfaces, as envisioned in Mr. Howard Simkin’s Sine Pari post. Projected forward, the possibility of an Internet of Everything and Everyone may prove to be a two-edged sword, facilitating both the direct upload of knowledge on demand, and the direct hacking of individuals.
If you read, watch, or listen to something this month that you think has the potential to inform or challenge our understanding of the Future Operational Environment, please forward it (along with a brief description of why its potential ramifications are noteworthy to the greater Mad Scientist Community of Action) to our attention at: email@example.com — we may select it for inclusion in our next edition of “The Queue”!
The Mad Scientist team participates in many thought exercises, tabletops, and wargames associated with how we will live, work, and fight in the future. A consistent theme in these events is the idea that a major barrier to the integration of robotic systems into Army formations is a lack of trust between humans and machines. This assumption rings true as we hear the media and opinion polls describe how society doesn’t trust some disruptive technologies, like driverless cars or the robots coming for our jobs.
In his recent book, Army of None, Paul Scharre describes an event that nearly led to a nuclear confrontation between the Soviet Union and the United States. On September 26, 1983, LTC Stanislav Petrov, a Soviet Officer serving in a bunker outside Moscow, was alerted to a U.S. missile launch by a recently deployed space-based early warning system. The Soviet Officer trusted his “gut” – or experientially informed intuition – that this was a false alarm. His gut was right, and the world was saved from an inadvertent nuclear exchange because this officer did not over trust the system. But is this the rule or an exception to how humans interact with technology?
The subject of trust between Soldiers, Soldiers and Leaders, and the Army and society is central to the idea of the Army as a profession. At the most tactical level, trust is seen as essential to combat readiness as Soldiers must trust each other in dangerous situations. Humans naturally learn to trust their peers and subordinates once they have worked with them for a period of time. You learn what someone’s strengths and weaknesses are, what they can handle, and under what conditions they will struggle. This human dynamic does not translate to human-machine interaction and the tendency to anthropomorphize machines could be a huge barrier.
We recommend that the Army explore the possibility that Soldiers and Leaders could over trust AI and robotic systems. Over trust of these systems could blunt human expertise, judgement, and intuition thought to be critical to winning in complex operational environments. Also, over trust might lead to additional adversarial vulnerabilities such as deception and spoofing.
In 2016, a research team at the Georgia Institute of Technology revealed the results of a study entitled “Overtrust of Robots in Emergency Evacuation Scenarios.” The research team put 42 test participants into a fire emergency with a robot responsible for escorting them to an emergency exit. As the robot passed obvious exits and got lost, 37 participants continued to follow the robot, and an additional 2 stood with the robot and didn’t move towards either exit. The study’s takeaway was that roboticists must think about programs that will help humans establish an “appropriate level of trust” with robot teammates.
In Future Crimes, Marc Goodman writes of the idea of “In Screen We Trust” and the vulnerabilities this trust builds into our interaction with our automation. His example of the cyber-attack against the Iranian uranium enrichment centrifuges highlights the vulnerability of experts believing or trusting their screens against mounting evidence that something else might be contributing to the failure of centrifuges. These experts over trusted their technology, or just did not have an “appropriate level of trust.” What does this have to do with Soldiers on the future battlefield? Well, increasingly we depend on our screens and, in the future, our heads-up displays to translate the world around us. This translation will only become more demanding on the future battlefield with war at machine speed.
So what should our assumptions be about trust and our robotic teammates on the future battlefield?
1) Soldiers and Leaders will react differently to technology integration.
2) Capability developers must account for trust building factors in physical design, natural language processing, and voice communication.
3) Intuition and judgement remain a critical component of human-machine teaming and operating on the future battlefield. Speed becomes a major challenge as humans become the weak link.
4) Building an “appropriate level of trust” will need to be part of Leader Development and training. Mere expertise in a field does not prevent over trust when interacting with our robotic teammates.
5) Lastly, lack of trust is not a barrier to AI and robotic integration on the future battlefield. These capabilities will exist in our formations as well as those of our adversaries. The formation that develops the best concepts for effective human-machine teaming, with trust being a major component, will have the advantage.
[Editor’s Note: Mad Scientist Laboratory is pleased to present the following guest blog post by MAJ Chris Telley, U.S. Army, assigned to the Naval Postgraduate School, addressing how Artificial Intelligence (AI) must be understood as an Information Operations (IO) tool if U.S. defense professionals are to develop effective countermeasures and ensure our resilience to its employment by potential adversaries.]
AI-enabled IO present a more pressing strategic threat than the physical hazards of slaughter-bots or even algorithmically-escalated nuclear war. IO are efforts to “influence, disrupt, corrupt, or usurp the decision-making of adversaries and potential adversaries;” here, we’re talking about using AI to do so. AI-guided IO tools can empathize with an audience to say anything, in any way needed, to change the perceptions that drive those physical weapons. Future IO systems will be able to individually monitor and affect tens of thousands of people at once. Defense professionals must understand the fundamental influence potential of these technologies if they are to drive security institutions to counter malign AI use in the information environment.
Programmatic marketing, using consumers’ data habits to drive real-time automated bidding on personalized advertising, has been used for a few years now. Cambridge Analytica’s Facebook targeting made international headlines using similar techniques, but digital electioneering is just the tip of the iceberg. An AI trained with data from users’ social media accounts, economic media interactions (Uber, Apple Pay, etc.), and their devices’ positional data can infer predictive knowledge of its targets. With that knowledge, emerging tools, like Replika, can truly befriend a person, allowing it to train that individual, for good or ill.
Substantive feedback is required to train an individual’s response; humans tend to respond best to content and feedback with which they agree. That content can be algorithmically mass produced. For years, Narrative Science tools have helped writers create sports stories and stock summaries, but it’s just as easy to use them to create disinformation. That’s just text, though; today, AI can create fake video. A recent warning, ostensibly from former President Obama, provides an entertaining yet frightening demonstration of how Deepfakes will challenge our presumptions about truth in the coming years. The Defense Advanced Research Projects Agency (DARPA) is funding a project this summer to determine whether AI-generated Deepfakes will become impossible to distinguish from the real thing, even using other AI systems.
Though malign actors can now employ AI to lie “at machine speed,” they still have to get the story to an audience. Russian bot armies continue to make headlines doing this very thing. The New York Times maintains about a dozen Twitter feeds and produces around 300 tweets a day, but Russia’s Internet Research Agency (IRA) regularly puts out 25,000 tweets in the same twenty-four hours. The IRA’s bots are really just low-tech curators; they collect, interpret, and display desired information to promote the Kremlin’s narratives.
Next-generation bot armies will employ far faster computing techniques and profit from an order of magnitude greater network speed when 5G services are fielded. If “Repetition is a key tenet of IO execution,” then this machine-gun-like ability to fire information at an audience will, with empathetic precision and custom content, provide the means to change a decisive audience’s very reality. No breakthrough science is needed, no bureaucratic project office required. These pieces are already there, waiting for an adversary to put them together.
The DoD is looking at AI but remains focused on image classification and swarming quadcopters while ignoring the convergent possibilities of predictive audience understanding, tailored content production, and massive scale dissemination. What little digital IO we’ve done, sometimes called social media “WebOps,” has been contractor heavy and prone to naïve missteps. However, groups like USSOCOM’s SOFWERX and the students at the Naval Postgraduate School are advancing the state of our art. At NPS, future senior leaders are working on AI, now. A half-dozen of the school’s departments have stood up classes and events specifically aimed at operationalizing advanced computing. The young defense professionals currently working on AI should grapple with emerging influence tools and form the foundation of the DoD’s future institutional capabilities.
MAJ Chris Telley is an Army information operations officer assigned to the Naval Postgraduate School. His assignments have included theater engagement at U.S. Army Japan and advanced technology integration with the U.S. Air Force. Chris commanded in Afghanistan and served in Iraq as a United States Marine. He tweets at @chris_telley.
This blog post represents the opinions of the author and does not reflect the position of the Army or the United States Government.
The Mad Scientist Initiative brings together cutting-edge leaders and thinkers from the technology industry, research laboratories, academia, and across the military and Government to explore the impact of potentially disruptive technologies. Much like Johannes Gutenberg’s moveable type (illustrated above), these transformational game changers have the potential to impact how we live, create, think, and prosper. Understanding their individual and convergent impacts is essential to continued battlefield dominance in the Future Operational Environment. In accordance with The Operational Environment and the Changing Character of Future Warfare, we have divided this continuum into two distinct timeframes:
The Era of Accelerated Human Progress (Now through 2035):
The period where our adversaries can take advantage of new technologies, new doctrine, and revised strategic concepts to effectively challenge U.S. military forces across multiple domains. Game changers during this era include:
• Robotics: Forty-plus countries are developing military robots with some level of autonomy, with implications for society and employment. Vulnerabilities: Cyber/Electromagnetic (EM) disruption, battery life, ethical concerns absent a man in the loop. Formats: Unmanned/autonomous ground/air/subsurface/sea systems; nano-weapons. Examples: (Air) Hunter/killer Unmanned Aerial Vehicle (UAV) swarms; (Ground) Russian Uran: Recon, ATGMs, SAMs.
• Artificial Intelligence: Human-Agent Teaming, where humans and intelligent systems work together to achieve either a physical or mental task. The human and the intelligent system will trade-off cognitive and physical loads in a collaborative fashion.
• Swarms/Semi Autonomous: Massed, coordinated, fast, collaborative, small, stand-off. Overwhelm target systems. Mass or disaggregate.
• Internet of Things (IoT): Trillions of internet linked items create opportunities and vulnerabilities. Explosive growth in low Size Weight and Power (SWaP) connected devices (Internet of Battlefield Things), especially for sensor applications (situational awareness). Greater than 100 devices per human. Significant end device processing (sensor analytics, sensor to shooter, supply chain management). Vulnerable: To Cyber/EM/Power disruption. Privacy concerns regarding location and tracking. Sensor to shooter: Accelerate kill chain, data processing, and decision-making.
• Space: Over 50 nations operate in space, which is increasingly congested and difficult to monitor, endangering Positioning, Navigation, and Timing (PNT). GPS Jamming/Spoofing: Increasingly sophisticated, used successfully in Ukraine. Anti-Satellite: China has tested two direct ascent anti-satellite missiles.
The Era of Contested Equality (2035 through 2050):
The period marked by significant breakthroughs in technology and convergences in terms of capabilities, which lead to significant changes in the character of warfare. During this period, traditional aspects of warfare undergo dramatic, almost revolutionary changes which at the end of this timeframe may even challenge the very nature of warfare itself. Game changers during this era include:
• Hyper Velocity Weapons: Rail Guns (Electrodynamic Kinetic Energy Weapons): Electromagnetic projectile launchers. High velocity/energy (Mach 5 or higher). Not powered by explosives. No Propellant: Easier to store and handle. Lower Cost Projectiles: Potentially. Extreme G-force requires sturdy payloads. Limiting factors: Power, significant IR signature, materials science. Hyper Glide Vehicles: Less susceptible to anti-ballistic missile countermeasures.
• Directed Energy Weapons: Signature not visible without technology; must dwell on target. Power requirements currently problematic. Potential: Tunable, lethal, and non-lethal. Laser: Directed energy damages intended target. Targets: Counter aircraft, UAS, missiles, projectiles, sensors, swarms. Radio Frequency (RF): Attacks targets across the frequency spectrum; microwave weapons “cook” targets, both people and electronics.
• Synthetic Biology: Engineering/modification of biological entities. Increased Crop Yield: Potential to reduce food scarcity. Weaponization: Potential for micro-targeting; seek-and-destroy microbes that can target DNA. Potentially accessible to super-empowered individuals. Medical Advances: Enhance soldier survivability. Genetic Modification: Disease resistance; potentially designer babies and super athletes/soldiers. Synthetic DNA stores digital data, which can be used for micro-targeting. CRISPR: Genome editing.
• Information Environment: Use IoT and sensors to harness the flow of information for situational understanding and decision-making advantage.
In envisioning Future Operational Environment possibilities, the Mad Scientist Initiative employs a number of techniques. We have found Crowdsourcing (i.e., gathering ideas, thoughts, and concepts from a wide variety of interested individuals) to be a particularly effective technique, as it diversifies our thinking and challenges conventional assumptions. To that end, we have published our latest, 2-page compendium of Potential Game Changers here, and we would like to hear your feedback regarding them. Please let us know your thoughts / observations by posting them in this blog post’s Comment box (found below, in the Leave a Reply section). Alternatively, you can also submit them to us via email at: firstname.lastname@example.org. Thank you in advance for your contributions!
What do these issues mean for the nature and character of warfare?
The nature of war, which has remained relatively constant from Thucydides, through Clausewitz, through the Cold War, and on into the present, certainly remains constant through the Era of Accelerated Human Progress (i.e., now through 2035). War is still waged because of fear, honor, and interest, and remains an expression of politics by other means. However, as we move into the Era of Contested Equality (i.e., 2035-2050), the character of warfare has changed in several key areas:
• The Moral and Cognitive Dimensions are Ascendant.
The proliferation of high technology, coupled with the speed of human interaction and pervasive connectivity, means that no one nation will have an absolute strategic advantage in capabilities. When breakthroughs occur, the advantages they confer will be fleeting, as rivals quickly adapt. Under such conditions, the physical dimension of warfare may become less important than the cognitive and the moral. As a result, some powers will place fewer self-imposed restrictions on the use of military force, and hybrid strategies involving information operations, direct cyber-attacks against individuals, segments of populations, or national infrastructure, terrorism, the use of proxies, and Weapons of Mass Destruction (WMD) will aim to prevail against an enemy’s will.
• Integration across Diplomacy, Information, Military, and Economic (DIME).
Clausewitz’s timeless dictum that war is policy by other means takes on new importance as the distance between war and policy recedes, but policy must also take into account other elements of national power to form true whole-of-government and, when possible, collective security approaches to national security issues. The interrelationship across the DIME will require closer integration across all elements of government, and Joint decision-making bodies will need to quickly and effectively deliver DIME effects across the physical, cognitive, and moral dimensions. Military operations are an essential element of this equation, but may not necessarily be the decisive means of achieving an end state.
• Limitations of Military Force.
While mid-Century militaries will have more capability than at any time in history, their ability to wage high-intensity conflict will become more limited. Force-on-force conflict will be so destructive, will be waged at the new speed of human and AI-enhanced interaction, and will occur at such extended ranges that exquisitely trained and equipped forces facing a peer or near-peer rival will rapidly suffer significant losses in manpower and equipment that will be difficult to replace. Robotics, unmanned vehicles, and man-machine teaming activities offer partial solutions, but warfare will still revolve around increasingly vulnerable human beings. Military forces will need to consider how advances in AI, bio-engineering, man-machine interface, neuro-implanted knowledge, and other areas of enhanced human performance and learning can quickly help reduce the long lead time in training and developing personnel.
• The Primacy of Information.
In the timeless struggle between offense and defense, information will become the most important and most useful tool at all levels of warfare. The ability of an actor to use information to target the enemy’s will, without necessarily having to address its means, will only increase. In the past, nations have tried to target an enemy’s will through kinetic attacks on its means (the enemy military) or through the direct targeting of the will by attacking the national infrastructure or the national populace itself. Sophisticated, nuanced information operations, taking advantage of an ability to directly target an affected audience through cyber operations or other forms of influence operations, and reinforced by a credible, capable armed force, can bend an adversary’s will before battle is joined.
• Expansion of the Battle Area.
Nations, non-state actors, and even individuals will be able to target military forces and civilian infrastructure at increasing, often intercontinental, ranges using a host of conventional and unconventional means. A force deploying to a combat zone will be vulnerable from the individual soldier’s personal residence, to his or her installation, and during his or her entire deployment. Adversaries also will have the ability to target or hold at risk non-military infrastructure and even populations with increasingly sophisticated, nuanced, and destructive capabilities, including WMD, hypersonic conventional weapons, and perhaps most critically, cyber weapons and information warfare. WMD will not be the only threat capable of directly targeting and even destroying a society, as cyber and information attacks can directly target infrastructure, banking, food supplies, power, and general ways of life. Limited wars focusing on a limited area of operations waged between peers or near-peer adversaries will become more dangerous as adversaries will have an unprecedented capability to broaden their attacks to their enemy’s homeland. The U.S. Homeland likely will not avoid the effects of warfare and will be vulnerable in at least eight areas.
• Ethics of Warfare Shift. Traditional norms of warfare, definitions of combatants and non-combatants, and even what constitutes military action or national casus belli will be turned upside down and remain in flux at all levels of warfare.
– Does cyber activity, or information operations aimed at influencing national policy, rise to the level of warfare?
– Is using cyber capabilities to target a national infrastructure legal, if it has broad societal impacts?
– Can one target an electric grid that supports a civilian hospital, but also powers a military base a continent away from the battle zone from which unmanned systems are controlled?
– What is the threshold for WMD use?
– Is the use of autonomous robots against human soldiers legal?
These and other questions will arise, and likely will be answered differently by individual actors.
The changes in the character of war by mid-Century will be pronounced, and are directly related and traceable to our present. The natural progression of the changes in the character of war may be a change in the nature of war, perhaps towards the end of the Era of Contested Equality or in the second half of the Twenty-First Century.
The Future Operational Environment’s Era of Contested Equality (i.e., 2035 through 2050) will be marked by significant breakthroughs in technology and convergences, resulting in revolutionary changes that challenge the very nature of warfare itself. No one actor is likely to have any long-term strategic or technological advantage during this period of enduring competition. Prevailing in this environment will depend on an ability to synchronize multi-domain capabilities against an artificial intelligence-enhanced adversary with an overarching capability to visualize and understand the battlespace at even greater ranges and velocities.
Ms. Elsa Kania, Adjunct Fellow, Technology and National Security Program, Center for a New American Security (CNAS), presented “People’s Liberation Army (PLA) Human-Machine Integration” at last month’s Bio Convergence and Soldier 2050 Conference. In this presentation, Ms. Kania addressed China’s on-going initiatives that seek to change military power paradigms via competition and innovation in a number of key technologies. This post summarizes Ms. Kania’s presentation.
Under President Xi Jinping‘s leadership, China is becoming a major engine of global innovation, second only to the United States. China’s national strategy of “innovation-driven development” places innovation at the forefront of economic and military development. These efforts are beginning to pay off, as Beijing is becoming as innovative as Silicon Valley. China continues to strengthen its military through a series of ambitious Science and Technology (S&T) plans and investments, focusing on disruptive and radical innovations that will enable it to seize the high ground with decisive technologies (e.g., AI, hypervelocity, and biotechnology).
President Xi leads China’s Central Military-Civil Fusion Development Commission, whose priorities include intelligent unmanned systems, biology and cross-disciplinary technologies, and quantum S&T. Through the implementation of a “whole of nation” strategy, President Xi is leveraging private sector advances for military applications. This strategy includes the establishment of Joint Research Institutes to promote collaborative R&D; new national labs focused on achieving dual-use advances; and collaboration within national military-civil fusion innovation demonstration zones. Major projects concentrate on quantum communications and computing, brain science, and brain-inspired research.
China aims to be the world’s premier Artificial Intelligence (AI) innovation center by 2030. Building upon the successes of AlphaGo, the PLA is seeking to establish a “Battlefield Singularity,” leveraging AI’s potential in planning, operational command and control, decision support tools, wargaming, and brain-computer interfaces controlling unmanned systems. They will deepen military-civil fusion AI initiatives with Baidu, Alibaba Group, Tencent, and iFLYTEK. The Chinese see AI as a potential game-changer, a way to augment perceived military shortcomings.
This focused initiative on innovation may result in China’s First Offset, characterized by integrating quantum satellites with fiber optic communication networks; human-machine interfaces; drone swarms able to target carrier task forces; naval rail guns; and quantum computing.
Potential areas for biotechnology and AI convergences include:
• “Intelligentized” Command Decision-Making: The Joint Staff Department of the Central Military Commission (CMC) has called for the PLA to leverage the “tremendous potential” of AI in planning, operational command, and decision support. Ongoing research is focusing on command automation and “intelligentization,” with experimental demonstrations of an “external brain” for commanders and decision support systems for fighter pilots and submarines.
• Brain-Computer Interfaces: Active research programs in brain-computer interfaces are underway (e.g., at PLA Information Engineering University, Tsinghua University), enabling “brain control” of robotic and “unmanned” systems and potentially facilitating brain networking.
• Military Exoskeletons: Several prototype exoskeletons have been tested and demonstrated to date, augmenting soldiers’ physical capabilities, with the latest generations being more capable and closer to being fielded by the PLA.
• CRISPR in China: Gene editing is currently underway with animals and human embryos due to less stringent regulatory requirements in the PRC. BGI (a would-be “bio-Google”) is currently soliciting DNA from Chinese geniuses in an attempt to understand the genomic basis for intelligence.
• Chinese Superintelligence: The Chinese aspire to develop “brain-like” or human-level AI. Their new National Engineering Laboratory for Brain-Inspired Intelligence Technologies and Applications, with Baidu involvement, is focusing on learning from the human brain to tackle AI, advancing next-generation AI technologies.
While technological advantage has been a key pillar of U.S. military power and national competitiveness, China is rapidly catching up. Future primacy in AI and biotech, likely integral in future warfare, could remain contested between the U.S. and China. The PLA will continue to explore and invest in these key emerging technologies in its on-going drive for innovation dominance.
For more information regarding the PLA’s on-going innovation efforts:
Watch Ms. Kania’s video presentation and read the associated slides from the Bio Convergence and Soldier 2050 Conference.
(Editor’s Note: Mad Scientist Laboratory is pleased to present Dr. Peter Emanuel’s guest blog post, illustrating how popular culture often presages actual technological advancements and scientific breakthroughs.)
Did Dick Tracy’s wrist watch telephone or Star Trek’s communicator inspire future generations of scientists and engineers to build today’s smartphone? Or were they simply depicting the inevitable manifestation of future technology? If we look back on old issues of Superman comic books that depict a 3D printer half a century before it was invented, we can see popular media has foreshadowed future technology, time and time again. Clearly, there are many phenomena, from time travel to force fields, that have not, and may not ever see the light of day; however, there are enough examples to suggest that dedicated and forward-thinking scientists, trying to defend the United States, should consider this question:
Can comic books, video games, television, and movies give us a glimpse into the battlefield of the future?
For today’s Mad Scientist blog, consider what the future may hold for defense against weapons of mass destruction.
Let’s address the 800 lb. gorilla in the room first! Or, perhaps, the 800 lb. dinosaur, by talking about biological warfare in the future. The movie Jurassic Park depicts the hubris of man trying to control life by “containing” its DNA. Our deeper understanding of DNA shows us that life is programmed to be redundant and error prone. It’s actually a fundamental feature that drives evolution. In the year 2050, if we are to control our genetically modified products, we must master containment and control for a system designed since the dawn of time to NOT be contained. Forget bio-terror… What about bio-Error?! Furthermore, the lesson in Jurassic Park from the theft of the frozen dinosaur eggs shows us the asymmetric impact that theft of genetic products can yield. Today, our adversaries amass databases on our genetic histories through theft and globalization, and one only has to ask, “What do they know that we should be worried about?”
Let’s move from biology to chemistry. A chemist will argue that biology is just chemistry, and at some level it’s true. Echoing the movie Outlander and anime like Cowboy Bebop, today’s Middle East battlefield shows the use of CAPTAGON, an addictive narcotic blend used to motivate and subjugate radical Islamists. In 2050, our mastery of tailored chemistry will likely lead to more addictive or targeted drug use that could elicit unpredictable or illogical behaviors. Controlled delivery of mood/behavior altering drugs will frustrate efforts to have a military workforce managed by reliability programs and will require layered and redundant controls, even on trusted populations. Such vulnerabilities will likely be a justification for placing weapons and infrastructure under some level of artificial intelligence in the year 2050. Imagine this is the part of the blog where we talk about the Terminator and CyberDyne Systems.
Today, the thought of man-machine interfaces depicted by the Borg from Star Trek and TV shows such as Aeon Flux and Ghost in the Shell may make our skin crawl. In 2050, societal norms will likely evolve to embrace them, driven by the competitive advantage that implants and augmentation afford. Cyborgs and genetic chimeras will blur the line between what is man and what is machine; it will usher in an era when a computer virus can kill, and it will further complicate our ability to identify friend from foe in a way best depicted by the recent Battlestar Galactica TV show. Will the point of need manufacturing systems of the future be soulless biological factories like those depicted in Frank Herbert’s book series, “Dune”? As we prepare for engaging in a multi-domain battlespace by extending our eyes and ears over the horizon with swarming autonomous drones, are we opening a window into the heart and mind of our future fighting force?
Some final thoughts for the year 2050, when we maintain a persistent presence off planet Earth. As Robert Heinlein predicted, and recent NASA experiments proved, our DNA changes during prolonged exposure to altered gravity. What of humans who never set foot on Earth’s surface, as shown in the recent movie, The Fate of our Stars? Eventually, non-terrestrially based populations will diverge from the gene pool, perhaps kindling a debate on what is truly human. Will orbiting satellites with hyperkinetic weapons such as those pictured in GI Joe Retaliation add another dimension to the cadre of weapons of mass destruction? I would argue that popular media can help spur these discussions and give future mad scientists a glimpse into the realm of the possible. To that end, I think we can justify a little binge watching in the name of national security!
If you enjoyed this post, please check out the following:
– Headquarters, U.S. Army Training and Doctrine Command (TRADOC) is co-sponsoring the Bio Convergence and Soldier 2050 Conference with SRI International in Menlo Park, California, on 08-09 March 2018. Click here to learn more about the conference, the associated on-line game, and then watch the live-streamed proceedings, starting at 0840 PST / 1140 EST on 08 March 2018.
– Our friends at Small Wars Journal are continuing to publish the finalists from our most recent Call for Ideas — click here to check them out!
Dr. Peter Emanuel is the Army’s Senior Research Scientist (ST) for Bioengineering. In this role, he advises Army Leadership on harnessing the opportunities that synthetic biology and biotechnology can bring to National Security.
“Speed is the essence of war. Take advantage of the enemy’s unpreparedness; travel by unexpected routes and strike him where he has taken no precautions.” — Sun Tzu
Mad Scientist Laboratory has addressed the impact of Autonomy, Artificial Intelligence (AI), and Robotic Trends in previous posts. Consequential in their own right, particularly in the hands of our adversaries, these technology trends are further amplified by their collective speed, scope, and convergence, leading ultimately to man-machine co-evolution.
Speed. Some Mad Scientists posit that the rate of progress in these technologies will be “faster than Moore’s law.” As our adversaries close the technology gap and potentially overtake us in select areas, there is clearly a “need for speed” as cited in the Defense Science Board (DSB) Report on Autonomy. The speed of actions and decisions will need to increase at a much higher pace over time.
“… the study concluded that autonomy will deliver substantial operational value across an increasingly diverse array of DoD missions, but the DoD must move more rapidly to realize this value. Allies and adversaries alike also have access to rapid technological advances occurring globally. In short, speed matters—in two distinct dimensions. First, autonomy can increase decision speed, enabling the U.S. to act inside an adversary’s operations cycle. Secondly, ongoing rapid transition of autonomy into warfighting capabilities is vital if the U.S. is to sustain military advantage.” — DSB Summer Study on Autonomy, June 2016 (p. 3)
Scope. It may be necessary to increase not only the pace but also the scope of these decisions if these technologies generate the “extreme future” characterized by Mad Scientist Dr. James Canton as “hacking life” / “hacking matter” / “hacking the planet.” In short, no aspect of our current existence will remain untouched. Robotics, artificial intelligence, and autonomy – far from narrow topics – are closely linked to a broad range of enabling / adjunct technologies identified by Mad Scientists, to include:
• Computer Science, particularly algorithm design and software engineering
• Man-Machine Interface, to include Language / Speech and Vision
• Sensing Technologies
• Power and Energy
• Mobility and Manipulation
• Material Science to include revolutionary new materials
• Quantum Science
• 3D (Additive) Manufacturing
• Positioning, Navigation and Timing beyond GPS
Science and Technological Convergence. Although 90% of the technology development will occur in the very fragmented, uncontrolled private sector, there is still a need to view robotics, artificial intelligence and autonomy as a holistic, seamless system. Technology convergence is a recurring theme among Mad Scientists. They project that we will alter our fundamental thinking about science because of the “exponential convergence” of key technologies, including:
• Nanoscience and nanotechnology
• Biotechnology and Biomedicine
• Information Technology
• Cognitive Science and Neuroscience
• Quantum Science
This convergence of technologies is already leading to revolutionary achievements with respect to sensing, data acquisition and retrieval, and computer processing hardware. These advances in turn enable machine learning to include reinforcement learning and artificial intelligence. They also facilitate advances in hardware and materials, 3D printing, robotics and autonomy, and open-sourced and reproducible computer code. Exponential convergence will generate “extremely complex futures” that include capability “building blocks” that afford strategic advantage to those who recognize and leverage them.
Co-Evolution. Clearly humans and these technologies are destined to co-evolve. Humans will be augmented in many ways: physically, via exoskeletons; perceptually, via direct sensor inputs; genetically, via AI-enabled gene-editing technologies such as CRISPR; and cognitively, via AI “COGs” and “Cogni-ceuticals.” Human reality will be a “blended” one in which physical and digital environments, media, and interactions are woven together in a seamless integration of the virtual and the physical. As daunting, and worrisome, as these technological developments might seem, there will be an equally daunting challenge in the co-evolution between man and machine: the co-evolution of trust.
Trusted man-machine collaboration will require validation of system competence, a process that will take our legacy test and verification procedures far beyond their current limitations. Humans will expect autonomy to be nonetheless “directable,” and will expect autonomous systems to be able to explain the logic for their behavior, regardless of the complexity of the deep neural networks that drive it. These technologies in turn must be able to adapt to user abilities and preferences, and attain some level of human awareness (e.g., cognitive, physiological, emotional state, situational knowledge, intent recognition).
For additional information on The Convergence of Future Technology, see Dr. Canton’s presentation from the Mad Scientist Robotics, Artificial Intelligence, & Autonomy Conference at Georgia Tech Research Institute last March.