6. Trends in Autonomy

“Control leads to compliance; autonomy leads to engagement.” – Daniel H. Pink

During the Robotics, Artificial Intelligence & Autonomy Conference, hosted by the Georgia Tech Research Institute (GTRI) on 7-8 March 2017, Mad Scientists addressed how these interdependent technologies will play key roles in future military operations, including land operations.

To better address Autonomy’s relevance to future military operations, the Mad Scientist community identified the following Autonomy Trends:

Autonomy Definition. The Joint Concept for Robotics and Autonomous Systems defines autonomy as follows:


“… the level of independence that humans grant a system to execute a given task. It is the condition or quality of being self-governing to achieve an assigned task based on the system’s own situational awareness (integrated sensing, perceiving, analyzing), planning and decision-making. Autonomy refers to a spectrum of automation in which independent decision-making can be tailored for a specific mission, level of risk, and degree of human-machine teaming.”

Degrees of Autonomy. The phrase “spectrum of automation” alludes to the different degrees of autonomy:

Fully Autonomous: “Human Out of the Loop”: no ability for human to intervene in real time.

Supervised Autonomous: “Human on the Loop”: humans can intervene in real time.

Semi-Autonomous: “Human in the Loop”: machines wait for human input before taking action.

Non-Autonomous (Remote Control): machines guided directly via remote control; no autonomy in the system.
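
To make the spectrum concrete, here is a minimal Python sketch of the four degrees as an enumeration; the type, member, and helper names are illustrative assumptions, not drawn from doctrine or any fielded system:

```python
from enum import Enum

class AutonomyDegree(Enum):
    """Degrees of autonomy, least to most autonomous (illustrative, not doctrinal)."""
    NON_AUTONOMOUS = 0    # remote control: a human directly guides the machine
    SEMI_AUTONOMOUS = 1   # human in the loop: machine waits for human input
    SUPERVISED = 2        # human on the loop: a human can intervene in real time
    FULLY_AUTONOMOUS = 3  # human out of the loop: no real-time intervention

def human_can_intervene(degree: AutonomyDegree) -> bool:
    """Only a fully autonomous system excludes real-time human intervention."""
    return degree is not AutonomyDegree.FULLY_AUTONOMOUS
```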

Autonomy Baseline. Autonomy is already evident on the battlefield. At least 30 countries have defensive, human-supervised autonomous weapons such as the Aegis and Patriot. Some “fully autonomous” weapon systems are also emerging. The Israeli Harpy drone (an anti-radiation loitering munition) has been sold to India, Turkey, South Korea, and China, and China has reportedly reverse-engineered its own variant. The U.S. has also experimented with similar systems in the Tacit Rainbow and Low Cost Autonomous Attack System (LOCAAS) programs.

Autonomy Projections. Mad Scientists expect autonomy to evolve into solutions that are flexible, multi-modal, and goal-oriented, featuring trusted man-machine collaboration, distributed autonomy, and continuous learning.

Collaborative Autonomy will enable systems to learn and adapt to perform a new task based on mere demonstration of the task by end-users (i.e., Soldiers), who teach the robot what to do.
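
As a rough illustration of what “teaching by demonstration” could mean in software, the Python sketch below imitates whichever demonstrated action was taken in the state most similar to the current one (one-nearest-neighbor behavioral cloning). The class and method names are hypothetical, and a fielded system would need far richer perception and generalization:

```python
import numpy as np

class DemonstrationPolicy:
    """Toy learning-from-demonstration: store (state, action) pairs shown by
    the end-user, then imitate the action from the nearest recorded state."""

    def __init__(self):
        self.states = []   # states observed during the demonstration
        self.actions = []  # actions the demonstrator took in those states

    def record(self, state, action):
        """Store one (state, action) pair from the end-user's demonstration."""
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(action)

    def act(self, state):
        """Imitate: return the action taken in the most similar demonstrated state."""
        state = np.asarray(state, dtype=float)
        distances = [np.linalg.norm(state - s) for s in self.states]
        return self.actions[int(np.argmin(distances))]

# Usage: a Soldier demonstrates two situations; the robot generalizes to a new one.
policy = DemonstrationPolicy()
policy.record([0.0, 0.0], "advance")
policy.record([5.0, 1.0], "halt")
print(policy.act([4.2, 0.8]))  # -> "halt"
```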

Distributed Autonomy will enable dynamic team formation from heterogeneous platforms, including coordination in settings with limited or impaired communications, and the emergence of new tactics and strategies enabled by multi-agent capabilities.
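
One common baseline for such dynamic team formation is market-based task allocation. The Python sketch below greedily assigns each task to the free agent that “bids” the lowest travel cost; it is a single-process toy (the function name and inputs are invented for the example), and a fielded distributed system would additionally have to run the auction across platforms and tolerate degraded communications:

```python
import numpy as np

def auction_allocate(agent_positions, task_positions):
    """Greedy single-item auction: repeatedly assign the globally cheapest
    (agent, task) pair among those still unassigned."""
    agents = {i: np.asarray(p, float) for i, p in enumerate(agent_positions)}
    tasks = {j: np.asarray(p, float) for j, p in enumerate(task_positions)}
    assignment = {}
    while agents and tasks:
        i, j = min(
            ((a, t) for a in agents for t in tasks),
            key=lambda at: np.linalg.norm(agents[at[0]] - tasks[at[1]]),
        )
        assignment[j] = i  # task j goes to agent i
        del agents[i], tasks[j]
    return assignment

# Two agents, two tasks: each task goes to the nearer platform.
print(auction_allocate([(0, 0), (10, 0)], [(1, 1), (9, 2)]))  # {0: 0, 1: 1}
```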

Continuous Learning will provide incremental evolution and expansion of capabilities, including the incorporation of high-level guidance (such as human instruction and changes in laws, ROE, and constraints) and “Transfer Learning.”
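
The incorporation of high-level guidance (new laws, ROE, or constraints) can be pictured as keeping the rules outside the learned model, so they can be changed without retraining. The Python sketch below is a minimal illustration under that assumption; the class, the rule format, and the “hold” fallback are all invented for the example:

```python
class ConstrainedPolicy:
    """Wrap a learned policy with explicit, human-updatable constraints (e.g., ROE).
    The learner can keep improving while the rule set changes independently:
    high-level guidance is data, not model weights."""

    def __init__(self, base_policy, roe_rules=None):
        self.base_policy = base_policy          # any callable: state -> action
        self.roe_rules = list(roe_rules or [])  # predicates: (state, action) -> bool

    def update_roe(self, rule):
        """Incorporate new guidance (e.g., a changed ROE) at any time."""
        self.roe_rules.append(rule)

    def act(self, state, fallback="hold"):
        action = self.base_policy(state)
        if all(rule(state, action) for rule in self.roe_rules):
            return action
        return fallback  # a rule was violated: defer to a safe default

# Usage: forbid "engage" unless the target is positively identified.
policy = ConstrainedPolicy(base_policy=lambda s: "engage")
policy.update_roe(lambda s, a: a != "engage" or s.get("pid", False))
print(policy.act({"pid": False}))  # -> "hold"
print(policy.act({"pid": True}))   # -> "engage"
```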

Autonomy Challenges. Mad Scientists acknowledged that the aforementioned “autonomy projections” pose the following challenges:

• Goal-Oriented Autonomy: Decision and adaptation, to include the incorporation of ethics and morality into decision-making.

• Trusted Collaboration: The challenge of trust between man and machine continues to be a dominant theme. Machines must properly perceive human goals and preserve their autonomous system integrity while achieving joint man-machine goals in a manner that is explainable to, and completely trusted by, the human component.

• Distributed Systems: Rethinking the execution of tasks using multiple, distributed agents while preserving command-level understanding and decision-making adds a further layer of complexity to the already challenging task of designing and building autonomous systems.

• Transfer Learning: Learning by inference from similar tasks must address the challenges of seamless adaptation to changing contexts and environments, including the contextual inference of missing data and physical attributes (see the sketch following this list).

• High Reliability Theory: “Normal Accident Theory” holds that accidents are inevitable in complex, tightly-coupled systems. “High Reliability Theory” asserts that organizations can contribute significantly to the prevention of accidents. Because of the significant complexity and “tight-coupling” of future autonomous systems, there is an obvious challenge in the application of high reliability theory to emerging technologies that are not yet well comprehended.
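
On the Transfer Learning challenge above: the intuition is that knowledge learned on a data-rich source task can warm-start learning on a related, data-poor target task. Below is a minimal numerical sketch in Python, assuming a hand-rolled least-squares fit; every name and number is invented for illustration:

```python
import numpy as np

def fit_linear(X, y, w0=None, lr=0.1, steps=20):
    """Least-squares fit by gradient descent, optionally warm-started at w0."""
    w = np.zeros(X.shape[1]) if w0 is None else w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))

# Source task: plentiful data, so its weights are learned well.
w_source = fit_linear(X, X @ np.array([1.0, -2.0, 0.5]), steps=500)

# Target task: similar but shifted weights, and only 8 samples.
true_w = np.array([1.1, -1.9, 0.4])
X_small, y_small = X[:8], X[:8] @ true_w

w_scratch = fit_linear(X_small, y_small)                # cold start
w_transfer = fit_linear(X_small, y_small, w0=w_source)  # warm start from source

print(np.linalg.norm(w_scratch - true_w))   # typically the larger error
print(np.linalg.norm(w_transfer - true_w))  # typically the smaller error
```

After the same handful of updates, the warm-started weights typically land closer to the target task’s true weights than the cold start, which is the essence of transfer; the hard part in the field is doing this seamlessly as contexts and environments change.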

Relevance of Autonomous Systems. Hollywood inevitably envisions autonomous systems as either predisposed to malevolence, destined to “go rogue” and turn on their creators at the earliest opportunity, or coolly logical, dispassionately taking actions with disastrously unintended consequences for humankind. For the foreseeable future, however, no autonomous system will have the breadth, robustness, and flexibility of human cognition. That said, autonomous systems offer the potential for speed, mass, and penetration capabilities in future lethal, high-threat environments, minimizing risks to our Soldiers.

For additional insights regarding Autonomy Trends, watch “Unmanned and Autonomous Systems,” presented by Mr. Paul Scharre, Senior Fellow and Director, Future of Warfare Initiative, Center for a New American Security, during the GTRI Conference last spring.


5 Replies to “6. Trends in Autonomy”

  1. In “Degrees of Autonomy,” between “Fully Autonomous” and “Supervised Autonomous,” there is the case where humans can perform after-mission reviews of fully autonomous systems and adjust how the autonomous systems will behave in the future. In this case, humans define the behavioral policies for machines (ROE for machines, telling machines how to think) and can then perform after-mission reviews, where the behavioral policies can be adjusted if necessary for future missions. Key attributes are that behavioral policies are easily explainable and auditable. This capability is provided with CompSim’s KEEL (Knowledge Enhanced Electronic Logic) Technology. Maybe this would be called a “Human-Monitored Loop” or a “Human-Controlled Loop”?

  2. Related reference items for consideration:

    https://www.geospatialworld.net/news/darpa-assured-autonomy-seeks-guarantee-safety-learning-enabled-autonomous-systems/

    I know CompSim’s KEEL has been identified in previous blogs, but a collection of autonomy-related references is here: http://www.compsim.com/military/militaryRelatedInformation.html

    There is a CompSim paper (on DTIC), “Yes, Computerized Systems Can Have Reasoning Powers,” that doesn’t seem to be in the above reference listing; it is short (8-9 pages) and not a bad read related to this posting.

  3. Additional discussion of artificial intelligence from a human-centric perspective — for your consideration (source: pwc via @mikequindazzi, posted on the Mad Scientist Twitter feed):

    1. Automated intelligence: Improves human productivity by automating manual tasks (e.g., software that compares documents and spots inconsistencies and errors).

    2. Assisted intelligence: Helps people perform tasks faster and better (e.g., medical image classification, real-time operational efficiency improvement).

    3. Augmented intelligence: Helps people make better decisions by analyzing past behavior (e.g., media curation, guided personal budgeting, on-the-fly decision analysis).

    4. Autonomous intelligence: Automates decision-making processes without human intervention while also putting controls into place (e.g., self-driving vehicles, full-fledged language translation, robots that mimic people).

  4. This is a great article, as it is a good starting point for terms of reference as well. The Office of Naval Research (as well as the UK’s “Systems Engineering for Autonomous Systems Defence Technology Centre”) has adopted a 6-level scale that can be found in the NATO publication Autonomous Systems: Issues for Defence Policymakers (Table 2.2, pp. 63-64 of 347; printed page 41). It would be good to ensure we have commonality across DoD.

  5. Perhaps the entire topic of Autonomy can be simplified to:
    1. For the machine: automating where to go and what to do (incorporating AI)
    2. Identifying who is responsible
    I still think the learning aspects are independent and may or may not be required. Do we want machines to learn on their own whom to kill? Machines are built by humans to amplify human capabilities (speed, power, size, quality of outcome, avoiding human error, …), at least for the near timeframe.
