
DoD releases autonomy study

Relevant links are at the bottom of this post.

The Defense Science Board has released the results of its summer study on autonomy. The document provides a raft of recommendations for further incorporating autonomy into DoD systems and operations while ensuring sufficient security against adversaries' adoption of similar technologies and against the inherent vulnerabilities of the interconnected cyberspace architecture.

The board’s study focused on three areas: institutional and enterprise strategies to widen the use of autonomy, approaches to strengthening the operational pull for autonomous systems, and ways to accelerate the advancement of autonomy technology for mission applications and capabilities. Its recommendations fall along three corresponding vectors:
— Accelerating the adoption of autonomous capabilities
— Strengthening operational pull of autonomy
— Expanding the envelope of technologies available for mission use

Among the study’s conclusions:
— Action is needed in the three aforementioned study areas to build trust and enable the most effective use of autonomy to defend the nation.
— Autonomy has the potential to deliver substantial operational value across a diverse array of vital missions.
— While DoD is already incorporating autonomy into some systems, it should accelerate adoption to realize these potential benefits; many of the systems fielded to date are remotely operated rather than truly autonomous.

Autonomy, according to the study, can deliver value by mitigating operational challenges such as the need for rapid decision making, high heterogeneity and/or volume of data, intermittent communications, high complexity of coordinated action, danger of mission, and the need for high persistence and endurance.

To frame the discussion, the authors defined autonomy as resulting from the “delegation of a decision to an authorized entity to take action within specific boundaries. An important distinction is that systems governed by prescriptive rules that permit no deviations are automated, but they are not autonomous,” they wrote. “To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.”
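The board’s distinction is easier to see in code. Below is a minimal sketch, in Python, contrasting an automated rule that permits no deviation with an autonomous routine that composes and ranks courses of action within delegated boundaries. All of the names, thresholds, and the scoring scheme are this post’s illustration, not anything drawn from the report.

def automated_response(sensor_reading: float) -> str:
    """Automated: a prescriptive rule that permits no deviation."""
    if sensor_reading > 0.8:
        return "deploy_countermeasure"
    return "hold"

def autonomous_response(sensor_reading: float, world_model: dict) -> str:
    """Autonomous: composes and selects among courses of action within
    delegated boundaries, based on its model of the situation."""
    candidates = ["hold", "reposition", "jam", "deploy_countermeasure"]
    # Delegated boundary: the countermeasure option is excluded unless authorized.
    if not world_model.get("countermeasure_authorized", False):
        candidates.remove("deploy_countermeasure")
    def score(action: str) -> float:
        # Weigh each course of action against threat level and assessed risk.
        risk = world_model.get("collateral_risk", 0.5)
        utility = {"hold": 0.1,
                   "reposition": 0.4 * sensor_reading,
                   "jam": 0.7 * sensor_reading,
                   "deploy_countermeasure": sensor_reading - risk}
        return utility[action]
    return max(candidates, key=score)

print(automated_response(0.9))                              # always the rule's answer
print(autonomous_response(0.9, {"collateral_risk": 0.9}))   # weighs options, picks 'jam'

The automated function can only ever return what its rule dictates; the autonomous one weighs alternatives against its model of the situation, and the delegated boundary (here, withholding the countermeasure unless authorized) is explicit.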

The study did not recommend any major new programs, given the existing budget environment. Rather, the study’s authors recommended a series of experiments and prototypes to demonstrate clear operational value against these challenges.

One of the key challenges that greater autonomy adoption can help address is anti-access and area denial, the report said. “Anti-access and area denial (A2/AD) is a primary example of a mission that could be enhanced by autonomous systems. Autonomously operating UA [unmanned aircraft] could assume several functions now performed by manned aircraft in areas that are difficult to access (e.g., aerial refueling, airborne early warning, intelligence, surveillance, reconnaissance, anti-ship warfare, and command),” it said. “Additionally, large UA could be designed to dispense small UA that could operate autonomously to facilitate both offensive strike (via electronic warfare, communications jamming, or decoys), as well as defensive measures as decoys, sensors and emitters, target emulators, and so on — to confuse, deceive, and attrite adversary attacks. These small swarms could be deployed as perimeter and close-in defensive actions with payloads tailored to the situation.”
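As a rough illustration of that swarm concept, the Python sketch below imagines a dispensed swarm dividing itself among decoy, jammer, sensor, and target-emulator payloads. The role names and the greedy, communications-light assignment rule are assumptions made for this post; the study specifies no such algorithm.

from dataclasses import dataclass

ROLES = ["decoy", "jammer", "sensor", "target_emulator"]

@dataclass
class SmallUA:
    uid: int
    role: str = "unassigned"

def assign_roles(swarm: list[SmallUA], threat_axis_count: int) -> None:
    """Greedy, communications-light role split: weight decoys toward
    the number of inbound threat axes, round-robin the rest."""
    decoys_needed = min(len(swarm), 2 * threat_axis_count)
    for i, ua in enumerate(swarm):
        if i < decoys_needed:
            ua.role = "decoy"
        else:
            ua.role = ROLES[1 + (i - decoys_needed) % 3]

swarm = [SmallUA(uid=i) for i in range(8)]
assign_roles(swarm, threat_axis_count=2)
for ua in swarm:
    print(ua.uid, ua.role)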

These concepts can be applied to other missions and domains. The report noted that, in the undersea domain, acoustic and RF decoy payloads (likely smaller than sea mines and thus more easily deployable from existing unmanned undersea vehicles) could significantly extend electromagnetic maneuver warfare capabilities, enabling covert options with a small observable footprint until electronic warfare operations are initiated.

The report also noted that adversaries will employ autonomy as commercial technologies become more widely available. “This situation is similar to the potential adversary use of cyber and electronic warfare. For years, it has been clear that certain countries could, and most likely would, develop the technology and expertise to use cyber and electronic warfare against U.S. forces,” the report said. “Yet most of the U.S. effort focused on developing offensive cyber capabilities without commensurate attention to hardening U.S. systems against attacks from others. Unfortunately, in both domains, that neglect has resulted in DoD spending large sums of money today to ‘patch’ systems against potential attacks.”

The U.S. must apply lessons learned from these two areas to adversarial autonomy adoption now, the report states. “The potential exploitations the U.S. could face include low observability throughout the entire spectrum from sound to visual light, the ability to swarm with large numbers of low-cost vehicles to overwhelm sensors and exhaust the supply of effectors, and maintaining both endurance and persistence through autonomous or remotely piloted vehicles,” it continued.

One of the more hotly debated issues surrounding autonomous systems is the appropriate level of autonomy. In cyber systems, for example, many officials have expressed the desire for fully autonomous tools that can respond at so-called “cyber speed,” far faster than any human could.

“I want autonomous basic security tools – not automated, I want autonomous basic security tools that I can just let go that will look at my network, sensor it, and say ‘you know what, there’s an attack happening here, we’re immediately going to quarantine this part of the network, we’re going to add some security protection over’…I can’t have people in that loop…it’s too fast,” DoD CIO Terry Halvorsen said at an event hosted by FedScoop in June.
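In miniature, the kind of tool Halvorsen describes might look like the Python sketch below: a monitor that flags a traffic anomaly on its own and quarantines the affected segment before telling a human. The baseline figures, the three-sigma test, and all names are hypothetical; a real tool would push rules to actual network hardware rather than print a message.

import statistics

BASELINE = [120, 115, 130, 125, 118]   # packets/sec seen in normal operation

def is_anomalous(rate: float, baseline: list, sigma: float = 3.0) -> bool:
    """Flag traffic sitting more than `sigma` deviations above baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return rate > mean + sigma * stdev

def quarantine(segment: str) -> None:
    # Stand-in for pushing deny rules to switches/firewalls for `segment`.
    print(f"quarantined {segment}; added protections; alerted operators")

def monitor(segment: str, observed_rate: float) -> None:
    if is_anomalous(observed_rate, BASELINE):
        # Machine-speed response: act first, report to humans afterward.
        quarantine(segment)

monitor("lab-subnet-7", observed_rate=950.0)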

Conversely, human rights groups have expressed grave concern about the potential for fully autonomous systems to deploy weapons or lethal payloads. Several leading technologists, from Stephen Hawking to Elon Musk, signed an open letter endorsing a “ban on offensive autonomous weapons beyond meaningful human control.” This is what Vice Chairman of the Joint Chiefs of Staff Gen. Paul Selva has described as the “Terminator Conundrum.” “What happens when that thing can inflict mortal harm and is empowered by artificial intelligence. How are we going to deal with that? How are we going to know what’s in the vehicle’s mind, presuming for the moment we are capable of creating a vehicle with a mind,” he said at a Brookings Institution event in January. “Those are the problem sets that I think we are going to have to deal with in the technology sector.”

The Defense Science Board’s report noted that there will be skepticism and resistance regarding the employment of autonomous weapons, and that DoD has already taken steps to address the issue through a 2012 directive. “The most important policy points to be made from the Directive,” the report notes, “that are relevant to public concerns are that there are no proscriptions for the development of lethal autonomous weapon systems, but their development would require a much more rigorous review and approval process. Emphasis is placed on assurance that the system will perform as intended and be as immune as possible to unintended loss of control, capture, or compromise by the adversary. Moreover, appropriate use of human judgment over the use of force is required and use must be in accordance with all applicable domestic and international law, in particular, the law of war.”

Peter Singer, strategist at New America and author of “Ghost Fleet,” believes it is likely autonomous systems will be deployed more in regions with low probability of civilian casualties. “It is likely in the near term that we will be more comfortable in placing autonomous systems in domains and settings where there are less likely civilian casualties, such as antisubmarine warfare, or where pattern matching against known targets is less complex (radar emissions from SAM batteries vs trying to ID a ISIS technical truck from a civilian truck),” Singer said in an email. “We’ll also see it take place in areas where the data is too complex or fast moving for human reaction time, like missile defense (where it is already largely automated) or cyber warfare.”

The report also devotes attention to fostering the man-machine team, the cornerstone of Deputy Defense Secretary Bob Work’s so-called Third Offset Strategy, which he has described as “basically hypothesiz[ing] that the advances in artificial intelligence and autonomy – autonomous systems – is going to lead to a new era of human-machine collaboration and combat teaming.” The report notes that rule-based coordination of multiple platforms and high-volume communications data transfer exist today. In the near term, observability and directability, provably correct emergent behavior, trustworthiness and trust calibration under defined conditions, and natural language processing are likely to become available. Shared “mental models,” mutual predictability, understanding of intent, fully adaptive coordination, and implicit communication might follow in the long term, the report said.
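The near-term end of that spectrum, rule-based coordination of multiple platforms, is the easiest to picture. The generic leader/follower station-keeping rule sketched below (this post’s example, not the report’s) shows why such coordination counts as automated rather than autonomous: the rule prescribes a single behavior and never selects among alternatives.

from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    x: float
    y: float

def station_keep(leader: Platform, follower: Platform,
                 offset: tuple) -> tuple:
    """Prescriptive rule: hold a fixed offset from the leader.
    No course-of-action selection, hence 'automated' coordination."""
    target_x = leader.x + offset[0]
    target_y = leader.y + offset[1]
    return (target_x - follower.x, target_y - follower.y)  # commanded move

lead = Platform("manned-lead", 10.0, 5.0)
wing = Platform("unmanned-wing", 7.0, 5.0)
print(station_keep(lead, wing, offset=(-2.0, -1.0)))  # -> (1.0, -1.0)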

“While difficult to quantify, the study concluded that autonomy — fueled by advances in artificial intelligence — has attained a ‘tipping point’ in value. Autonomous capabilities are increasingly ubiquitous and are readily available to allies and adversaries alike,” study co-chairs Ruth David and Paul Nielsen wrote. “The study therefore concluded that DoD must take immediate action to accelerate its exploitation of autonomy while also preparing to counter autonomy employed by adversaries.”

http://www.c4isrnet.com/articles/dod-releases-autonomy-study

http://www.acq.osd.mil/dsb/reports/DSBSS15.pdf