
2461 SW Campus Way, Corvallis, OR 97331

How to Reason About the Obligations of Autonomous Systems

In this dissertation we investigate how to reason about an autonomous system's obligations. In particular, we explore how to verify, elicit, and enforce rules about how a system should behave as it maximizes a utility function. To specify obligations we use deontic logics with modalities that capture agency, obligation, and temporal necessity. Unlike logics commonly used to reason about a system's behaviors (such as Linear Temporal Logic), deontic logics can express the difference between what an agent should do and what it is possible for it to do. This difference makes deontic logics better suited to reasoning about how an agent's rewards shape its behavior. We apply Dominance Act Utilitarian deontic logic to reasoning about autonomous systems, and develop a model checking algorithm for the logic. We then give an algorithm for learning a system's optimal behaviors from human feedback, and discuss how obligations can be discovered from those optimal behaviors. We also develop Expected Act Utilitarian deontic logic to specify the obligations of agents in stochastic systems, and explore how to model check this logic at scale. We return to the problem of logical reasoning at scale by developing a neural network for checking a system's specifications, and by proposing an extension of that design to deontic logic. Finally, we use deontic logic specifications as constraints in an agent's search for a reward-maximizing policy, allowing a system designer to enforce the satisfaction of a given obligation.
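As an illustrative sketch of that distinction (the notation below follows Horty-style act utilitarian stit logics, on which Dominance Act Utilitarian logic builds; it is not necessarily the exact notation used in the dissertation):

    G φ               an LTL assertion that φ holds at every step of every execution, i.e., what the system does
    ⊙[α cstit: φ]     a deontic assertion that agent α ought to see to it that φ, i.e., what the agent should do,
                      with the "ought" settled by comparing the utilities of the choices available to α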

MAJOR ADVISOR: Houssam Abbas
COMMITTEE: Alan Fern
COMMITTEE: Prasad Tadepalli
COMMITTEE: Kagan Tumer
GCR: Sharon Shen
