Events Calendar

PhD Final Exam – Mandana Hamidi Haines

Learning from Examples and Interactions

Humans are remarkably efficient at learning by interacting with other people and observing their behavior. Children learn by watching their parents' actions and mimicking their behavior. When they are unsure about some of their parents' behavior, they communicate with them, ask questions, and learn from the feedback. Conversely, parents and teachers ask children to explain their behavior; this explanation helps the parents know whether the children have learned the task correctly. Learning by imitation and interacting with other humans are the primary ways children come to understand and reproduce human behavior. Asking questions and receiving feedback reduces children's confusion and uncertainty, and explaining their decisions is another way children build their parents' trust. So why not build intelligent systems that learn from examples and interactions with humans and explain their decisions to humans? This dissertation makes three contributions toward this goal.

The first contribution is a new approach to the discovery of hierarchical structure in sequential decision problems. Given a set of expert demonstrations, our approach learns a hierarchical policy by actively selecting demonstrations and using queries to explicate their intentional structure at selected points.

The second contribution is a generalization of the framework of adaptive submodularity. Adaptive submodular optimization, where a sequence of items is selected adaptively to optimize a submodular function, has found many applications, from sensor placement to active learning. We extend this work to the setting of multiple queries at each time step, where the set of available queries is randomly constrained. A primary contribution is a proof of the first near-optimal approximation bound for a greedy policy in this setting. A natural application of this framework is the crowd-sourced active learning problem, where the set of available experts and examples may vary randomly. We instantiate the new framework for multi-label learning and evaluate it in multiple benchmark domains with promising results.

The third contribution of this dissertation is a framework for explaining the decisions of deep neural networks using human-recognizable visual and/or linguistic concepts. Our approach, called interactive naming, enables human annotators to interactively group the excitation patterns of the neurons in the critical layer of the network into groups called "visual concepts". We performed a systematic study of the visual concepts produced by five human annotators. We find that a large fraction of the activation maps have recognizable visual concepts, and that there is significant agreement among the annotators about their denotations.

Major Advisor: Prasad Tadepalli
Minor Advisor: Sarah Emerson
Committee: Alan Fern
Committee: Weng-Keen Wong
GCR: John Dilles

Friday, September 20 at 10:00am to 12:00pm


Kelley Engineering Center, 1005
110 SW Park Terrace, Corvallis, OR 97331

Event Type

Lecture or Presentation

Event Topic

Research

Organization

Electrical Engineering and Computer Science

Contact Name

Calvin Hughes

Contact Email

calvin.hughes@oregonstate.edu
