Multimodal Data Augmentation for Data-Efficient Robot Manipulation
About this Event
Speaker: Daniel Seita, Assistant Professor, University of Southern California
Location: Bexell 415, 2251 SW Campus Way, Corvallis, OR 97331 (also available via Zoom)
Time: April 10, 2-3 p.m.
Abstract:
Despite recent advances, learning-based robot manipulation systems often require large demonstration datasets and degrade in cluttered scenes or when handling deformable objects. This talk presents diffusion-based multimodal data augmentation methods that synthesize consistent observations and action labels. By augmenting limited demonstrations, these approaches substantially reduce data requirements and enable robust manipulation in complex, real-world settings.
Bio:
Daniel Seita is an Assistant Professor in the Computer Science department at the University of Southern California and the director of the Sensing, Learning, and Understanding for Robotic Manipulation (SLURM) Lab. His research interests are in computer vision, machine learning, and foundation models for robot manipulation, with a focus on improving performance in visually and geometrically challenging settings. Daniel was a postdoc at Carnegie Mellon University's Robotics Institute and holds a PhD in computer science from the University of California, Berkeley. He was selected for the AAAI 2026 New Faculty Highlights program, and he presents his work at premier robotics conferences such as ICRA, IROS, RSS, and CoRL.