BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
PRODID:iCalendar-Ruby
BEGIN:VEVENT
CATEGORIES:Lecture or Presentation
DESCRIPTION:Speaker: Aditi Raghunathan\, Carnegie Mellon University\n\n
 Abstract: In recent years\, foundation models (large pretrained models
  that can be adapted for a wide range of tasks) have achieved state-of
 -the-art performance across many domains. The adaptation or fine-tunin
 g process is a crucial component that enables specialization to the ta
 sk of interest\, and it is the de facto standard for mitigating risks
  such as toxic and harmful generations from large language models.\n\n
 While pretrained models are trained on broad data\, the adaptation (or
  fine-tuning) process is often performed on limited\, well-curated dat
 a. How well does fine-tuning generalize beyond this narrow training di
 stribution? Via theory and experiments\, we show how to improve curren
 t fine-tuning approaches so that they can better leverage diverse pret
 raining knowledge and improve downstream performance across broader se
 ttings than the narrow fine-tuning data. On the flip side\, we show th
 at pretrained knowledge can be hard to get rid of\, thereby underlinin
 g the potential perils of over-reliance on fine-tuning for safety.\n\n
 Bio: Aditi Raghunathan is an Assistant Professor in the Computer Scien
 ce Department at CMU. She received her Ph.D. from Stanford in 2021 and
  her Bachelor of Technology from IIT Madras in 2016. She is a recipien
 t of the Okawa research grant\, the Schmidt AI2050 Early Career Fellow
 ship\, the Google Research Scholar Award\, Rising Stars in EECS\, the
  Google PhD Fellowship\, the Open Philanthropy AI Fellowship\, the Sta
 nford School of Engineering Fellowship\, and the Google Anita Borg Mem
 orial Fellowship. She was featured in the Forbes 30 Under 30 list for
  her contributions to reliable machine learning. Her Ph.D. thesis was
  awarded the Arthur Samuel Best Thesis Award at Stanford. Her research
  has also been recognized by multiple orals and spotlights at top conf
 erences\, and a Best Paper Award at the Data Problems in ML Workshop a
 t ICLR 2024.
DTEND:20250214T230000Z
DTSTAMP:20260414T083704Z
DTSTART:20250214T220000Z
GEO:44.567164;-123.278692
LOCATION:Kelley Engineering Center\, 1001
SEQUENCE:0
SUMMARY:Understanding the promises and limits of fine-tuning
UID:tag:localist.com\,2008:EventInstance_48782727740528
URL:https://events.oregonstate.edu/event/understanding-the-promises-and-lim
 its-of-fine-tuning
END:VEVENT
END:VCALENDAR
