Events Calendar

Reinforcement Learning Across Thousands of CPUs: Coach, Ray, and TensorFlow

Stephen Offer, AI Program Manager Intern

Deep learning is both one of the capabilities researchers and developers most want to build into their applications and one of the most computationally demanding workloads. With individual models taking anywhere from a few minutes to a few days to train on large datasets, properly developing an AI system can take months. This lecture will therefore cover distributed training: the methods by which the training process is spread across multiple nodes in a compute cluster.
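The core idea behind synchronous data-parallel training can be sketched without any framework: each worker computes a gradient on its own shard of the data, the gradients are averaged (an "all-reduce"), and every worker applies the same update. A minimal, framework-free illustration (the function and variable names here are purely illustrative, not from any of the libraries discussed in the talk):

```python
# Sketch of synchronous data-parallel SGD: each simulated "worker"
# computes a gradient on its data shard, the gradients are averaged,
# and a single synchronized update is applied.

def gradient(w, shard):
    # dL/dw of mean squared error for the model y = w * x on one shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def train_step(w, shards, lr=0.005):
    grads = [gradient(w, s) for s in shards]   # one gradient per worker
    avg = sum(grads) / len(grads)              # "all-reduce": average them
    return w - lr * avg                        # everyone applies the same update

# Data generated from y = 3 * x, split across four simulated workers.
data = [(x, 3.0 * x) for x in range(1, 17)]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(100):
    w = train_step(w, shards)
# w converges toward the true slope, 3.0
```

Real frameworks replace the inner list comprehension with gradients computed concurrently on separate machines, so the wall-clock cost per step stays roughly constant as the dataset is split more ways.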

It will also touch on reinforcement learning, an exciting subset of machine learning in which an agent learns to perform complex tasks by interacting with its environment. Famous examples include AlphaGo and AlphaStar (which beat professional-level players at Go and StarCraft II), DQN (super-human performance on Atari games), and the ANYmal robot (whose locomotion is learned with RL rather than hand-engineered with inverse kinematics). Most importantly, this talk will be about how well these algorithms scale and their advantages over traditional deep learning.
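The agent/environment loop at the heart of RL fits in a few lines. As a toy illustration (this corridor environment is invented for the sketch, and tabular Q-learning is far simpler than the deep methods behind DQN or AlphaGo, but the loop structure is the same):

```python
import random

# Toy RL example: a 5-cell corridor where the agent earns reward 1
# for reaching the rightmost cell. Tabular Q-learning is off-policy,
# so we can explore with purely random actions and still learn the
# greedy (optimal) value function.

N, GOAL = 5, 4
ACTIONS = (-1, +1)                        # move left / move right

def env_step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N)]        # Q[state][action index]
alpha, gamma = 0.5, 0.9

for _ in range(300):                      # episodes
    s, done = 0, False
    while not done:
        a = random.randrange(2)           # purely exploratory behavior
        s2, r, done = env_step(s, ACTIONS[a])
        # Q-learning update: move Q toward reward + discounted best next value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy prefers "right" in every non-goal cell.
```

Because each episode's rollouts are independent, this loop is what distributed RL systems parallelize across thousands of CPUs: many workers collect experience simultaneously while a learner consumes it.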

More than just a look at the theory behind distributed systems, the lecture will cover the frameworks used to build them: Distributed TensorFlow, Intel's RL Coach, and UC Berkeley RISELab's Ray. It will finish with an overview of how to get access to free compute clusters and software, so listeners can begin experimenting with distributed AI and reinforcement learning on their own.
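Of the frameworks mentioned, Ray's programming model is the easiest to preview: ordinary Python functions become parallel "remote tasks" (decorate with `@ray.remote`, launch with `f.remote(x)`, collect with `ray.get(...)`). A library-free sketch of that same submit-futures-then-collect pattern using only the standard library (`rollout` is a hypothetical stand-in for an RL episode; real Ray distributes tasks across processes and machines rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

def rollout(seed):
    # Hypothetical stand-in: pretend this runs one RL episode with the
    # given seed and returns its total reward.
    return float(seed % 5)

def parallel_rollouts(seeds):
    # Submit every task up front, then gather results in order --
    # structurally the same as calling rollout.remote(s) for each seed
    # and then ray.get(futures) in Ray.
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(rollout, s) for s in seeds]
        return [f.result() for f in futures]

rewards = parallel_rollouts(range(8))
```

The appeal for RL specifically is that experience collection is embarrassingly parallel, so scaling out to more workers directly multiplies the rate at which training data is produced.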

Wednesday, March 13, 2019 at 3:00pm to 4:00pm

Kelley Engineering Center, 1005
110 SW Park Terrace, Corvallis, OR 97331

Event Type

Lecture or Presentation

Event Topic


College of Engineering, Electrical Engineering and Computer Science

Contact Name

Aayam Shrestha

Contact Email
