Events Calendar

PhD Preliminary Oral Exam – Arash Azizimazreah

Designing Energy-efficient, Flexible and Reliable Deep Learning Accelerators

State-of-the-art deep learning models, especially deep convolutional neural networks (DCNNs), can achieve very high accuracy on a wide range of tasks and are thus being deployed in many conventional and emerging fields. Due to the special computation and memory behaviors of DCNNs, hardware accelerators have become increasingly important for achieving speedup and power efficiency that are not possible on today’s general-purpose computing platforms. In this research, we investigate several design aspects of hardware deep learning accelerators, including performance, energy efficiency, and reliability. Accelerators can generate a considerable amount of off-chip memory traffic when accessing feature maps and weights, and these off-chip accesses are extremely costly in both latency and energy. We introduce the abstraction of logical buffers to address the lack of flexibility in existing buffer architectures, and propose an architecture that mines unexploited opportunities for on-chip data reuse in order to reduce off-chip traffic. The logical buffers are built on a new memory bank design that tolerates soft errors for higher reliability. We also explore runtime configuration of the processing element (PE) array in accelerators to boost performance.
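To illustrate why on-chip data reuse matters, the following sketch models off-chip traffic for a matrix multiply (a core DCNN kernel) under two schedules: a naive one that re-reads operands for every output, and one with a hypothetical on-chip buffer that keeps a tile of one operand resident. The buffer model and tile size are illustrative assumptions, not the dissertation's actual architecture.

```python
import math

def naive_offchip_words(M, N, K):
    # Naive schedule: every output C[i][j] re-reads a row of A and a
    # column of B from off-chip memory, i.e. 2*K words per output.
    return 2 * M * N * K

def buffered_offchip_words(M, N, K, tile_cols):
    # Hypothetical on-chip buffer holds tile_cols columns of B.
    # B is loaded from off-chip exactly once (K*N words); A is
    # re-streamed once per column tile (ceil(N/tile_cols) passes
    # of M*K words each).
    passes = math.ceil(N / tile_cols)
    return passes * M * K + K * N

M = N = K = 64
print(naive_offchip_words(M, N, K))          # 524288 words
print(buffered_offchip_words(M, N, K, 16))   # 20480 words
```

Even this simple reuse scheme cuts modeled off-chip traffic by more than 25x, which is the kind of opportunity a flexible buffer architecture aims to exploit.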

Major Advisor: Lizhong Chen
Committee: Bella Bose
Committee: Matthew Johnston
Committee: Fuxin Li
GCR: William H. Warnes

Tuesday, December 4, 2018, from 10:00am to 12:00pm


Kelley Engineering Center, 1007
110 SW Park Terrace, Corvallis, OR 97331

Event Type

Lecture or Presentation

Event Topic

Research

Organization

College of Engineering, Electrical Engineering and Computer Science

Contact Name

Calvin Hughes

Contact Email

Calvin.Hughes@oregonstate.edu

