High-performance Memory Hierarchy Design for Throughput Processors
2019-05-10

  Speaker: Rachata Ausavarungnirun, Carnegie Mellon University

  Time: Tuesday, May 14, 2019, 14:15–16:00

  Venue: Fourth-floor Lecture Hall

  Abstract

  The continued growth in the computational capability of throughput processors has made them the platform of choice for a wide variety of high-performance computing applications. Graphics Processing Units (GPUs) are a prime example of throughput processors that can deliver high performance for applications ranging from typical graphics workloads to general-purpose data-parallel (GPGPU) applications. However, this success has been accompanied by new performance bottlenecks throughout the memory hierarchy of GPU-based systems. This talk identifies and eliminates performance bottlenecks caused by major sources of interference throughout the memory hierarchy. Specifically, we provide an in-depth analysis of bottlenecks at the caches, main memory, address translation, and page-level transfers that significantly degrade the performance and efficiency of GPU-based systems. To minimize these bottlenecks, we introduce changes to the memory hierarchy of GPU-based systems that make the entire memory hierarchy aware of applications' characteristics. Our proposal introduces a combination of GPU-aware cache and memory management techniques that effectively mitigates performance bottlenecks in the memory hierarchy of current and future GPU-based systems, as well as other types of throughput processors.

  Short Bio

  Rachata Ausavarungnirun is a postdoctoral researcher at Carnegie Mellon University and a lecturer at the Sirindhorn International Thai-German Graduate School of Engineering at King Mongkut's University of Technology North Bangkok. His research spans multiple topics across computer architecture and system software, with an emphasis on GPU architecture, heterogeneous CPU-GPU architecture, management of GPUs in the cloud, memory subsystems, memory management, processing-in-memory, non-volatile memory, networks-on-chip, and accelerator design. Rachata received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University in 2017.
