AI & Deep Learning

Introduction

Date: July 14, 2020
Time: 9:00am – 10:00am PT
Duration: 1 hour


See how NVIDIA knocks down performance impediments to storage I/O for workflows executing on the GPU.


As workflows shift away from the CPU in GPU-centric systems, the data path from storage to the GPU increasingly becomes the bottleneck. NVIDIA and its partners are relieving that bottleneck with a new technology called GPUDirect Storage, which enables direct memory access (DMA) between storage and GPU memory. This can improve bandwidth, reduce latency, cut CPU-side memory-management overhead, and reduce interference with CPU utilization.


In this talk, we’ll illustrate the benefits of GPUDirect Storage with recent results from demos and proof points in AI, data analytics, and visualization. Technical enhancements described include a compatibility mode that allows the same APIs to be used even when not all software components and platform support are in place.
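To make the API model concrete, here is a minimal sketch of reading a file directly into GPU memory with the GPUDirect Storage cuFile API (`cufile.h`). The file path and transfer size are placeholders, error handling is pared down, and the code assumes a system with the CUDA toolkit and GDS driver installed; on systems without full GDS support, compatibility mode lets the same calls fall back to a CPU-mediated path.

```c
/* Sketch: read a file straight into GPU memory with cuFile.
 * Build (assumed): nvcc gds_read.c -lcufile -o gds_read
 */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <cuda_runtime.h>
#include <cufile.h>

int main(void) {
    const size_t size = 1 << 20;            /* 1 MiB, placeholder */

    cuFileDriverOpen();                     /* initialize the GDS driver */

    /* O_DIRECT requests the direct DMA path; in compatibility mode
     * the same cuFile calls transparently use a CPU bounce buffer. */
    int fd = open("/mnt/data/sample.bin", O_RDONLY | O_DIRECT);  /* path is hypothetical */
    if (fd < 0) { perror("open"); return 1; }

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);     /* optional: pin buffer for reuse */

    /* DMA from storage into GPU memory, no staging copy in host RAM. */
    ssize_t n = cuFileRead(handle, devPtr, size, /*file_off=*/0, /*dev_off=*/0);
    printf("read %zd bytes into GPU memory\n", n);

    cuFileBufDeregister(devPtr);
    cuFileHandleDeregister(handle);
    close(fd);
    cudaFree(devPtr);
    cuFileDriverClose();
    return 0;
}
```

The key point the sketch illustrates: the destination pointer passed to `cuFileRead` is GPU memory from `cudaMalloc`, so the CPU never touches the payload on the direct path.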



In this webinar, you will learn:
  • How NVIDIA and its partners create a direct data path to the GPU that avoids CPU bottlenecks
  • How demos and proof points illustrate multi-X gains for AI, data analytics, and visualization
  • What recent technical innovations make these gains possible

WEBINAR REGISTRATION



DGX Station Datasheet

Get a quick overview and the technical specs for the DGX Station.

DGX Station Whitepaper

Dive deeper into the DGX Station architecture, NVLink, frameworks, tools, and more.


Speakers

Cj Newburn

Principal Architect, co-architect of GPUDirect Storage, and HPC lead for NVIDIA Compute Software

Chris J. Newburn (CJ) is the Principal Architect in NVIDIA Compute Software for HPC strategy and the software product roadmap, with a special focus on systems and programming models at scale. He has contributed to a combination of hardware and software technologies over the last twenty years and holds over 100 patents. He is a community builder with a passion for extending the core capabilities of hardware and software platforms from HPC into AI, data science, and visualization. Before his Ph.D. at Carnegie Mellon University, he did stints at a couple of start-ups, working on a voice recognizer and a VLIW supercomputer. He's delighted to have worked on volume products that his Mom used.

Kiran Modukuri

Senior Software Engineer, NVIDIA DGX Platform Software

Kiran Modukuri is a Senior Software Engineer on the NVIDIA DGX Platform software team, with a special focus on accelerating I/O pipelines for DGX platforms. Over the last 15 years, he has contributed to distributed file systems and caching and replication technologies. Kiran holds a Master's degree in Electrical and Computer Engineering from the University of Arizona.

