
Introduction

In this webinar, researchers and developers will learn about mixed-precision techniques for training deep neural networks on NVIDIA Tensor Core GPUs using PyTorch. First, we’ll describe real-world use cases that have achieved significant speedups with mixed-precision training, without sacrificing accuracy or stability. Next, we’ll give a conceptual overview of how and why mixed-precision training works. Finally, we’ll walk you through a live example of enabling mixed-precision training with NVIDIA’s Automatic Mixed Precision (AMP) toolkit, which implements the entire recipe automatically in only three lines of user code.
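For readers who want a preview, the three-line recipe mentioned above can be sketched roughly as follows. This is an illustrative sketch only, assuming NVIDIA's apex package and a CUDA-capable GPU; the model, optimizer, and synthetic data here are placeholders for your own training setup, not code from the webinar itself.

```python
# Sketch of the AMP recipe (assumes a CUDA GPU and NVIDIA's apex package;
# the model, optimizer, and data below are illustrative placeholders).
import torch
from apex import amp

model = torch.nn.Linear(128, 10).cuda()              # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Recipe line 1: wrap the model and optimizer for mixed precision.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

for _ in range(10):                                  # placeholder training loop
    inputs = torch.randn(32, 128, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    # Recipe lines 2-3: scale the loss so fp16 gradients don't underflow,
    # then backpropagate through the scaled loss.
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()
    optimizer.step()
```

The webinar walks through this live and explains what each of the three added lines does.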

By viewing this recorded webinar, you’ll learn:
  • Benefits of mixed-precision training on NVIDIA Tensor Core GPUs
  • Techniques of mixed-precision training: loss scaling, master weights, and preserving accuracy for selected operations
  • Performance guidelines
  • How to use NVIDIA’s toolkit for Automatic Mixed-Precision (AMP)
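To make the loss-scaling and master-weights techniques in the list above concrete, here is a small self-contained numeric sketch (our own illustration, not code from the webinar) that round-trips values through IEEE 754 half precision using Python's `struct` module:

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE 754 half precision ('e' format)."""
    return struct.unpack("e", struct.pack("e", x))[0]

# --- Loss scaling ---
# Gradients below fp16's smallest subnormal (~5.96e-8) underflow to zero.
grad = 1e-8
assert to_fp16(grad) == 0.0                 # lost without scaling

scale = 1024.0
scaled = to_fp16(grad * scale)              # representable once scaled up
recovered = scaled / scale                  # unscale in fp32
assert abs(recovered - grad) / grad < 0.01  # gradient preserved

# --- Master weights ---
# A weight update smaller than half an fp16 ulp (~4.9e-4 near 1.0) is lost
# if weights are stored in fp16, but accumulates in an fp32 master copy.
weight_fp32 = 1.0
update = 1e-4
assert to_fp16(weight_fp32 + update) == 1.0  # fp16 weight never moves
weight_fp32 += update                        # fp32 master copy does
assert weight_fp32 > 1.0
```

The webinar covers how AMP applies these two techniques (plus fp32 accumulation for selected operations) automatically.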

ON-DEMAND WEBINAR REGISTRATION

THANK YOU FOR REGISTERING FOR THE WEBINAR

You will receive an email with instructions on how to join the webinar shortly.


DGX Station Datasheet

Get a quick overview and the technical specifications for the DGX Station.
DGX Station Whitepaper

Dive deeper into the DGX Station and learn more about the architecture, NVLink, frameworks, tools and more.


Speaker

Michael Carilli

Senior Developer Technology Engineer, NVIDIA

Michael Carilli is a Senior Developer Technology Engineer on the Deep Learning Frameworks team at NVIDIA. His focus is making mixed-precision and multi-GPU training in PyTorch fast, numerically stable, and easy to use. Previously, he worked at the Air Force Research Laboratory, optimizing computational fluid dynamics (CFD) code for modern parallel architectures. He holds a PhD in computational physics from the University of California, Santa Barbara.



Register


Date & Time: April 22, 2018