
Introduction

As demand for faster, more efficient AI infrastructure grows, a high-speed network that can handle the intensive data-transfer requirements of large language models is essential. InfiniBand has emerged as the top choice for these workloads thanks to its unmatched speed, low latency, and In-Network Computing capabilities.

Join NVIDIA, 650 Group, and Microsoft for an insightful discussion on how InfiniBand is transforming the AI landscape, and learn how to use this technology to accelerate AI workloads and drive system efficiency.

Webinar attendees will learn:

  • How InfiniBand technology is revolutionizing the way AI infrastructures are built and managed.
  • How InfiniBand can speed up the training and inference of large language models, and how it can drive system efficiency by reducing the need for data movement between nodes.
  • How InfiniBand can support other AI workloads, such as deep learning and computer vision, with real-world examples of how companies such as Microsoft are using this technology to achieve breakthrough results in their AI projects.

Learn more about the NVIDIA Quantum InfiniBand Platform

Register Now



DGX Station Datasheet

Get a quick overview and the technical specs for the DGX Station.
DGX Station Whitepaper

Dive deeper into the DGX Station and learn more about the architecture, NVLink, frameworks, tools and more.


Speakers

Gilad Shainer
Senior Vice President of Networking
NVIDIA
Gilad Shainer serves as senior vice president of networking at NVIDIA. He chairs the HPC-AI Advisory Council, is president of the UCF and CCIX consortia, a member of the InfiniBand Trade Association, and a contributor to the PCI-SIG PCI-X and PCIe specifications. He holds multiple patents in high-speed networking and received the 2015 R&D 100 award for his contribution to CORE-Direct In-Network Computing technology and the 2019 R&D 100 award for his contribution to Unified Communication X (UCX) technology. Gilad holds M.S. and B.S. degrees in electrical engineering from the Technion - Israel Institute of Technology.
Nidhi Chappell
General Manager of Microsoft Azure HPC and AI
Microsoft Azure
Nidhi Chappell leads Workload Optimized Compute for Azure. Under her leadership, the team is responsible for transitional workloads (on-premises workloads now adapting to the cloud) and modern, cloud-native workloads with specific infrastructure requirements beyond the general-purpose fleet. The team owns Azure offerings for Confidential Computing, SAP, HPC, and AI. Before joining Microsoft, Nidhi spent 12 years at Intel, where she led the development of AI product strategy and Intel's efforts around converged workflows for HPC and AI. Nidhi holds an M.S. in computer engineering from the University of Wisconsin, an MBA from the University of Michigan, and a U.S. patent on branch prediction.
Alan Weckel
Founder and Technology Analyst
650 Group
Alan Weckel is a recognized data center networking, enterprise, and cloud market expert. He publishes quarterly market share and forecast data for switching and cloud markets. His research spans verticals, use cases, and speed transitions. Alan is pioneering research in areas such as AI networking, white box, data center switching, and merchant silicon.





Date & Time: Wednesday, April 22, 2018