Boost your understanding of the core infrastructure that underpins NVIDIA’s accelerated computing solutions in this two-day masterclass from the NVIDIA Partner Expert program.

On day one, you’ll discover how to run and accelerate traditional x86 workloads on NVIDIA’s Arm-based CPU and CPU-GPU platforms: NVIDIA Grace and Grace Hopper.

Day two will focus on networking, where you’ll learn how to size and configure your customers’ networks for best-performing AI.

Infrastructure Masterclass: Accelerated CPU

OnDemand



Summary


You’ll learn how to run and accelerate traditional x86 workloads on NVIDIA’s Arm-based CPU and CPU-GPU platforms: NVIDIA Grace and Grace Hopper.


In the CPU masterclass, join us online or on-site to learn:

  • The fundamentals of accelerated CPU computing
  • Insights into the current market, value proposition, and near-term roadmap
  • How Arm’s RISC architecture differs from traditional x86 CISC
  • How to transition workloads from traditional to accelerated CPU computing
  • Onsite: from unboxing and installation to cluster management


Enroll in the NVIDIA Partner Expert program to receive invitations to all sessions and work towards earning your certificate: NVIDIA Partner Expert Program.


Speakers



Jason Tichy
Senior Data Scientist
NVIDIA





Ruben Raadsheer
Channel Enablement Manager EMEA
NVIDIA





Jits Langedijk
ProViz and Partner SA Manager EMEA
NVIDIA


Infrastructure Masterclass: Accelerated Networking

OnDemand



Summary


This masterclass focuses on networking: you’ll learn how to size and configure your customers’ networks for best-performing AI.


In the networking masterclass, join us online or on-site to learn:

  • How AI impacts data center design
  • Networking requirements for training foundation models and deploying generative AI
  • Ethernet or InfiniBand? Get a deep understanding of the key differences and relative strengths in AI environments
  • How to size, configure, and optimize network configurations
  • Onsite: take a deeper dive into the NVIDIA Spectrum Ethernet switch portfolio and NVIDIA Cumulus Linux simulation environments


Enroll in the NVIDIA Partner Expert program to receive invitations to all sessions and work towards earning your certificate: NVIDIA Partner Expert Program.


Speakers



Gadi Godanyan
Senior AI Networking Lead, EMEA
NVIDIA





Ruben Raadsheer
Channel Enablement Manager EMEA
NVIDIA





Jits Langedijk
ProViz and Partner SA Manager EMEA
NVIDIA



Thursday, February 29, 2024, 11:00 a.m. GMT



Summary


Conversational AI technologies are becoming ubiquitous, with countless products coming to market that take advantage of automatic speech recognition, natural language understanding, and speech synthesis. Thanks to new tools and technologies, developing conversational AI applications is easier than ever, enabling a much broader range of applications, such as virtual assistants, real-time transcription, and more.


Speaker



Oleg Sudakov
Deep Learning Solutions Architect
NVIDIA


Oleg Sudakov is a deep learning solutions architect at NVIDIA, where he concentrates on large language model training and conversational AI.

Previously, he worked as a machine learning engineer and data scientist at Apple, Huawei, and Rakuten. He has a strong background in natural language processing and speech processing.

Oleg is based in Germany.



Thursday, February 29, 2024, 2:00 p.m. GMT



Summary


In this session, we’ll give an in-depth presentation of Triton and its main components. We’ll also demonstrate how to quick-start and use Triton in real-world applications on-premises, in the cloud, or in a mixed environment. In addition, we’ll provide you with the scripts and code to jumpstart your Triton expertise.
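To give a flavor of the quick-start, Triton serves models from a model repository in which each model directory carries a `config.pbtxt` describing its inputs and outputs. A minimal sketch of such a configuration follows; the model name, backend, and tensor shapes are purely illustrative, not taken from the session materials:

```
name: "my_model"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Placed at `my_model/config.pbtxt` next to a versioned subdirectory (e.g. `my_model/1/model.onnx`), a file along these lines is what lets Triton load and serve the model.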

TensorRT is an optimizing compiler for neural-network inference. NVIDIA TensorRT-based applications perform up to 36X faster than CPU-only platforms during inference, enabling you to optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded platforms, or automotive product platforms.


Speaker



Sergio Perez Perez
Solution Architect
NVIDIA


Sergio Perez is a solutions architect at NVIDIA specializing in conversational AI. He has experience optimizing training and inference of LLMs, building retrieval-augmented generation systems, and working with companies across sectors to leverage AI. His area of expertise at NVIDIA is quantization and inference serving. Previously, he worked as an AI engineer at Graphcore and Amazon, and he holds a PhD in computational fluid dynamics from Imperial College London.



Thursday, February 29, 2024, 1:00 p.m. GMT



Summary


Data science empowers enterprises to analyze and optimize various aspects such as business processes, supply chains, and digital experiences. However, data preparation and machine learning tasks often consume a significant amount of time. In time-sensitive scenarios, like predicting demand for perishable goods, speeding up data science workflows becomes a crucial advantage. With RAPIDS, a collection of open-source GPU libraries, data scientists can enhance their Python toolchain, achieve higher productivity, and improve model accuracy with minimal code modifications.
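The "minimal code modifications" point can be sketched with a toy example; the DataFrame and column names below are invented for illustration, not drawn from the session. cuDF, the RAPIDS GPU DataFrame library, exposes a pandas-compatible API, so typically only the import changes:

```python
# cuDF mirrors the pandas API: on a machine with RAPIDS installed,
# swapping the import below for `import cudf as xd` runs the same
# code on the GPU. The pandas version is shown so the sketch runs anywhere.
import pandas as xd

# Toy demand data (illustrative names only).
df = xd.DataFrame({
    "store": ["A", "A", "B", "B"],
    "units": [10, 14, 7, 9],
})

# Group-by aggregation: identical code path on pandas and cuDF.
demand = df.groupby("store")["units"].sum()
print(demand.to_dict())  # {'A': 24, 'B': 16}
```

The same swap applies to filtering, joins, and most other DataFrame operations, which is what keeps the porting cost low for time-sensitive workflows.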


Speakers




Miguel Martinez
Senior Deep Learning Data Scientist
NVIDIA


Miguel Martínez is a senior deep learning data scientist at NVIDIA, where he concentrates on large language models, recommender systems, and data engineering pipelines with RAPIDS.

Previously, he mentored students at Udacity's Artificial Intelligence Nanodegree. He has a strong background in financial services, mainly focused on payments and channels.

Select one or more of the following sessions and complete registration.

Click any session listing in the registration form to view its details.


DGX Station Datasheet

Get a quick low-down and technical specs for the DGX Station.
DGX Station Whitepaper

Dive deeper into the DGX Station and learn more about the architecture, NVLink, frameworks, tools and more.
