Dive deep into NVIDIA Omniverse, the platform for connecting and developing OpenUSD applications, in this two-day masterclass brought to you by the NVIDIA Partner Expert Program.

Offering both virtual sessions open to all partners and hands-on labs for in-person attendees, this masterclass provides the opportunity to explore how Omniverse can optimize your customers’ workflows. From virtual factory integration to product configurators and design reviews, you’ll learn how to scope the opportunity and expand your knowledge of NVIDIA’s solutions.

Omniverse Masterclass, Day One

On Demand


On Omniverse Masterclass Day 1, you will:

  • Discover how to build Digital Twins with Omniverse
  • Learn how digital representations of physical objects can be augmented with Generative AI to generate training data for vision models, which can then be deployed for inference in the physical world (a minimal synthetic data sketch follows this list).
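
For a flavor of the synthetic data generation covered on Day 1, the sketch below uses Omniverse Replicator from the Script Editor of an Omniverse application. It is a minimal, hypothetical example: the labeled cube, pose ranges, and output directory are illustrative assumptions, not the masterclass lab content.

  # Hypothetical Replicator sketch; scene contents and output path are assumptions.
  import omni.replicator.core as rep

  with rep.new_layer():
      # A camera and render product define what gets captured each frame.
      camera = rep.create.camera(position=(0, 0, 500))
      render_product = rep.create.render_product(camera, (1024, 1024))

      # A stand-in asset with a semantic label so annotations can be written.
      widget = rep.create.cube(semantics=[("class", "widget")])

      # Randomize the pose every frame to vary the generated images.
      with rep.trigger.on_frame(num_frames=100):
          with widget:
              rep.modify.pose(
                  position=rep.distribution.uniform((-200, -200, 0), (200, 200, 0)),
                  rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
              )

      # Write RGB frames plus tight 2D bounding boxes for vision-model training.
      writer = rep.WriterRegistry.get("BasicWriter")
      writer.initialize(output_dir="_output", rgb=True, bounding_box_2d_tight=True)
      writer.attach([render_product])

The resulting images and labels can then feed the training of the vision models mentioned above.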

Pre-Work
To gain maximum benefit from the Omniverse Masterclass, we encourage you to study the recommended self-paced Omniverse training for this track as outlined on the NVIDIA Partner Expert Program website.

Omniverse Masterclass, Day Two

On Demand


On Omniverse Masterclass Day 2, you will:

  • Learn about Omniverse developer tools and templates
  • Get to know custom extensions and how to integrate NGC as an API endpoint in your Omniverse application (a minimal extension skeleton follows this list).
  • Discover how to integrate your custom extension into a custom app using developer tools and how to deploy and stream your application.
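
To make the Day 2 topics concrete, here is a minimal, hypothetical Kit extension skeleton of the kind the developer tools and templates generate; the class name and log messages are assumptions for illustration.

  # Hypothetical minimal Omniverse Kit extension; names are illustrative.
  import omni.ext

  class ExampleExtension(omni.ext.IExt):
      """Kit calls these lifecycle hooks when the extension is enabled or disabled."""

      def on_startup(self, ext_id):
          # Runs when the extension is enabled: build UI, register callbacks, etc.
          print(f"[{ext_id}] extension startup")

      def on_shutdown(self):
          # Runs when the extension is disabled: release any resources held above.
          print("extension shutdown")

Calls to external services, such as NGC-hosted API endpoints, would typically live behind functions invoked from on_startup or from UI callbacks the extension creates.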

Pre-Work
To gain maximum benefit from the Omniverse Masterclass, we encourage you to study the recommended self-paced Omniverse training for this track as outlined on the NVIDIA Partner Expert Program website.

Thursday, February 29, 2024, 11:00 a.m. GMT



Summary


Conversational AI technologies are becoming ubiquitous, with countless products coming to market that take advantage of automatic speech recognition, natural language understanding, and speech synthesis. Thanks to new tools and technologies, developing conversational AI applications is easier than ever, enabling a much broader range of applications, such as virtual assistants, real-time transcription, and many more.


Speaker



Oleg Sudakov
Deep Learning Solutions Architect
NVIDIA


Oleg Sudakov is a deep learning solutions architect at NVIDIA, where he concentrates on Large Language Model training and Conversational AI.

Previously, he worked as a machine learning engineer and data scientist at Apple, Huawei, and Rakuten. He has a strong background in natural language processing and speech processing.

Oleg is based in Germany.



Thursday, February 29, 2024, 2:00 p.m. GMT



Summary


In this session, we will give an in-depth presentation of Triton and its main components. We will also demonstrate how to quickly get started with Triton and use it in real-world applications on-premises, in the cloud, or in a mixed environment. In addition, we provide you with the scripts and code to jumpstart your Triton expertise.
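
As a flavor of the kind of client code involved, the sketch below sends an inference request to a running Triton server over HTTP with the tritonclient package; the server URL, model name, and tensor names and shapes are assumptions that depend on your model repository, not the session’s actual scripts.

  # Hypothetical Triton HTTP client call; model and tensor names are assumptions.
  import numpy as np
  import tritonclient.http as httpclient

  client = httpclient.InferenceServerClient(url="localhost:8000")

  # Build a request input matching the model's configuration (config.pbtxt).
  batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
  infer_input = httpclient.InferInput("input", list(batch.shape), "FP32")
  infer_input.set_data_from_numpy(batch)

  response = client.infer(model_name="resnet50", inputs=[infer_input])
  print(response.as_numpy("output").shape)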

TensorRT is an optimizing compiler for neural network inference. NVIDIA TensorRT-based applications perform up to 36X faster than CPU-only platforms during inference, enabling you to optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded platforms, or automotive product platforms.
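
For illustration, a common TensorRT workflow is to parse an exported ONNX model and build a reduced-precision engine with the Python API; the sketch below assumes a model.onnx file, enables FP16, and omits error handling and INT8 calibration for brevity.

  # Hypothetical sketch: build an FP16 TensorRT engine from an ONNX model.
  import tensorrt as trt

  logger = trt.Logger(trt.Logger.WARNING)
  builder = trt.Builder(logger)
  network = builder.create_network(
      1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
  )
  parser = trt.OnnxParser(network, logger)

  with open("model.onnx", "rb") as f:
      if not parser.parse(f.read()):
          raise RuntimeError("Failed to parse the ONNX model")

  config = builder.create_builder_config()
  config.set_flag(trt.BuilderFlag.FP16)  # request reduced-precision kernels

  engine_bytes = builder.build_serialized_network(network, config)
  with open("model.plan", "wb") as f:
      f.write(engine_bytes)

The serialized model.plan can then be placed in a Triton model repository and served alongside models from other frameworks.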


Speaker



Sergio Perez Perez
Solution Architect
NVIDIA


Sergio Perez is a Solution Architect at NVIDIA specialised in Conversational AI. He has experience in optimising training and inference of LLMs, building Retrieval-Augmented Generation systems, and working with companies in different sectors to leverage AI. His area of expertise at NVIDIA is quantisation and inference serving. Previously, he worked as an AI engineer at Graphcore and Amazon, and he holds a PhD in computational fluid dynamics from Imperial College London.



Thursday, February 29, 2024, 1:00 p.m. GMT



Summary


Data science empowers enterprises to analyze and optimize various aspects such as business processes, supply chains, and digital experiences. However, data preparation and machine learning tasks often consume a significant amount of time. In time-sensitive scenarios, like predicting demand for perishable goods, speeding up data science workflows becomes a crucial advantage. With RAPIDS, a collection of open-source GPU libraries, data scientists can enhance their Python toolchain, achieve higher productivity, and improve model accuracy with minimal code modifications.
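
To show what “minimal code modifications” can look like, the hypothetical sketch below swaps pandas for cuDF from RAPIDS; the CSV file and column names are assumptions, not webinar material.

  # Hypothetical sketch: a pandas-style workflow running on the GPU with cuDF.
  import cudf  # RAPIDS GPU DataFrame library with a pandas-like API

  sales = cudf.read_csv("daily_sales.csv")            # load data onto the GPU
  sales["revenue"] = sales["units"] * sales["price"]
  per_store = (
      sales.groupby("store_id")["revenue"]
      .sum()
      .sort_values(ascending=False)
  )
  print(per_store.head())

Because the API mirrors pandas, existing workflows often need little more than the import change to benefit from GPU acceleration.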


Speakers




Miguel Martínez
Senior Deep Learning Data Scientist
NVIDIA


Miguel Martínez is a senior deep learning data scientist at NVIDIA, where he concentrates on Large Language Models, Recommender Systems, and Data Engineering Pipelines with RAPIDS.

Previously, he mentored students at Udacity's Artificial Intelligence Nanodegree. He has a strong background in financial services, mainly focused on payments and channels.

Select one or more of the following sessions and complete registration.

Click any session listing in the registration form to view its details.

Please follow the link to view your sessions:
