Deep Learning Inference Platform

Introduction


Register now


Wednesday, February 23, 2022 | 10:00am - 11:00am PT

AI inference can deliver faster, more accurate predictions to organizations of all sizes, but building a platform for production AI inference is hard. Real-world use cases require different types of AI model architectures, and the models can contain hundreds of millions of parameters. Models are trained in different frameworks (TensorFlow, PyTorch, XGBoost, Python, and others) and come in different formats. Applications have different requirements (real-time low latency, high-throughput batch, or streaming inputs), and there are different execution environments (CPUs, GPUs, in the cloud, on premises, at the edge). Achieving high-performance inference on specific hardware, or within a single framework, is challenging because modern AI applications demand that competing constraints—latency, accuracy, throughput, and memory footprint—be balanced. Modern AI applications demand low latency, high accuracy, high throughput, and a small memory footprint all at once, and these constraints compete with one another.

Join our webinar to explore how NVIDIA’s inference solution, including open-source NVIDIA Triton™ Inference Server and NVIDIA® TensorRT™, delivers fast and scalable AI inference in production.

In the webinar, you’ll learn:
  • How to optimize, deploy, and scale AI models in production using Triton Inference Server and TensorRT
  • How to run inference on both CPUs and GPUs, and how to use the model analyzer for efficient deployment
  • How to standardize workflows for optimizing models using TensorRT and its framework integrations with PyTorch and TensorFlow
  • About real-world customer use cases and the benefits customers are seeing with Triton and TensorRT
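As a small taste of the deployment workflow covered above, here is a minimal sketch of a Triton model configuration (`config.pbtxt`) for serving a TensorRT engine. The model name, tensor names, and dimensions are illustrative assumptions, not material from the webinar itself.

```protobuf
# Minimal Triton model configuration (config.pbtxt) -- illustrative only.
# Model name, tensor names, and shapes below are hypothetical.
name: "resnet50_trt"
platform: "tensorrt_plan"   # serve a serialized TensorRT engine
max_batch_size: 8
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
# Dynamic batching lets Triton group individual requests into larger
# batches on the server side, trading a little latency for throughput.
dynamic_batching { }
```

Placed in a model repository, a file like this is what Triton reads to decide how to load and schedule the model; the webinar goes into the full workflow in detail.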

Join us after the presentation for a live Q&A session.



DGX Station Datasheet

Get a quick low-down and technical specs for the DGX Station.
DGX Station Whitepaper

Dive deeper into the DGX Station and learn more about the architecture, NVLink, frameworks, tools and more.


Speakers

Shankar Chandrasekaran

Senior Product Marketing Manager, NVIDIA

Shankar is a Senior Product Marketing Manager on the data center GPU team at NVIDIA. He is responsible for marketing GPU software infrastructure, helping IT and DevOps teams easily adopt and seamlessly integrate GPUs into their infrastructure. Before NVIDIA, he held engineering, operations, and marketing positions at both small and large technology companies. He holds business and engineering degrees.
Jay Rodge

Product Marketing Manager, NVIDIA

Jay Rodge is a Product Marketing Manager for deep learning and inference products at NVIDIA, driving product launches and marketing initiatives. Jay received his master’s degree in computer science from Illinois Tech in Chicago, with a focus on computer vision and NLP. Before NVIDIA, Jay was an AI research intern at BMW Group, solving computer vision problems for BMW’s largest manufacturing plant.


WEBINAR REGISTRATION


THANK YOU FOR REGISTERING FOR THE WEBINAR



You will shortly receive an email with instructions on how to join the webinar.

Register


Date & Time: Wednesday, February 23, 2022 | 10:00am - 11:00am PT