AI & Deep Learning

Introduction

Date: September 10, 2019
Time: 11:00am – 12:00pm PT
Duration: 1 hour


You’ve developed your algorithm, trained your deep learning model, and optimized it for the best performance possible. What’s next?

Join this third webinar in our inference series to learn how to launch your deep learning model in production with the NVIDIA® TensorRT™ Inference Server. TensorRT Inference Server enables teams to deploy trained AI models from any framework, on any infrastructure, whether GPU- or CPU-based.

Maggie Zhang, technical marketing engineer, will introduce the TensorRT™ Inference Server and its many features and use cases. Then she’ll walk you through how to load your model into the inference server, configure the server for deployment, set up the client, and launch the service in production.
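As a rough preview of the launch step, here is a minimal sketch of starting TensorRT Inference Server from its NGC container against a local model repository. The container tag, repository path, and port mappings are illustrative assumptions, and flag names can vary between releases, so check the documentation for the version you pull:

    # Run the TensorRT Inference Server container (the 19.08 tag is only an example;
    # use a current release from NGC). /path/to/model_repository is a placeholder.
    docker run --rm --runtime=nvidia \
      -p 8000:8000 -p 8001:8001 -p 8002:8002 \
      -v /path/to/model_repository:/models \
      nvcr.io/nvidia/tensorrtserver:19.08-py3 \
      trtserver --model-store=/models

    # Quick sanity checks once the server is up (endpoint paths are from the
    # v1 HTTP API and may differ by version):
    curl localhost:8000/api/health/ready
    curl localhost:8000/api/status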

By attending this webinar, you'll learn:
  • About TensorRT™ Inference Server features and functionality for model deployment
  • How to set up the inference server model repository with models ready for deployment (a minimal repository sketch follows this list)
  • How to set up the inference server client with your application and launch the server in production to fulfill live inference requests
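For the model repository item above, the repository is simply a directory tree the server scans at startup: one directory per model, a config.pbtxt describing its inputs and outputs, and numbered version subdirectories holding the serialized model. The sketch below is an assumed example for a TensorRT plan file; the model name, tensor names, shapes, and batch size are illustrative, not prescribed:

    model_repository/
      resnet50_plan/            # one directory per model (name is an example)
        config.pbtxt            # model configuration (see below)
        1/                      # numeric version subdirectory
          model.plan            # serialized TensorRT engine

    # config.pbtxt for the hypothetical classifier above
    name: "resnet50_plan"
    platform: "tensorrt_plan"
    max_batch_size: 8
    input [
      {
        name: "input"
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
      }
    ]
    output [
      {
        name: "prob"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]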

WEBINAR REGISTRATION

THANK YOU FOR REGISTERING FOR THE WEBINAR



You will shortly receive an email with instructions on how to join the webinar.

Resources

DGX Station Datasheet

Get a quick low-down and technical specs for the DGX Station.

DGX Station Whitepaper

Dive deeper into the DGX Station and learn more about the architecture, NVLink, frameworks, tools and more.


Speaker

Maggie Zhang

Deep Learning Technical Marketing Engineer, NVIDIA

Maggie Zhang joined NVIDIA in 2017 and works on deep learning frameworks. She received her PhD in Computer Science & Engineering from the University of New South Wales in 2013. Her background includes GPU/CPU heterogeneous computing, compiler optimization, computer architecture, and deep learning.


Register

Webinar: Launch your deep learning model in production with the NVIDIA TensorRT Inference Server

Date & Time: Tuesday, September 10, 2019, 11:00am – 12:00pm PT