NVIDIA WEBINAR
You’ve developed your algorithm, trained your deep learning model, and optimized it for the best performance possible. What’s next?
Join this third webinar in our inference series to learn how to launch your deep learning model in production with the NVIDIA® TensorRT™ Inference Server. The TensorRT Inference Server enables teams to deploy trained AI models from any framework, on any infrastructure, whether GPU- or CPU-based.
Maggie Zhang, technical marketing engineer, will introduce the TensorRT™ Inference Server and its many features and use cases. Then she’ll walk you through how to load your model into the inference server, configure the server for deployment, set up the client, and launch the service in production.
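To give a flavor of the first step (loading a model), here is a minimal sketch of a model repository entry in the layout the TensorRT Inference Server expects. The model name, framework platform, tensor names, shapes, and batch size below are hypothetical placeholders, not a prescribed configuration:

    model_repository/
      resnet50_plan/          # hypothetical model name
        config.pbtxt          # per-model configuration
        1/                    # version directory
          model.plan          # serialized model (here, a TensorRT engine)

    # config.pbtxt (sketch; all names, dims, and sizes are assumptions)
    name: "resnet50_plan"
    platform: "tensorrt_plan"
    max_batch_size: 8
    input [
      {
        name: "input"
        data_type: TYPE_FP32
        dims: [ 3, 224, 224 ]
      }
    ]
    output [
      {
        name: "prob"
        data_type: TYPE_FP32
        dims: [ 1000 ]
      }
    ]

The server reads models from this repository and serves every model it finds; the webinar walks through how such a configuration maps onto your own model.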
By attending this webinar, you'll learn how to:
- Load a trained model from any framework into the inference server's model repository
- Configure the server for deployment on GPU or CPU infrastructure
- Set up a client to send inference requests (a minimal sketch follows this list)
- Launch the service in production
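As a taste of the client-side step, here is a minimal Python sketch that checks a running server before sending inference requests. The host, the port (8000 is the server's default HTTP port), and the /api/status endpoint reflect this generation of the inference server and should be treated as assumptions to verify against your own deployment:

    # Minimal deployment check (sketch). Assumes the inference server's
    # default HTTP port and status endpoint; adjust for your environment.
    import requests

    SERVER_URL = "http://localhost:8000"  # hypothetical host and port

    # The status endpoint reports the state of every model loaded from
    # the model repository, so it doubles as a readiness check.
    response = requests.get(SERVER_URL + "/api/status")
    response.raise_for_status()
    print(response.text)  # human-readable status for all served models

Once the server reports your model as ready, a client can submit inference requests over HTTP or gRPC, which is the flow the walkthrough covers.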
Maggie Zhang joined NVIDIA in 2017 and works on deep learning frameworks. She received her PhD in Computer Science & Engineering from the University of New South Wales in 2013. Her background includes GPU/CPU heterogeneous computing, compiler optimization, computer architecture, and deep learning.
Date & Time: Wednesday, April 22, 2018