In this webinar we will explore common challenges in deep learning deployment and how they can be addressed with NVIDIA TensorRT. Through an example, we will review a typical workflow for taking a trained deep neural network to production while meeting throughput, latency, and energy-efficiency requirements.
- Overview of deep learning deployment and its associated challenges
- A deep dive into deep learning deployment workflow with TensorRT
- Overview of the TensorRT optimizer and runtime engine
- TensorRT performance benefits over traditional CPU and GPU deployment approaches
- A live demonstration of deploying a TensorFlow model into an application running TensorRT