WEBINAR
Optimize Your Model Deployment with Triton Inference Server
One of the main challenges AI startups face is deploying their machine learning and deep learning models efficiently. In this webinar, we will explore how NVIDIA Triton Inference Server simplifies and accelerates inference on GPUs. Join the webinar to learn how startups can optimize their inference pipeline and scale AI model deployment with ease.
Alexander Martynau is the EMEA Community Manager for the NVIDIA Inception program. With a background in sales and communication and a passion for accelerated computing, he has been helping startups adopt NVIDIA platform solutions for the last four years. Recently focusing on the rapidly growing MENA startup ecosystem, Alexander identifies future AI champions and helps them scale using cutting-edge NVIDIA technology.
Dr. Adam Grzywaczewski is a senior deep learning data scientist at NVIDIA, where his primary responsibility is to support customers in scaling and deploying their deep learning workloads. Over the last four years, Adam has specialized in large-scale training, focusing not only on deep learning system and software design but also on algorithms that enable large-batch data-, model-, and pipeline-parallel training. Adam works with customers with high computational needs, including key automotive customers and organizations requiring large-scale NLP and conversational AI. He also focuses on techniques for computationally efficient inference, including parameter- and compute-efficient model design, model pruning, and quantization.
Ulrik taught himself to code and sold his first website, built on the LAMP stack, at the age of 14. He started his career managing a multi-million-dollar franchise on the emerging markets rates & FX team at J.P. Morgan. Ulrik holds an M.S. in Computer Science from Imperial College London.
Date & Time: Wednesday, April 22, 2018