WEBINAR
AI is driving innovation across businesses of every size and scale. NVIDIA Triton™ Inference Server simplifies the deployment of deep learning and machine learning models at scale in production, letting you serve trained models from any major framework on GPUs and CPUs with high performance. Learn why Triton is the top choice for AI inference.
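If you would like to try Triton ahead of the session, the snippet below is a minimal sketch of sending one inference request to a running Triton server with the Python tritonclient HTTP API. The server address, the model name "resnet50", and the tensor names and shapes are illustrative assumptions, not details covered in this webinar.

```python
# Minimal sketch: query a running Triton server over HTTP.
# Assumes Triton is serving a model named "resnet50" whose config declares
# an FP32 input "input" of shape [3, 224, 224] and an output "output".
import numpy as np
import tritonclient.http as httpclient

# Connect to Triton's HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a batch-of-one input tensor filled with random data.
data = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

# Ask for the model's output tensor and run inference.
outputs = [httpclient.InferRequestedOutput("output")]
result = client.infer(model_name="resnet50", inputs=[infer_input], outputs=outputs)

print(result.as_numpy("output").shape)
```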
Arun Raman is an enterprise solution architect at NVIDIA, where he focuses on helping customers address deep learning infrastructure, MLOps, and machine learning challenges on GPUs. His interests include inference at scale, NLP, recommendation engines, and accelerated ETL. Arun enjoys learning new technologies, as reflected in a career path that has taken him from building network protocols to working on deep learning. He holds a master's degree in electrical engineering from The University of Texas at Dallas.
Mohit Ayani is a solution architect on NVIDIA's CSP team, where he helps customers accelerate their AI workloads using NVIDIA products. His interest lies in solving business challenges by deploying optimized AI models from the computer vision and NLP domains. Mohit holds a doctorate in geophysics from the University of Wyoming.
Date & Time: Wednesday, April 22, 2018