NVIDIA WEBINAR
MPI, the Message Passing Interface, is a standard API for communicating data via messages between distributed processes, and it is commonly used in HPC to build applications that scale to multi-node computer clusters. MPI is fully compatible with CUDA, which is designed for parallel computing on a single computer or node; the two are complementary, with MPI handling communication across nodes and CUDA handling parallelism within each node.
CUDA-aware MPI makes it easier to combine MPI and CUDA: to solve problems whose data is too large to fit into the memory of a single GPU, or that would require an unreasonably long compute time on a single node; to accelerate an existing MPI application with GPUs; or to scale an existing single-node multi-GPU application across multiple nodes.
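The practical convenience of CUDA-aware MPI is that device pointers can be passed directly to MPI calls, with no manual staging through host memory. The following is a minimal sketch, assuming an MPI library built with CUDA support (for example, Open MPI configured with `--with-cuda`); the buffer name and size are illustrative:

```c
// Sketch: exchanging GPU-resident data between two ranks with
// CUDA-aware MPI. Requires an MPI build with CUDA support; run
// with e.g. "mpirun -np 2 ./a.out".
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1 << 20;
    float *d_buf;                              // device buffer
    cudaMalloc((void **)&d_buf, n * sizeof(float));

    if (rank == 0) {
        // The device pointer goes straight to MPI_Send: no
        // cudaMemcpy to a host staging buffer is needed.
        MPI_Send(d_buf, n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With a non-CUDA-aware MPI, the same exchange would need an explicit cudaMemcpy into a host buffer before MPI_Send and another copy back to the device after MPI_Recv.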
This webinar will give you an overview of OpenACC and CUDA-aware MPI.

Sayak works as an HPC Solutions Architect for NVIDIA. He possesses extensive hands-on experience in developing, porting, optimizing, and benchmarking HPC applications, with a specialization in parallel programming techniques. In previous roles, he worked extensively on developing applications for big data and embedded systems, combined with domain knowledge in finance, telecom, radar, and image and video processing. He holds a master’s degree from Manipal Institute of Technology, Manipal.
Date & Time: Wednesday, April 22, 2018