A revolution is under way across the GPU software stack, driven by the disruptive performance gains NVIDIA GPUs have delivered generation after generation. The modern field of deep learning would not have been possible without GPUs, and the MapD Core database often sees two or more orders of magnitude in performance gains compared to CPU-based systems. Yet for all of the innovation occurring in the GPU software ecosystem, the systems and platforms themselves remain isolated from one another. Even though the individual components are significantly accelerated by running on the GPU, they must communicate with each other over the relatively thin straw of the PCIe bus and then through CPU memory.
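To see why that PCIe hop matters, a rough back-of-the-envelope comparison helps. The bandwidth figures below are ballpark assumptions chosen for illustration (roughly PCIe 3.0 x16 effective throughput versus HBM2-class on-device memory bandwidth), not numbers from the webinar:

```python
# Rough illustration of why staging data through PCIe and CPU memory
# is a bottleneck. Bandwidths are ballpark assumptions, not measurements.
PCIE_GBPS = 12.0       # assumed effective PCIe 3.0 x16 throughput, GB/s
GPU_MEM_GBPS = 700.0   # assumed on-device (HBM2-class) memory bandwidth, GB/s

def transfer_seconds(gigabytes, gbps):
    """Time to move `gigabytes` of data at `gbps` gigabytes per second."""
    return gigabytes / gbps

data_gb = 10.0  # e.g., a 10 GB query result handed from the database to an ML step

over_pcie = transfer_seconds(data_gb, PCIE_GBPS)     # leave the GPU and come back
on_device = transfer_seconds(data_gb, GPU_MEM_GBPS)  # stay in GPU memory

print(f"via PCIe/CPU: {over_pcie:.3f} s, "
      f"staying on GPU: {on_device:.3f} s, "
      f"ratio ~{over_pcie / on_device:.0f}x")
```

Under these assumptions, handing the same data between GPU-resident processes without a round trip through host memory is tens of times faster, which is exactly the gap that efficient intra-GPU communication aims to close.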
Watch the replay to hear Joshua Patterson, Director of Applied Solutions Engineering at NVIDIA, and Bill Maimone, VP of Engineering at MapD, discuss:
- The importance of the open source community in enabling efficient intra-GPU communication between different processes running on the GPUs.
- The integration that will allow developers to build new functions to cluster or analyze query results, along with seamless workflows that combine data processing, machine learning (ML), and visualization.