NVIDIA Developer Day for
Lloyds Banking Group
Thursday, February 29, 2024, 9:30 a.m. - 3:00 p.m. GMT
Thursday, February 29, 2024, 9:30 a.m. GMT
Welcome to the NVIDIA Developer Day for Lloyds Banking Group!
During this event, NVIDIA experts will present sessions on topics selected together with the Chief Technology Office specifically for your benefit.
We will begin the day with opening remarks and an event overview by Kelly Major, Account Director at NVIDIA.
Kelly is NVIDIA's Client Partner for Lloyds Banking Group, with 25 years of experience working with financial services clients on digital transformation, data, and AI.
Thursday, February 29, 2024, 9:45 a.m. GMT
Discover how generative AI enables businesses to develop better products and services and deliver original content tailored to the unique needs of customers and audiences. Generative models are accelerating the development of applications for many use cases, including question answering, summarization, textual entailment, and the generation of 2D and 3D images and audio, among others. This session will provide an overview of the major developments in generative AI, where the technology currently stands, and what it could be capable of in the coming years. We'll discuss technical details and popular use cases driving next-generation generative applications, as well as how businesses can responsibly take advantage of the technology.
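As a flavour of what these models do, here is a minimal, illustrative text-generation sketch in Python using the open-source Hugging Face transformers library and a small public checkpoint. It is not part of the session materials, and the prompt is purely hypothetical.

    import torch
    from transformers import pipeline

    # A small public model stands in for the much larger generative
    # models discussed in the session.
    generator = pipeline(
        "text-generation",
        model="gpt2",
        device=0 if torch.cuda.is_available() else -1,
    )

    # Hypothetical prompt for illustration.
    prompt = "Generative AI can help banks serve customers by"
    print(generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])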
Adam Grzywaczewski is a deep learning solution architect at NVIDIA, where his primary responsibility is to support a wide range of customers in the delivery of their deep learning solutions. Adam is an applied research scientist specializing in machine learning, with a background in deep learning and system architecture. Previously, he built up the UK government's machine learning capabilities while at Capgemini and worked in the Jaguar Land Rover Research Centre, where he was responsible for a variety of internal and external projects and contributed to the self-learning car portfolio.
Thursday, February 29, 2024, 11:00 a.m. GMT
Conversational AI technologies are becoming ubiquitous, with countless products taking advantage of automatic speech recognition, natural language understanding, and speech synthesis coming to market. Thanks to new tools and technologies, developing conversational AI applications is easier than ever, enabling a much broader range of applications, such as virtual assistants, real-time transcription, and many more.
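For illustration only, the short Python sketch below transcribes an audio file with the open-source Hugging Face transformers library and an open Whisper checkpoint. It is a stand-in for a production speech recognition service rather than the stack covered in this session, and "meeting.wav" is a hypothetical file.

    import torch
    from transformers import pipeline

    # Open Whisper checkpoint as a stand-in for a production ASR service.
    asr = pipeline(
        "automatic-speech-recognition",
        model="openai/whisper-small",
        device=0 if torch.cuda.is_available() else -1,
    )

    # "meeting.wav" is a hypothetical local audio file.
    print(asr("meeting.wav")["text"])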
Oleg Sudakov is a deep learning solutions architect at NVIDIA, where he concentrates on large language model training and conversational AI. Previously, he worked as a machine learning engineer and data scientist at Apple, Huawei, and Rakuten. He has a strong background in natural language processing and speech processing. Oleg is based in Germany.
Thursday, February 29, 2024, 2:00 p.m. GMT
In this session, we will give an in-depth presentation of Triton and its main components. We will also demonstrate how to quickly get started with Triton and use it in real-world applications on-premises, in the cloud, or in a mixed environment. In addition, we will provide you with the scripts and code to jumpstart your Triton expertise.
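As a rough sketch of what getting started looks like from the client side, the Python snippet below sends a single inference request over HTTP using the tritonclient package. The model name and tensor names ("my_model", "INPUT0", "OUTPUT0") are hypothetical and must match the model configuration on your Triton server.

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to a Triton server running locally on the default HTTP port.
    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Hypothetical model and tensor names; they must match config.pbtxt.
    data = np.random.rand(1, 16).astype(np.float32)
    inputs = [httpclient.InferInput("INPUT0", list(data.shape), "FP32")]
    inputs[0].set_data_from_numpy(data)

    result = client.infer(model_name="my_model", inputs=inputs)
    print(result.as_numpy("OUTPUT0"))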
TensorRT is an optimizing compiler for neural network inference. NVIDIA TensorRT-based applications perform up to 36X faster than CPU-only platforms during inference, enabling you to optimize neural network models trained in all major frameworks, calibrate for lower precision with high accuracy, and deploy to hyperscale data centers, embedded platforms, or automotive product platforms.
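To make the optimization step concrete, here is a minimal sketch of building a TensorRT engine from an ONNX file with the TensorRT Python API. "model.onnx" and "model.plan" are hypothetical paths, and the exact network-creation flags vary by TensorRT version.

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)

    # Explicit-batch network, as required by the ONNX parser.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    )
    parser = trt.OnnxParser(network, logger)
    if not parser.parse_from_file("model.onnx"):  # hypothetical exported model
        raise RuntimeError("Failed to parse ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # build at lower precision

    engine = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:  # hypothetical output path
        f.write(engine)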
Sergio Perez is a solution architect at NVIDIA specialised in conversational AI. He has experience in optimising the training and inference of LLMs, building retrieval-augmented generation systems, and working with companies in different sectors to leverage AI. His area of expertise at NVIDIA is quantisation and inference serving. Previously, he worked as an AI engineer at Graphcore and Amazon, and he holds a PhD in computational fluid dynamics from Imperial College London.
Thursday, February 29, 2024, 1:00 p.m. GMT
Data science empowers enterprises to analyze and optimize business processes, supply chains, and digital experiences. However, data preparation and machine learning tasks often consume a significant amount of time. In time-sensitive scenarios, such as predicting demand for perishable goods, speeding up data science workflows becomes a crucial advantage. With RAPIDS, a collection of open-source GPU libraries, data scientists can accelerate their Python toolchain, achieve higher productivity, and improve model accuracy with minimal code modifications.
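As a small illustration of the "minimal code modifications" point, the sketch below runs a familiar pandas-style groupby on the GPU with RAPIDS cuDF; the sales figures are made up for the example.

    import cudf

    # Hypothetical perishable-goods sales data.
    df = cudf.DataFrame({
        "store": ["A", "A", "B", "B"],
        "units": [120, 95, 80, 132],
    })

    # The same groupby/aggregate code a pandas user would write,
    # executed on the GPU with no other changes.
    print(df.groupby("store")["units"].mean())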
Miguel Martínez is a senior deep learning data scientist at NVIDIA, where he concentrates on large language models, recommender systems, and data engineering pipelines with RAPIDS. Previously, he mentored students at Udacity's Artificial Intelligence Nanodegree. He has a strong background in financial services, mainly focused on payments and channels.