Join us for an exciting and interactive day delving into cutting-edge techniques in large language model (LLM) application development. LLM Day will offer hands-on, practical guidance from LLM practitioners, who will share their insights and best practices for getting started with and advancing LLM application development.
The sessions will be recorded and made available on-demand.

The Fast Path to Developing with LLMs

On Demand


 

Summary


Learn practical methods for designing and implementing LLM-powered systems on real-world business data using popular, ready-to-go LLM APIs—no specialized hardware, model training, or tricky deployment required. We'll show techniques for engineering effective inputs to the models (“prompts”) and how to combine LLMs with other systems, including business databases, using toolkits and workflow frameworks like LangChain. Join us and learn how to build LLM systems that generate tangible business results.
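As a rough flavor of the pattern this session covers, here is a minimal sketch that feeds rows from a business database into a prompt template and asks a hosted LLM to summarize them. It assumes the langchain-core and langchain-openai packages and an OpenAI-compatible API key; the model name, database file, and table schema are placeholders, not the session's own materials.

```python
# Minimal sketch: combine a hosted LLM with a business database via LangChain.
# Assumes `pip install langchain-core langchain-openai` and an OPENAI_API_KEY;
# the model name and the `orders` table are illustrative placeholders.
import sqlite3

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Pull a small slice of business data (here, a local SQLite table).
conn = sqlite3.connect("sales.db")
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
).fetchall()

# Engineer the prompt: clear instructions plus the retrieved data as context.
prompt = ChatPromptTemplate.from_template(
    "You are a business analyst. Given these (region, revenue) rows:\n"
    "{rows}\n"
    "Summarize the top regions and any notable gaps in two sentences."
)

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # any chat-completions endpoint works
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"rows": rows}))
```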


Speaker



David Taubenheim
Senior Solutions Architect
NVIDIA


David Taubenheim is a Senior Solutions Engineer in NVIDIA's Developer Programs organization, driving AI approaches to business challenges. Prior to his work at NVIDIA, David directed a program area of engineers, scientists, and technical managers supporting US Government sponsors through specialized technology programs and emergent initiatives at the Johns Hopkins University Applied Physics Laboratory (APL). Earlier, David's work at Motorola focused on the design of bespoke digital signal processing systems using custom software-defined radio platforms, earning him the company's Distinguished Innovator Award. In 2018, David received the Engineer of the Year Award from the National Organization of Gay and Lesbian Scientists and Technical Professionals for his contributions to national security technologies. He holds a Master of Science in Electrical Engineering and has been granted 17 patents for a variety of technical inventions.


Tailoring LLMs to Your Use Case

On Demand



Summary


Push LLMs beyond the quality limits of off-the-shelf models and APIs by customizing them for domain-specific applications. We'll discuss strategies for preparing datasets and showcase gains from different forms of customization using practical, real-world examples. Join us and learn about model tuning techniques applicable for both API-based and self-managed LLMs.
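The session discusses several customization strategies; as one common, concrete example (not necessarily the exact tooling used in the talk), the sketch below attaches LoRA adapters to a small open model with Hugging Face PEFT so that only a small fraction of weights is trained on a domain-specific dataset. The base model name is just a small example checkpoint.

```python
# Illustrative sketch of one customization technique: LoRA adapters via
# Hugging Face PEFT. The base checkpoint is a placeholder; swap in your own.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "EleutherAI/pythia-160m"  # small example model for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters so only a small fraction of parameters is trained
# on the domain-specific dataset you prepared.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projections for this architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of weights

# From here, train with your usual loop or transformers.Trainer on the
# prepared domain dataset, then merge or serve the adapters.
```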


Speaker



Christopher Pang
Senior Solutions Engineer
NVIDIA


Chris Pang is a Senior Solutions Engineer in NVIDIA's Developer Programs organization. Before joining NVIDIA, Chris worked in baseball analytics. He was a member of the front offices of the New York Mets and New York Yankees, where he split time between working on statistical models for player evaluation and full-stack development of internal web applications. Chris studied Applied Physics at Yale University.


Large Language Models and Generative AI for Life Sciences

On Demand



Summary

In this session, we'll explore foundational AI models in biology, along with practical protein engineering and design applications supported by real-world examples. We'll discuss recent breakthroughs in biology and show how you can use LLMs to predict protein structure and function and to encode protein data computationally. Attendees will learn how to use NVIDIA BioNeMo, a generative AI platform for drug discovery, to simplify and accelerate training models on their own data and to deploy those models easily and at scale for drug discovery applications.
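To give a sense of what "encoding protein data computationally" looks like in practice, here is a generic sketch using a small public protein language model (ESM-2) via Hugging Face Transformers. This is not a BioNeMo example—BioNeMo provides its own managed workflows—just an illustration of turning an amino-acid sequence into an embedding that downstream structure or function models can consume.

```python
# Generic sketch: embed a protein sequence with a small public protein
# language model (ESM-2). Not BioNeMo-specific; for illustration only.
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "facebook/esm2_t6_8M_UR50D"  # small checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy amino-acid sequence
inputs = tokenizer(sequence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool per-residue embeddings into one vector describing the protein;
# such embeddings feed structure- and function-prediction models downstream.
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)  # (1, hidden_size)
```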


Speakers



Chris Dallago
Senior Solutions Architect
NVIDIA


Chris is a computer scientist turned bioinformatician with a passion for systematically modeling biological mechanisms through machine learning. That goal led him to contribute to and advance the community working on learned protein sequence representations, in search of new, principled ways to describe biological entities. Bio-sequence representation learning, for instance through transformer models, is now an established field in bioinformatics, with flourishing frameworks (including from NVIDIA) and impactful research applications such as predicting a protein's 3D structure from its sequence alone. Chris remains focused on problems where data and intuition are still scarce, such as designing new proteins for therapeutic or industrial use.




Dr. Chelsea Sumner, PharmD, RPh
NALA Healthcare AI Startups Lead
NVIDIA


Chelsea Sumner leads healthcare AI startups in North and Latin America at NVIDIA, managing more than 450 healthcare startups in the digital health, medical instrument, medical imaging, genomics, and drug discovery segments. Before this, she was a brand manager for Eli Lilly and Company and worked in a variety of clinical pharmacy settings, including CVS, for over six years. Chelsea graduated from the University of North Carolina at Chapel Hill Eshelman School of Pharmacy with a Doctor of Pharmacy degree and holds a bachelor’s in clinical research from Campbell University in North Carolina. In her free time, she enjoys trying new food when traveling, learning new exercises when working out, and reading.


Running Your Own LLM

On Demand



Summary


Optimizing and deploying LLMs on self-managed hardware, whether in the cloud or on premises, can produce tangible efficiency, data governance, and cost improvements for organizations operating at scale. We'll discuss open, commercially licensed LLMs that run on commonly available hardware and show how to use optimizers to get both lower-latency and higher-throughput inference to reduce compute needs. Join us and learn how to scale up self-managed LLMs to accommodate unique business and application requirements.
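For a taste of what self-managed, optimized inference can look like, the sketch below serves an open, commercially licensed model with the vLLM engine, which batches requests for higher throughput. vLLM is offered here only as one example of an inference optimizer (the session's own tooling may differ), and the sketch assumes a CUDA-capable GPU and `pip install vllm`.

```python
# One way to run an open, commercially licensed LLM yourself with an
# optimized inference engine (vLLM, as an example). Assumes a CUDA GPU.
from vllm import LLM, SamplingParams

# Continuous batching and paged attention yield higher throughput and lower
# latency than naive per-request generation.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # Apache-2.0-licensed model
params = SamplingParams(temperature=0.2, max_tokens=128)

prompts = [
    "Summarize our Q3 support tickets in one sentence.",
    "Draft a polite reply to a customer asking about data residency.",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())
```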


Speaker



Emily Apsey
Senior Technical Marketing Engineering Manager
NVIDIA



Emily Apsey manages a team of NVIDIA technical marketing engineers. She works closely with both internal and external stakeholders on initiatives spanning marketing, engineering, and business development. Emily is just as happy talking to and teaching others as she is heads-down leading and using NVIDIA AI-Ready Enterprise technology.


Reinventing the Complete Cybersecurity Stack With AI Language Models

On Demand



Summary

Cybersecurity is a data problem, and one of the most effective ways of contextualizing data is via natural language. With the advancement of LLMs and accelerated compute capabilities, we can represent security data in ways that expand our detection and data generation techniques. We'll discuss advancements in LLMs, including how to apply them throughout the cybersecurity stack, from copilots to synthetic data generation.
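As a toy illustration of the synthetic data generation idea (not the specific tooling discussed in the session), the sketch below prompts a chat-completions API for realistic-but-fake authentication logs that could augment a detection training set. The model name is a placeholder, and any OpenAI-compatible endpoint would do.

```python
# Toy sketch of LLM-driven synthetic data generation for cybersecurity:
# ask a chat-completions API for fake sshd logs to augment detection data.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Generate 5 synthetic Linux sshd log lines showing a brute-force attempt "
    "from a single IP, followed by 5 benign successful logins. "
    "Use fake usernames and RFC 5737 example IP addresses only."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,
)
print(response.choices[0].message.content)
```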


Speaker



Bartley Richardson, PhD
Director, Cybersecurity Engineering
NVIDIA


Bartley Richardson is director of cybersecurity engineering at NVIDIA and leads a cross-discipline team researching GPU-accelerated machine learning and deep learning techniques and creating new frameworks for cybersecurity. His interests include natural language processing and sequence-based methods applied to cyber network datasets and threat detection. Bartley holds a Ph.D. in computer science and engineering, with dissertation work on loosely structured and unstructured logical query optimization, and a B.S. in computer engineering with a focus on software design and AI.


Technical Ask-the-Experts

On Demand



Summary

In this session, we'll answer any additional questions that attendees may have, beyond those discussed during the sessions.


Speakers



Ozzy Johnson
Director, Solutions Engineering
NVIDIA


Ozzy Johnson is the Director of Solutions Engineering for NVIDIA Developer Programs, where he oversees a team providing technical guidance to NVIDIA Inception startups. With more than two decades of experience in technology, Ozzy has held senior positions across the public and private sectors, as well as at startups.




Adriana Flores, PhD
Senior Manager, NVIDIA AI Solution Architecture
NVIDIA


Dr. Adriana Flores leads NVIDIA's AI and GenAI Solution Architecture team. Leveraging her extensive knowledge of NVIDIA AI technologies, she empowers clients and partners to navigate the dynamic world of AI solutions. With a diverse background in AI and HPC spanning roles at NVIDIA and Hewlett Packard Labs, Adriana plays a pivotal role in making AI accessible. Beyond her corporate achievements, her academic contributions include notable research in wireless communication systems and HPC interconnect networks. She holds a Ph.D. and an M.S. in Electrical Engineering from Rice University, as well as a B.S. from ITESM in Monterrey, Mexico. Join Adriana in enabling AI in the enterprise through practical insights and collaboration.


EMEA - Technical Ask-the-Experts

On Demand



Summary

In this session, we'll answer any additional questions that attendees may have, beyond those discussed during the sessions and the first Ask-the-Experts session.


Speakers



Ekaterina Sirazitdinova
Senior Deep Learning Data Scientist
NVIDIA


Ekaterina specializes in end-to-end AI productization, encompassing the entire process from development to optimized deployment of discriminative and generative AI solutions. Formerly a research engineer in medical image analysis, she holds a Ph.D. in Computer Science and has authored peer-reviewed publications in image-based 3D reconstruction, localization and tracking.




Miguel Martinez
Senior Deep Learning Data Scientist
NVIDIA


Miguel is a senior deep learning data scientist at NVIDIA, where he concentrates on RAPIDS and Merlin. Previously, he mentored students at Udacity's Artificial Intelligence Nanodegree. He has a strong background in financial services, mainly focused on payments and channels. As a constant and steadfast learner, Miguel is always up for new challenges.




Ross Verrall
Enterprise Services Lead EMEA
NVIDIA


Ross is responsible for NVIDIA's Enterprise Services business across EMEA, based in the UK.


Select one or more of the following sessions and complete registration. Click any session listing to view its details.
