AI & Deep Learning

Introduction

Date: June 18, 2020
Time: 9:00am – 10:00am PT
Duration: 1 hour


Artificial intelligence (AI) and machine learning (ML) deliver major benefits: they increase productivity while decreasing costs, reduce waste, improve efficiency, and foster innovation in outdated business models. But they also carry real potential for errors that produce unintended, biased results, and companies often discover AI and ML performance issues only after the damage has been done.


Join us for this live webinar featuring NVIDIA Inception member Fiddler.ai to learn the basics of Explainable AI (XAI) and why explainable monitoring is critical to successful AI.



By attending this webinar, you'll learn:
  • Five key operational challenges for AI and ML models: model decay, data drift, data-integrity issues, outliers, and bias
  • How explainable ML monitoring overcomes these challenges
  • How to address bias and fairness issues with transparency and insight through continuous monitoring
Join us after the presentation for a live Q&A session.
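To give a flavor of one challenge listed above: data drift in a single feature is often flagged with a summary statistic such as the Population Stability Index (PSI). The sketch below is a generic NumPy illustration, not Fiddler's product or the webinar's material; the synthetic data, the bin count, and the 0.2 threshold (a common rule of thumb) are all assumptions made for the demo.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a reference sample (e.g. training data) and a live sample.

    Note: values of `actual` falling outside the reference bin edges are
    dropped by np.histogram; acceptable for a sketch, not for production.
    """
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor proportions at a small epsilon to avoid log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # feature distribution at training time
live = rng.normal(1.0, 1.0, 10_000)   # same feature in production, shifted
# A common rule of thumb treats PSI above 0.2 as significant drift.
print(population_stability_index(train, live))
```

In a monitoring system this check would run continuously per feature, with the flagged features then explained via attribution methods like the one discussed in the webinar.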

WEBINAR REGISTRATION



DGX Station Datasheet

Get a quick overview and the technical specifications for the DGX Station.
DGX Station Whitepaper

Dive deeper into the DGX Station architecture, NVLink, frameworks, tools, and more.


Speaker

Dr. Ankur Taly

Head of Data Science, Fiddler Labs

Ankur Taly is the Head of Data Science at Fiddler Labs, where he is responsible for developing, productizing, and evangelizing the company's core explainable AI technology. Previously, he was a Staff Research Scientist at Google Brain, where he carried out research in explainable AI and is best known for co-developing and applying Integrated Gradients, an interpretability algorithm for deep networks.
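For readers unfamiliar with Integrated Gradients mentioned in the bio: it attributes a model's prediction to its input features by integrating gradients along a straight-line path from a baseline input to the actual input. The following is a minimal NumPy sketch, not Fiddler's or Google's implementation; the toy logistic model, its weights, the baseline, and the step count are all invented for illustration.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=100):
    """Approximate Integrated Gradients with a midpoint Riemann sum
    along the straight-line path from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps           # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)  # (steps, n_features)
    grads = np.array([grad_fn(p) for p in path])        # gradient at each step
    return (x - baseline) * grads.mean(axis=0)

# Toy model: f(x) = sigmoid(w . x), with analytic gradient s * (1 - s) * w.
w = np.array([2.0, -1.0, 0.5])

def model(x):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def model_grad(x):
    s = model(x)
    return s * (1.0 - s) * w

x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(model_grad, x, baseline)
# Completeness axiom: attributions sum to f(x) - f(baseline).
print(attr, attr.sum(), model(x) - model(baseline))
```

The completeness property checked in the last line is what makes the method attractive for monitoring: every unit of change in the prediction is accounted for by some feature.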


Register
