• Home
  • Fresh Content
  • Courses
  • Resources
  • Podcast
  • Talks
  • Publications
  • Sponsorship
  • Testimonials
  • Contact

Jon Krohn


How to Catch and Fix Harmful Generative A.I. Output

Added on June 23, 2023 by Jon Krohn.

Today, the A.I. entrepreneur Krishna Gade joins me to detail open-source solutions for overcoming the safety and security issues associated with generative A.I. systems, such as those powered by Large Language Models (LLMs).

The remarkably well-spoken Krishna:
• Is Co-Founder and CEO of Fiddler AI, an observability platform that has raised over $45M in venture capital to build trust in A.I. systems.
• Previously worked as an engineering manager on Facebook’s News Feed, as Head of Data Engineering at Pinterest, and as a software engineer at both Twitter and Microsoft.
• Holds a Master’s in Computer Science from the University of Minnesota.

In this episode, Krishna details:
• How the LLMs that enable Generative A.I. are prone to making inaccurate statements, can be biased against protected groups, and are susceptible to exposing private data.
• How these undesirable and even harmful LLM outputs can be identified and remedied with open-source solutions like the Fiddler Auditor that his team has built.
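The core technique behind auditing tools like the Fiddler Auditor is robustness testing: perturb a prompt (for example, by paraphrasing it), re-query the model, and flag responses that diverge from the baseline answer. Below is a minimal, self-contained sketch of that idea; it uses a deterministic mock model and a simple token-overlap similarity, and all function names are illustrative assumptions rather than Fiddler's actual API.

```python
# Sketch of perturbation-based robustness auditing for LLM outputs.
# `mock_llm`, `audit_consistency`, and the similarity metric are
# illustrative stand-ins, not the Fiddler Auditor's real interface.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-set overlap between two responses (0.0 to 1.0)."""
    sa = {t.strip(".,!?") for t in a.lower().split()}
    sb = {t.strip(".,!?") for t in b.lower().split()}
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def audit_consistency(llm, prompt: str, paraphrases: list[str],
                      threshold: float = 0.5) -> list[dict]:
    """Flag paraphrased prompts whose responses diverge from the baseline."""
    baseline = llm(prompt)
    report = []
    for p in paraphrases:
        response = llm(p)
        score = jaccard_similarity(baseline, response)
        report.append({"prompt": p,
                       "similarity": score,
                       "flagged": score < threshold})
    return report

def mock_llm(prompt: str) -> str:
    # Deterministic stand-in; a real audit would call an actual model.
    canned = {
        "What is the capital of France?": "The capital of France is Paris.",
        "Which city is France's capital?": "Paris is the capital of France.",
        "Name the capital city of France.": "France has many beautiful cities.",
    }
    return canned.get(prompt, "I am not sure.")

report = audit_consistency(
    mock_llm,
    "What is the capital of France?",
    ["Which city is France's capital?", "Name the capital city of France."],
)
for row in report:
    print(f"{row['prompt']!r} flagged={row['flagged']}")
```

In this toy run, the first paraphrase yields a consistent answer and passes, while the second produces a divergent response and is flagged; a production auditor would swap in a real model call and a semantic similarity metric such as embedding cosine distance.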


The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.

Categories: Data Science, Interview, Podcast, SuperDataScience, YouTube. Tags: generativeai, AI, data engineering, LLMs.