Jon Krohn


Minimizing Unwanted Biases (e.g., by Gender, Ethnicity) within ML Models

Added on August 20, 2020 by Jon Krohn.

Here's a new blog post I wrote on how my team eliminates unwanted biases (e.g., by gender, ethnicity) from algorithms we've deployed in the recruitment sector.

Devising algorithms that stamp out unwanted biases without skimping on accuracy or performance adds time and effort to the machine learning model-design process. When algorithms can have a considerable social impact, as ours do in the human-resources space at GQR Global Markets, investing this time and effort is essential to ensuring equitable treatment of all people.
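The post does not detail the team's specific techniques, but a common first step in auditing a deployed model for the kinds of bias described above is to measure the gap in positive-prediction rates across demographic groups (the demographic parity difference). The sketch below is purely illustrative, with hypothetical predictions and group labels; it is not GQR's actual method.

```python
# Hedged sketch: quantify group-level bias via demographic parity difference,
# i.e. the gap between the highest and lowest positive-prediction rates
# across groups. 0.0 means perfect parity on this metric.

def selection_rate(predictions, groups, group):
    """Fraction of members of `group` receiving a positive prediction (1)."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Max gap in selection rates across all groups present in `groups`."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = recommended for interview) and group labels.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, grps)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A large gap on a metric like this flags the model for further investigation; closing it without degrading accuracy is the time-consuming part the paragraph above describes.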
