Workshop

Algorithmic fairness in language models

Carl Brenssell

Co-founder, Spoke.ai

Carl is Co-Founder and heads the Data team at Spoke.ai, a B2B software company that smartly labels and summarises user-submitted work updates. With an international academic background and first-hand experience of the alignment challenges at distributed organisations like WeWork, he is passionate about applying AI to ship meaningful data products and deliver real value to individuals and their teams.

Session description

Measuring bias is an important step towards understanding and addressing unfairness in NLP and ML models. This can be done with fairness metrics, which quantify differences in a model's behaviour across demographic groups. In this workshop, we will introduce you to these metrics, along with general practices for promoting algorithmic fairness.
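
For a flavour of what such metrics look like in practice, below is a minimal sketch of a simple projection-based bias score for word embeddings. It is illustrative only, not the workshop's notebook code: it assumes gensim's downloadable GloVe vectors, and the anchor and probe words are hypothetical choices. Positive scores indicate a lean towards "she", negative towards "he".

```python
# Minimal sketch: a projection-based bias score for word embeddings.
# Assumes gensim is installed; the word lists below are illustrative,
# not the workshop's own material.
import numpy as np
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pre-trained embedding

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# A simple "gender direction": the difference of two anchor words.
gender_direction = vectors["she"] - vectors["he"]

# Bias score per word: cosine similarity with the gender direction.
for word in ["doctor", "nurse", "engineer", "teacher"]:
    score = cosine(vectors[word], gender_direction)
    print(f"{word:>10}: {score:+.3f}")
```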


By the end of the workshop, you will:

  • Understand why it is important to detect and mitigate algorithmic bias in language models

  • Understand how algorithmic bias can materialize in language models

  • Be able to measure and mitigate bias in pre-trained word embeddings


Structure:

Motivation:

  • Presentation, with examples, on why we need fairness and bias-mitigation tools

  • Breakout discussions on real-life cases and causes of bias

Bias Measurement:

  • Calculating the bias of pre-trained word embeddings in Python, within a Deepnote notebook

  • Visualizing and comparing bias across word embeddings in Python

Bias Mitigation:

  • De-biasing word embeddings in Python (see the sketch after this outline)

Q&A
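
For the mitigation step, the sketch below shows one common operation, often called "neutralize" in hard de-biasing: removing a word vector's component along a bias direction. This is an illustrative assumption of the approach, not the exact notebook code used in the workshop; the toy vectors stand in for real pre-trained embeddings.

```python
# Minimal sketch: "neutralize" a word vector by removing its component
# along a bias direction (one step of hard de-biasing).
# The bias direction and example vectors are illustrative assumptions.
import numpy as np

def neutralize(vector, bias_direction):
    """Remove the projection of `vector` onto `bias_direction`."""
    unit = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, unit) * unit

# Toy example (a real notebook would use pre-trained embeddings).
bias_direction = np.array([1.0, 0.0, 0.0])
doctor = np.array([0.4, 0.2, 0.7])

debiased = neutralize(doctor, bias_direction)
print(np.dot(debiased, bias_direction))  # ~0.0: no component left along the bias axis
```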


Requirements:

  • We will use Deepnote to work together on Python notebooks during the workshop. You can get access by signing up for free with a Google or GitHub account, either in advance or on the day of the workshop.

  • Basic knowledge of Python is required, and experience working with word embeddings is an advantage.