Workshop
Algorithmic fairness in language models
Carl Brenssell
Co-founder, Spoke.ai
Carl is Co-Founder and Head of Data at Spoke.ai, a B2B software company that intelligently labels and summarises user-submitted work updates. With international academic experience and first-hand exposure to the alignment challenges of distributed organisations such as WeWork, he is passionate about applying AI to ship meaningful data products that deliver real value to individuals and their teams.

Session description
Measuring bias is an important step towards understanding and addressing unfairness in NLP and ML models. This can be done with fairness metrics, which quantify differences in a model's behaviour across demographic groups. In this workshop, we will introduce you to these metrics along with general practices for promoting algorithmic fairness.
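
To give a flavour of what such a metric looks like, the following is a minimal sketch (not taken from the workshop materials) of one common fairness metric, the demographic parity difference: the gap between the highest and lowest positive-prediction rates across groups. The function and variable names are illustrative assumptions.

# Minimal sketch: demographic parity difference as one example of a fairness metric.
# All names are illustrative; the workshop may cover different metrics and tooling.
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Return the gap between the highest and lowest positive-prediction
    rates observed across demographic groups.

    predictions: iterable of 0/1 model outputs
    groups:      iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: a value of 0.0 would mean every group receives positive
# predictions at the same rate.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5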