Machine learning has taken over our world, in more ways than we realize. You might get book recommendations, an efficient route to your destination, or even a winning strategy for a game of Go. But you might also be admitted to college, granted a loan, or hired for a job based on algorithmically enhanced decision-making. We believe machines are neutral arbiters: cold, calculating entities that always make the right decision, that can see patterns our human minds can’t or won’t. But are they? Or is decision-making-by-algorithm a way to amplify, extend, and make inscrutable the biases and discrimination that are prevalent in society?
To answer these questions, we need to go back — all the way to the original ideas of justice and fairness in society. We also need to go forward — towards a mathematical framework for talking about justice and fairness in machine learning. I will talk about the growing landscape of research in algorithmic fairness: how we can reason systematically about biases in algorithms, and how we can make our algorithms fair(er).
This is a short (and intense) course. We’ll cover material in two lecture chunks each day. But this is also a discussion, on a topic that’s still very new and that has fluid boundaries and evolving formalisms. I’ve provided readings that are technical and non-technical in nature, and I expect (and hope!) that the presentations will provoke discussion, arguments, and new ideas.
So please read the provided materials ahead of the lecture and come prepared with your questions, comments and critiques. You’ll benefit the most from the material if you have time to engage with it.
Dec 11: Preliminaries: Basics of machine learning – supervised and unsupervised learning, empirical risk minimization, classifiers, regression, training and generalization.
- Chapters 1 (PDF), 3.1-3.4 (PDF) and 4 (PDF) of Hal Daumé’s excellent book on ML.
- Hal’s post on the ML development pipeline. This is framed in terms of the way errors can creep into the modeling process, but it doubles as an excellent explanation of the pipeline itself.
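To make the day’s vocabulary concrete, here is a minimal sketch of empirical risk minimization, assuming a made-up one-dimensional dataset and a hypothesis class of threshold classifiers. It is an illustration of the idea, not a recipe for real ML; whether the chosen threshold works on *new* data is the generalization question the readings take up.

```python
# Empirical risk minimization (ERM) on a toy 1-D dataset: pick the
# threshold classifier with the lowest average 0-1 loss on the sample.
# All data below is fabricated for illustration.

def zero_one_risk(threshold, xs, ys):
    """Empirical risk: fraction of points misclassified by the rule
    'predict 1 iff x >= threshold'."""
    preds = [1 if x >= threshold else 0 for x in xs]
    return sum(p != y for p, y in zip(preds, ys)) / len(ys)

def erm_threshold(xs, ys):
    """Search candidate thresholds (the data points themselves, plus one
    past the maximum) and return the one with lowest empirical risk."""
    candidates = sorted(xs) + [max(xs) + 1]
    return min(candidates, key=lambda t: zero_one_risk(t, xs, ys))

# Small x tends to label 0, large x to label 1, with one noisy point.
xs = [0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 2.0, 2.3]
ys = [0,   0,   0,   1,   0,   1,   1,   1  ]

t = erm_threshold(xs, ys)
print(t, zero_one_risk(t, xs, ys))  # 0.9 0.125 (one noisy point misclassified)
```

The “learning” here is nothing more than a search over a hypothesis class for the lowest training error; everything else in the course sits on top of this picture.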
Dec 12: Automated Decision Making: Case studies of the use of machine learning in applications. An introduction to different formal notions of fairness.
- Criminal Justice:
- Risk Assessment: (Primer, discussion from Data and Civil Rights Workshop)
- Predictive Policing: (Primer and discussion)
- Julia Angwin, Jeff Larson, Surya Mattu, Lauren Kirchner, “Machine Bias”
- Video showing predictive policing in action
- On how AI can be used in hiring (from one company that sells such solutions). Another perspective.
- Predicting Voice-elicited emotions (from KDD 2015)
- Credit Scoring and Loans
- China’s new citizen scoring system
Readings (notions of fairness):
- Discrimination-aware data mining. (discriminatory classifiers)
- Data preprocessing techniques for classification without discrimination. (statistical parity)
- Fairness through awareness. (individual fairness)
- Certifying and removing disparate impact (disparate impact)
- Equality of opportunity in supervised learning and Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment (equalizing odds)
- Fairness in Classic and Contextual Bandits (fairness in sequential learning)
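Two of the notions above can be stated in a few lines of code. Below is a hedged sketch, on a fabricated dataset, of statistical parity (the gap in positive-prediction rates between groups) and the disparate impact ratio from the “Certifying and removing disparate impact” reading; the group labels “A” and “B” and the four-fifths rule of thumb are illustrative.

```python
# Statistical parity and disparate impact on toy classifier outputs.
# Groups "A"/"B" stand in for a protected attribute; data is made up.

def positive_rate(preds, groups, g):
    """Fraction of group g that received a positive prediction."""
    members = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(members) / len(members)

def statistical_parity_diff(preds, groups):
    """Gap in positive rates between the two groups."""
    return positive_rate(preds, groups, "A") - positive_rate(preds, groups, "B")

def disparate_impact_ratio(preds, groups):
    """Ratio of positive rates; values below 0.8 are often read as
    evidence of disparate impact (the 'four-fifths rule')."""
    return positive_rate(preds, groups, "B") / positive_rate(preds, groups, "A")

preds  = [1, 1, 1, 0, 1, 0, 0, 0]            # classifier outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(statistical_parity_diff(preds, groups))  # 0.75 - 0.25 = 0.5
print(disparate_impact_ratio(preds, groups))   # 0.25 / 0.75 ≈ 0.33
```

Note that these are *observational* criteria: they look only at the classifier’s outputs, not at how they were produced — one reason the readings disagree about which notion is the right one.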
Dec 13: Fairness Mechanisms: Understanding the different techniques for ensuring fairness in classification.
- Detecting discriminatory rules in a rule-based system
- Detecting discriminatory black box decision-making (and repairing it)
- Data preprocessing techniques for classification without discrimination.
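As a taste of the preprocessing approach, here is a minimal sketch of the reweighing idea from the Kamiran–Calders reading above: give each (group, label) cell the weight that would make group membership and outcome look statistically independent in the reweighted data. The tiny dataset is fabricated.

```python
# Reweighing: w(g, y) = P(g) * P(y) / P(g, y). Cells that are
# over-represented relative to independence get weight < 1, and
# under-represented cells get weight > 1.
from collections import Counter

def reweigh(groups, labels):
    """Return the weight for each (group, label) cell in the data."""
    n = len(labels)
    pg = Counter(groups)            # marginal counts of groups
    py = Counter(labels)            # marginal counts of labels
    pgy = Counter(zip(groups, labels))  # joint counts
    return {(g, y): (pg[g] / n) * (py[y] / n) / (pgy[(g, y)] / n)
            for (g, y) in pgy}

groups = ["A", "A", "A", "B", "B", "B"]
labels = [1,   1,   0,   1,   0,   0]

w = reweigh(groups, labels)
# Group A gets positive labels more often than independence predicts,
# so (A, 1) is down-weighted and (A, 0) is up-weighted (and vice versa
# for group B). A downstream learner trains on these weighted examples.
```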
Dec 14: Fairness Mechanisms (continued)
- Discrimination Aware Decision Tree Learning
- Three Naive Bayes Approaches for Discrimination-Free Classification
- Fairness-aware Learning through Regularization Approach
- Learning Fair Representations
- A Confidence-Based Approach for Balancing Fairness and Accuracy
- Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
- Fairness in Classic and Contextual Bandits
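The “disparate mistreatment” papers in this list shift attention from positive rates to *error* rates. Here is a small sketch, on fabricated predictions and labels, of measuring equalized-odds gaps — differences in false positive and false negative rates between two hypothetical groups “A” and “B”.

```python
# Equalized odds compares error rates across groups: a classifier
# satisfies it when false positive and false negative rates match.
# All numbers below are made up.

def rate(preds, labels, groups, g, pred_val, true_val):
    """Estimate P(prediction = pred_val | label = true_val, group = g)."""
    cell = [(p, y) for p, y, grp in zip(preds, labels, groups)
            if grp == g and y == true_val]
    return sum(p == pred_val for p, _ in cell) / len(cell)

def equalized_odds_gaps(preds, labels, groups):
    """Absolute gaps in FPR and FNR between groups A and B."""
    fpr = lambda g: rate(preds, labels, groups, g, 1, 0)  # false positives
    fnr = lambda g: rate(preds, labels, groups, g, 0, 1)  # false negatives
    return abs(fpr("A") - fpr("B")), abs(fnr("A") - fnr("B"))

preds  = [1, 0, 1, 0, 1, 1, 1, 0]
labels = [1, 0, 0, 1, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

fpr_gap, fnr_gap = equalized_odds_gaps(preds, labels, groups)
# Here the groups share a false negative rate, but group B suffers a
# higher false positive rate — equal accuracy can hide unequal errors.
```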
Dec 15: Accountability via Influence Estimation: Probing black-box decision-makers: estimating influence of features.
- Breiman’s idea for testing classifiers (Section 10)
- A peek into the black box
- Algorithmic transparency via quantitative input influence
- Auditing Black-box Models for Indirect Influence.
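The common thread in these readings is a permutation idea going back to Breiman: scramble one feature across the dataset and watch how the black box reacts. A minimal sketch, with a made-up two-feature “model” standing in for the black box:

```python
# Permutation-based influence: shuffle column j across the dataset and
# count how often the model's prediction flips. The model and data here
# are hypothetical stand-ins for a real black-box classifier.
import random

def model(row):
    """Toy black box: depends only on feature 0, ignores feature 1."""
    return 1 if row[0] > 0.5 else 0

def influence(model, data, j, seed=0):
    """Fraction of rows whose prediction changes when column j is
    replaced by a random permutation of itself."""
    rng = random.Random(seed)
    col = [row[j] for row in data]
    rng.shuffle(col)
    flips = 0
    for row, v in zip(data, col):
        perturbed = list(row)
        perturbed[j] = v
        flips += model(perturbed) != model(row)
    return flips / len(data)

data = [[0.1, 5], [0.9, 2], [0.3, 9], [0.7, 1]]
print(influence(model, data, 1))  # feature 1 is ignored, so 0.0
```

The auditing papers above refine this basic move — in particular, handling *indirect* influence, where a feature matters through proxies even if the model never reads it directly.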
Dec 17: Interpretability: Building interpretable models.
- Comprehensible Classification Models: A Review
- Statistical Learning with Sparsity
- Interpretable Models for Recidivism Prediction (based on the SLIM model)
- Rule Extraction from Linear Support Vector Machines
- Comprehensible Credit Scoring Models Using Rule Extraction From Support Vector Machines
- Modeling the Model
- EU Regulations on the Right to Explanation.
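One reason sparsity appears in this list: an L1 penalty zeroes out small coefficients, and a linear model that uses a handful of features is far easier to explain than one that uses hundreds. A toy sketch of the soft-thresholding operator (the proximal step behind the lasso), applied to fabricated coefficients:

```python
# Soft-thresholding: the coordinate-wise shrinkage operator induced by
# an L1 penalty. Coefficients within lam of zero are set exactly to 0,
# which is what produces sparse (and hence more readable) models.

def soft_threshold(w, lam):
    """Shrink w toward zero by lam; kill it if it is within lam of zero."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

weights = [2.3, -0.1, 0.05, -1.7, 0.2]   # dense, hard-to-read model
sparse = [soft_threshold(w, 0.25) for w in weights]
# The three near-zero coefficients vanish; the model now "explains
# itself" in terms of just two features.
```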
Dec 18: Fairness, Accountability and Transparency in other areas of computer science: Beyond classification: unsupervised learning, representations, rankings and verification.
- Gender bias in word embeddings: two views.
- Measuring fairness in ranked outputs.
- Fairness as a program property.
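For the word-embedding readings, the basic diagnostic is geometric: project word vectors onto a “gender direction” such as he − she and see which way occupation words lean. A toy sketch with fabricated three-dimensional vectors (real embeddings have hundreds of dimensions, and the numbers below are invented purely to illustrate the computation):

```python
# Measuring embedding bias as cosine similarity with a difference
# direction. The "embeddings" here are fabricated 3-d vectors.

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda x: sum(a * a for a in x) ** 0.5
    return dot / (norm(u) * norm(v))

emb = {
    "he":       [1.0, 0.2, 0.0],
    "she":      [-1.0, 0.2, 0.0],
    "engineer": [0.8, 0.5, 0.1],    # fabricated: leans toward "he"
    "nurse":    [-0.7, 0.6, 0.2],   # fabricated: leans toward "she"
}

gender = sub(emb["he"], emb["she"])  # the "gender direction"
bias = {w: cosine(emb[w], gender) for w in ("engineer", "nurse")}
# A positive score leans toward "he", a negative one toward "she";
# the two readings disagree on how to interpret such projections.
```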
Dec 19: Belief Systems: Axiomatic approaches to thinking about fairness.
Email me at email@example.com
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.