Comment |
Accuracy is not enough when you are developing machine learning systems for consequential application domains. You also need to make sure that your models are fair, have not been tampered with, will not fall apart under changing conditions, and can be understood by people. Your design and development process has to be transparent and inclusive. You want the systems you create not to harm people but to help them flourish in ways they consent to. All of these considerations beyond accuracy, which make machine learning safe, responsible, and worthy of our trust, have been described by many experts as the biggest challenge of the next five years. This course will equip you with the thought process to meet this challenge.
The course focuses on three key issues in machine learning, addressed from an ethical, legal, and technological perspective:
1. Personal data processing: privacy, confidentiality, surveillance, recourse, data collection, and power differentials
2. Data-driven decision support: biases and transparency in data processing, data-rich communication, and data visualization
3. Automated decision making: conceptualizations of power and discrimination in scenarios with different degrees of automation
We will spend about half of the course studying computing technologies for, e.g., anonymizing data or detecting and mitigating algorithmic bias. In the other half of the course, we will study different conceptualizations of power around data processing pipelines, analyze bias and discrimination in computer systems from a moral philosophy perspective, and review the relevant legal frameworks for data processing. |
Remark |
The lecture sessions are planned to be held mostly online. They may be recorded, and these recordings may be made available internally at the lecturer's employer. The lecturer will announce at the beginning of the course whether recording will take place. |