SP 2: Protection against discrimination as a limit to machine decision-making

Subproject 2 (SP 2) examines the extent to which anti-discrimination law – which is tailored to human behaviour – restricts the decisions of autonomous systems.
The use of artificial intelligence (AI) carries a risk of discrimination: algorithmic systems can generate biases through faulty data or programming and perpetuate existing social inequalities.
Existing protection against discrimination is thereby put to the test. To address this challenge, the subproject examines labour law, private law and social law as exemplary fields. These areas are characterised by structural power imbalances and therefore warrant particular scrutiny for gaps in protection against discrimination. The project seeks to identify commonalities and underlying concepts across the three fields and, on that basis, to develop options for legal design that contribute to harmonising and simplifying the law. In doing so, it draws on inter- and intra-disciplinary exchange. The aim is to establish a uniform level of protection across all three areas.
The project is divided into three areas of work, each guided by a central thesis:
- A solid legal framework with high standards is required to effectively counter discrimination caused by autonomous systems and thereby protect the autonomy of those affected.
- Differences in the level of protection arise from the varying degrees of regulation in specific contexts; legal gaps in protection and a lack of preventive measures become apparent.
- Based on these findings, a coherent system of protection against discrimination can be developed as a limit to machine decision-making.