SP 3: The opportunities and limitations of automated decision-making in criminal proceedings

The interdisciplinary subproject 3 (SP 3) focuses on the autonomy of criminal court decisions in cases where artificial intelligence (AI) is used. To this end, the project examines the prerequisites and design of AI applications, using examples such as the analysis of evidence (particularly in the context of ‘deep fakes’) and crime forecasting in criminal proceedings. In these fields, the use of AI is already a reality or at least the subject of concrete discussion (e.g. image recognition in the prosecution of pornography offences), and initial experience from abroad suggests that future use is also likely in Germany (crime prediction). In addition to the (constitutional) legal requirements for preserving the function of criminal procedure law in safeguarding individual rights and freedoms – particularly the jurisdiction of independent judges to decide on the merits of a case – the subproject examines the psychological processes underlying trust in, and acceptance of, AI recommendations. While a basic willingness among judges to use AI is desirable (to avoid ‘under-trust’), excessive trust (‘over-trust’), for instance as a result of work overload, carries risks for the acceptance and correctness of decisions in criminal proceedings.
The subproject is divided into five work packages:
- Shedding light on the black box: A characteristic feature of AI is that machine learning autonomously alters decision-making parameters in ways the user cannot follow; AI recommendations can therefore no longer be readily explained by reference to the original programming (the ‘black box’). As a first step, current technical developments and options for ensuring the traceability of AI recommendations in criminal proceedings will be reviewed and integrated.
- Legal requirements for judicial crime risk assessments: In other countries, AI is already being used to generate the crime risk assessments required for probation decisions, based on statistical data and the facts of the individual case.
- Legal requirements for AI-assisted evidence assessment: At first glance, AI – in line with the vision of improving quality by eliminating human error – could have the potential to optimise the fact-finding process in criminal proceedings. The central question is under what conditions AI systems can supplement, or possibly even partially replace, expert evidence in criminal proceedings. In addition to the legal requirements of the German Constitution and the German Code of Criminal Procedure, particular attention will need to be paid to the provisions of the European Convention on Human Rights and the Charter of Fundamental Rights of the EU.
- Trust and acceptance of AI-based recommendations in criminal proceedings: In a series of qualitative and quantitative experimental studies, the willingness to use or accept AI recommendations in criminal proceedings will be examined from various perspectives (judges, defendants and their lawyers, and uninvolved observers from society). A key objective is to derive evidence-based recommendations for the design of AI (algorithms, interfaces, recommendations, etc.).
- Requirements for AI support in judicial decision-making: On the basis of the results achieved in the other areas of the subproject (empirical findings, legal requirements, and current or foreseeable technical possibilities), a theoretical model for the use of AI in criminal proceedings will be developed, which is to be validated by the research group in the course of its work.
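To make the contrast between a traceable statistical instrument and an opaque ‘black box’ concrete, the following minimal sketch shows a logistic risk score whose per-feature contributions can be read off directly – precisely the kind of transparency that machine-learned models with autonomously altered parameters may lack. All feature names and weights are invented for illustration only; a real risk-assessment instrument would be estimated from validated statistical data and is not implied by this project description.

```python
import math

# Purely illustrative weights for a hypothetical recidivism risk model.
# These numbers are invented for demonstration, not taken from any
# actual instrument used in criminal proceedings.
WEIGHTS = {
    "prior_convictions": 0.4,   # more priors -> higher score
    "age_at_offence": -0.05,    # older at offence -> lower score
    "employment": -0.6,         # employed -> lower score
}
BIAS = -1.0

def risk_score(features):
    """Logistic regression: map a weighted sum to a pseudo-probability in (0, 1)."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def explain(features):
    """Per-feature contributions to the score – the traceability a black box lacks."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

defendant = {"prior_convictions": 3, "age_at_offence": 25, "employment": 1}
score = risk_score(defendant)
contributions = explain(defendant)
```

With an interpretable model, a judge can inspect `contributions` and see exactly which factor drove the recommendation; with a black-box model, only `score` is available, which is the traceability gap the first work package addresses.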