WP 1: 

This will be analysed on the basis of three exemplary areas:

  1. Technological advances in AI have revived the long-held dream that it might be possible to verify the truthfulness of statements by technical means. In this context, the legal requirements set out in Articles 8 to 15 of the AI Act will need to be taken into account. The resulting questions of interpretation will need to be clarified: what constitutes ‘authorisation’ under national law (Annex III, no. 6 of the AI Act), and what are the requirements and design options for ‘effective’ human oversight (Article 14 (1) of the AI Act)? The same applies to the classification of such an AI system against the backdrop of the nemo tenetur principle: Is the required participation of the person being questioned comparable to the situation with human experts, and might additional provisions of criminal procedural law granting investigative powers therefore be necessary?
  2. The integration of AI into the analysis of photographs, video recordings, or audio recordings is conceivable. When such recordings are to be introduced into criminal proceedings as evidence, it is often necessary to clarify whether they actually show the accused person. This raises legal questions, including with regard to the collection of training and comparison data: either existing material will have to be used, which requires clarification under data protection law, or the cooperation of the individuals concerned will again be required. Furthermore, the question arises whether such an application falls within the provisions on high-risk AI under the AI Act.
  3. The authenticity of image or audio recordings must be examined. Using deepfake technology, it is possible to generate astonishingly realistic depictions that purportedly show the actions or statements of real people, even though these never actually took place. In criminal proceedings, this creates a need to use AI to identify AI, in order to verify whether a recording is authentic. Such a system would quite clearly qualify as a system used to ‘evaluate the reliability of evidence’ (Annex III, no. 6 (c) of the AI Act), meaning that the requirements for high-risk AI set out in the AI Act must be examined. In particular, it must be clarified which criteria should apply to the reliability and verifiability of such AI systems.