© Brazil Centre

On November 17th, 2025, an interdisciplinary workshop entitled "AI-supported research approaches: Interdisciplinary perspectives" took place at the University of Münster. The event was conceived and organised by Prof. Dr. Marcelo Parreira do Amaral and Dr. Luís Filipe de Araújo Pessoa, with the Brazil Centre acting as co-organiser. The workshop aimed to bring together leading scientists to examine Artificial Intelligence both as a methodological tool and as an object of critical reflection. By connecting different perspectives and research fields, it sought to identify commonalities as well as tensions between them.

With professors and researchers from educational sciences, linguistics, social sciences, computer science, and the Centre for Information Technology taking part, the workshop laid the foundation for a collective manifesto on AI-supported research approaches. In addition to the organisers, Prof. Dr. Amaral and Dr. Pessoa, the participants included Prof. Dr. Ana Larissa Oliveira, professor at the Federal University of Minas Gerais (UFMG) and holder of the CAPES Brazil Chair at the University of Münster; Prof. Dr. Fernando Buarque, professor at the University of Pernambuco (UPE) and Research Alumni Ambassador of the University of Münster; and Prof. Dr. Wilson Fusco, professor at the Joaquim Nabuco Foundation.


The workshop focused on topics such as Machine Learning, network analysis, simulations and decision support systems, as well as the analysis of large data sets and the recognition of complex patterns. The role of researchers in theoretical interpretation and analysis was also discussed. Participants debated the idea that AI not only influences research methods but also fundamentally transforms the rationalities, normativities and practices of research. In this context, they called for a reflexive engagement with the assumption of technical neutrality, with the political and economic conditions of AI's application, and with the performative effects of research itself.

Furthermore, the need for explainable and responsible AI was emphasised in order to make biases, uncertainties and social impacts transparent, particularly in politically relevant analyses. Participants recognised the need to develop an interdisciplinary research agenda for AI that is analytically rigorous, reflexive, explainable and socially responsible. This agenda should aim to improve social diagnoses, strengthen evidence-based policy-making and defend democratic values.

Participants committed themselves to cooperative projects, joint supervision of students, ethical data sharing and the integration of critical reflection into researcher training. As a result, a jointly authored manifesto emerged, summarising the perspectives and commonalities of the various fields involved, as well as their potential for cooperation and research.

You can download the manifesto here.