Publication: Using Assurance Cases to assure the fulfillment of non-functional requirements of AI-based systems - Lessons learned

Published in: 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Dublin, Ireland, 2023, pp. 172-179, doi: 10.1109/ICSTW58534.2023.00040.

https://ieeexplore.ieee.org/document/10132168/authors#authors

Publication on lessons learned from the use of the Assurance Case method

Authors: Marc Hauer, Lena Müller-Kress, Gertraud Leimüller, Katharina Zweig


Requirements for developing fair AI systems

Published August 2022

AI systems underpin many digital technologies in use today. While their potential for improving human decision-making and increasing efficiency has led to widespread adoption across a broad range of economic sectors and application areas, AI systems also carry significant risks of adverse effects on individuals and society. But what can already be done during development to prevent or mitigate the risk of unfair treatment by AI systems? The report presents a synopsis of findings from interviews with 24 experts from various AI research and application fields, as well as social organisations, covering current requirements for the development of fair AI systems and initial approaches to making AI systems fair as early as the development phase.

Authors: Sarah Cepeda, Lene Kunze, Gertraud Leimüller, Lena Müller-Kress, Martin Stöger, Rania Wazir

Method handbook: Assurance Cases for fair AI systems

Final version published March 2024

As AI is applied in more and more use cases, concerns arise about the consequences of its use and the potential risks of unfairness and discrimination. It is now more important than ever to consider fairness in AI systems and to implement fairness specifications strategically. Assurance Cases, a method widely used in safety engineering, make it possible to break down high-level fairness definitions into specific technical specifications, making fairness a central aspect of AI system development and lifecycles.
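To make the decomposition idea concrete, the following is a minimal, illustrative sketch (not taken from the handbook) of how a top-level fairness claim might be modelled as a tree of sub-claims backed by evidence; all class, field, and example names are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of an assurance-case tree (names are hypothetical, not the
# handbook's notation): a top-level fairness claim is decomposed into sub-claims,
# each of which is ultimately supported by concrete evidence.

@dataclass
class Claim:
    statement: str
    sub_claims: List["Claim"] = field(default_factory=list)
    evidence: List[str] = field(default_factory=list)

    def is_supported(self) -> bool:
        """A claim holds if it has direct evidence or all of its sub-claims hold."""
        if self.evidence:
            return True
        return bool(self.sub_claims) and all(c.is_supported() for c in self.sub_claims)


# Example: breaking a high-level fairness goal into technical specifications.
top = Claim(
    "The credit-scoring model treats applicants fairly",
    sub_claims=[
        Claim(
            "Error rates do not differ substantially across protected groups",
            evidence=["Equalised-odds test report on held-out data"],
        ),
        Claim(
            "Training data is representative of the applicant population",
            evidence=["Data-collection audit", "Distribution comparison report"],
        ),
    ],
)

print(top.is_supported())  # True once every leaf claim is backed by evidence
```

In a real assurance case each leaf would point to verifiable artefacts (test reports, audits, reviews) rather than free-text strings; the sketch only shows how a broad fairness goal cascades into checkable technical claims.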

The handbook "Assurance Cases for fair AI" was developed by the research consortium fAIrbydesign (authors: Lene Kunze, Gertraud Leimüller, Lena Müller-Kress) and Marc P. Hauer, TU Kaiserslautern.


Outlook on the future regulatory requirements for AI in Europe

Published April 2022

The large-scale use of AI systems in everyday situations carries the risk that certain individuals or groups will suffer harm or be disadvantaged by an algorithmic decision. If such risks are to be addressed and ultimately avoided as early as the product development phase, it is essential to clarify how they are classified in legal terms. The report therefore first assesses the concepts of fairness, bias and discrimination and illustrates the differences between these terms. Next, the existing legal framework is examined for regulations that are already relevant to AI. Building on this analysis, special consideration is given to the European Commission's Proposal on Artificial Intelligence (AI Act Proposal), which is set to play a fundamental role in the future regulation of AI. The second part of the report comprises a summary of expert interviews with representatives from law, ethics and AI research, as well as standardisation organisations.

Authors: Christiane Wendehorst, Jakob Hirtenlehner



AI bias incidents mainly affect gender and ethnicity

Published March 2024

Algorithmic fairness has become an important topic in both public and scientific discourse on artificial intelligence. Reports of AI bias incidents provide important information regarding the risks posed by AI systems. The collection and analysis of 150 reported AI incidents within the "fAIr by design" research project provides initial indications of which groups of people are affected by algorithmic bias and in which areas of application it occurs.
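For a rough idea of the kind of tally such an analysis involves, the sketch below groups incident records by affected attribute and by application area; the records, field names, and counts are invented for illustration and are not taken from the project's incident database.

```python
from collections import Counter

# Hypothetical incident records; the real fAIr by design collection comprises
# 150 entries with different fields -- these are placeholders for illustration.
incidents = [
    {"affected_attribute": "gender", "application_area": "recruiting"},
    {"affected_attribute": "ethnicity", "application_area": "facial recognition"},
    {"affected_attribute": "ethnicity", "application_area": "credit scoring"},
    {"affected_attribute": "age", "application_area": "recruiting"},
]

# Count how often each protected attribute and each application area appears.
by_attribute = Counter(rec["affected_attribute"] for rec in incidents)
by_area = Counter(rec["application_area"] for rec in incidents)

print(by_attribute.most_common())  # e.g. [('ethnicity', 2), ('gender', 1), ('age', 1)]
print(by_area.most_common())
```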

The AI incident database and this summary were developed by the research consortium fAIrbydesign.

Author: Nikolas Magele, BSc