About the project

Our goals

  • To develop a new interdisciplinary process model and toolkit for designing fair AI systems

  • To develop effective solution strategies for fair AI systems in five use cases within the fields of healthcare, media, and HR

Who we are

The research project is conducted by an interdisciplinary consortium of six businesses, one NGO and two universities. We deliberately cross the boundaries of data science, the social sciences, open innovation and legal studies, and we cover a range of sectors, including AI-developing and AI-applying organisations in HR, healthcare and auditing. What unites us is the shared goal and vision of developing fair AI: exploiting the full potential of this technology without its negative consequences.

Multidisciplinarity: helping AI developers integrate fairness objectives into the design of new AI systems, drawing on ideas and methods from a variety of disciplines

User centricity: defining stakeholder needs and requirements in different industries and application contexts

Open innovation methods: soliciting and integrating a wide range of expertise and perspectives

Practical applicability: translating ethical principles into a practically applicable process model and toolkit

Iterative design: iteratively testing and refining methods and processes with use case partners in various application contexts

Approach

Milestones of the development

Artificial Intelligence (AI) has a lasting effect on people's lives, from communication to health and from education to working life. In the near future, government regulation in the EU and voluntary certification schemes will demand trustworthiness and require a minimum level of AI quality.

Among the AI quality dimensions to be achieved, avoiding unintentional bias and discrimination against individuals and social groups by AI systems, for example on the basis of age, social or ethnic background, is one of the biggest obstacles to broad adoption of this technology. But this nut is especially hard to crack, as current AI development practice does not include the procedures and methods needed to make AI fair right from the start. One of the reasons: fairness cannot be determined by data scientists alone in an ad-hoc fashion, but needs strategic planning, with strong input from the social sciences.

This is exactly where the research of the Austrian fAIr by design consortium comes in: a unique mix of competences allows the nine partners to break new ground. By combining open innovation methods, data science and other AI expertise with legal and ethics know-how, the consortium is able to devise novel solutions that achieve fairness and reduce social discrimination in AI systems. The focus is on a spectrum of AI technologies with a direct impact on people in society and work, e.g. in education, human resources (such as performance assessment and recruiting), media and health.

Our research aims at producing three main outcomes:

  • Novel process model for fair AI development: fAIr by design aims to develop a novel procedural model for fair AI development, introducing measures to reduce the risk of unintentional bias and discrimination along the entire life cycle of an AI product. This will benefit AI developers and AI users (e.g., companies and public institutions), who currently lack application-oriented tools to support the prevention, recognition and minimisation of undesired discrimination against certain user groups.

  • Fair AI toolbox: An interdisciplinary method toolbox for the development of fair AI will be assembled, combining social science methods and technology and taking into account the needs of diverse user groups. This provides AI developers and testers, who currently rely primarily on purely technical fairness tools, with a broader mix of effective, interdisciplinary solutions.

  • Specific fairness strategies for five AI use cases: The consortium will develop specific strategies for reducing various discrimination risks in five real-world use cases. This helps AI-applying companies adopt well-functioning, broadly accepted solutions.
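
To make the contrast with "purely technical fairness tools" concrete: such tools typically reduce fairness to a single group metric computed over model outputs. The sketch below (purely illustrative, not part of the project's deliverables; the function name and data are invented for this example) computes one common metric, the demographic parity difference, i.e. the gap in positive-prediction rates between demographic groups:

```python
# Illustrative sketch of a "purely technical" fairness check: the
# demographic parity difference, i.e. the largest gap between groups
# in the rate of positive model predictions. A non-zero gap signals
# that one group receives favourable outcomes more often than another.

def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (e.g. age bands), same length
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Example: a screening model that favours group "A" over group "B"
preds  = [1, 1, 1, 0, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.75 - 0.25 = 0.5
```

Such a metric can flag a disparity, but it cannot say which groups matter in a given context, what gap is acceptable, or how to redesign the system, which is precisely where the interdisciplinary methods in the toolbox are meant to come in.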
