When it comes to classifying large amounts of image data or analyzing a multitude of time series, there really is nothing better than artificial intelligence. But it does present certain challenges: since many applications are based on black box models, humans can have a hard time understanding how an AI system reaches decisions. How is it possible to ensure that the decisions made by AI are indeed trustworthy? The answer lies in the following five aspects: transparency, responsibility, privacy, fairness and reliability.
In addition to fairness and reliability, our research focuses on transparency: the Project Group for Comprehensible Artificial Intelligence (CAI) is developing methods for transferring the decisions made by such black box systems to a white box, making them understandable across a broad range of applications. For instance, where is an AI system's decision threshold for classifying a workpiece as a reject? On the fairness side, CAI researchers are developing, among other things, methods for establishing a fair data basis that includes people of different genders, ages and ethnicities. The team achieves reliability and robustness primarily through what are known as hybrid or neuro-symbolic approaches: integrating knowledge into the learning process helps prevent certain decision errors.
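To make the workpiece example concrete, one simple way to probe where a black box draws its decision boundary is to treat the model as an opaque function and bisect over a single input feature until the predicted class flips. This is only an illustrative sketch, not the project group's actual method: the classifier, the feature name and the hidden threshold below are all invented for the example, and it assumes the decision is monotone in the probed feature.

```python
# Stand-in for an opaque AI model: in practice this would be a trained
# classifier whose internal rule we cannot inspect. The 3.2 µm threshold
# here is a made-up "hidden" rule used only to demonstrate the probe.
def black_box_is_reject(surface_roughness_um: float) -> bool:
    return surface_roughness_um > 3.2

def find_decision_threshold(classify, lo: float, hi: float,
                            tol: float = 1e-6) -> float:
    """Bisect the feature range to locate where the decision flips.

    Assumes the classification is monotone in the probed feature:
    inputs above some unknown threshold are classified as rejects.
    """
    assert not classify(lo) and classify(hi), \
        "decision must flip somewhere inside [lo, hi]"
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if classify(mid):
            hi = mid  # reject: the flip point lies at or below mid
        else:
            lo = mid  # pass: the flip point lies above mid
    return (lo + hi) / 2

threshold = find_decision_threshold(black_box_is_reject, 0.0, 10.0)
print(f"estimated reject threshold: {threshold:.3f} µm")
```

A real explainability workflow would of course go further, for example by fitting an interpretable surrogate model over many features rather than sweeping one, but the principle of querying the black box and reading off where its decision changes is the same.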
On May 23, 2022, Minister of State Melanie Huml and representatives from politics, industry and business attended the open day at the University of Bamberg to learn about the project group’s latest findings.