Comprehensible Artificial Intelligence

Transparency and Intelligibility of AI Systems

For machine learning to be used in practice, it is vital that such applications are intelligible and explainable

Explainable AI is a key topic of current AI research – the “Third wave of AI,” following on from “Describing” (First wave: knowledge-based systems) and “Statistical learning” (Second wave). It is becoming increasingly clear that purely data-driven machine learning is unsuitable for many areas of application, or at least not without being combined with further methods.

In collaboration with the University of Bamberg, Fraunhofer IIS has set up a “Comprehensible Artificial Intelligence” project group. Its purpose is to develop explainable machine learning methods:

  • We are working on hybrid approaches to machine learning that combine black-box methods, such as (deep) neural networks, with methods applied in interpretable machine learning (white-box methods). Such methods enable a combination of logic and learning – and, in particular, a type of learning that integrates human knowledge (a minimal illustrative sketch of this idea follows this list).
  • We are developing methods of interactive and incremental learning for areas of application in which there is very limited data available and the labeling of that data is problematic.
  • We are developing algorithms to generate multimodal explanations, particularly for a combination of visual and verbal explanations. For this purpose, we draw on research from cognitive science.
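
To make the hybrid black-box/white-box idea above concrete, here is a minimal sketch in Python with scikit-learn. It is an illustration under our own assumptions (the dataset, model sizes, and hyperparameters are arbitrary), not the project group's actual method: a shallow decision tree is fitted as a global surrogate to a neural network's predictions, so that its rules approximate the network's behaviour in human-readable form.

```python
# Minimal, illustrative sketch (not the project group's method): a "global surrogate"
# is one simple way to pair a black-box model with a white-box one. A shallow decision
# tree is trained on the predictions of a neural network, so its rules approximate the
# network's behaviour in human-readable form. Dataset and hyperparameters are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)

# Black-box model: a small feed-forward neural network.
black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
black_box.fit(X_train, y_train)

# White-box surrogate: a shallow decision tree fitted to the *network's* predictions,
# so its rules describe what the black box has learned rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity on test data: {fidelity:.2%}")

# Human-readable rules approximating the black-box behaviour.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

The fidelity score reports how often the surrogate agrees with the neural network on held-out data; the printed rules can then be inspected, corrected, or enriched with human knowledge, which is the kind of interplay between logic and learning referred to above.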

Current areas of application:

  • image-based medical diagnostics
  • facial expression analysis
  • quality control in Manufacturing 4.0
  • avoiding biases in machine learning
  • crop phenotyping
  • automotive sector

Explainable AI Video Podcast

Explore our video podcast and actively participate in shaping the future of AI in industry

Publications

Here you can find our publications structured by year.

Our Fields of Research


Partners and Projects


Project HIX

The goal of the HIX funding project is to develop and implement human-AI interaction in hybrid intelligence systems for bias and noise reduction and knowledge aggregation.

Duration: October 2021 - September 2023


Project hKI-Chemie

The goal of the hKI-Chemie project is AI-supported data processing in the chemical industry. The aim is to support employees in identifying process problems at an early stage and in selecting suitable solutions.

Duration: June 2021 - June 2025


TraMeExCo

TraMeExCo (Transparent Medical Expert Companion) is a project funded by Germany’s Federal Ministry of Education and Research (BMBF). Its purpose is to investigate and develop suitable new methods to enable robust and explainable machine learning in complementary applications in the field of medical engineering.

Duration: September 2018 - August 2021


ADA Lovelace Center for Analytics, Data and Applications

New competence center for data analytics and AI in industry
The ADA Lovelace Center uniquely combines AI research with AI applications in industry. Here, partners can network with each other, benefit from each other's know-how, and work on joint projects.


Project partner: University of Bamberg

Prof. Dr. Ute Schmid heads the “Cognitive Systems” group at the University of Bamberg.