Improving the ability to explain AI model predictions
In high-stakes situations like medical diagnostics, users often want to know what caused a computer vision model to make a particular prediction so they can decide whether to trust its output. Concept bottleneck models are one way to enable artificial intelligence systems to explain their decision-making. These methods force deep learning models to first predict a set of human-understandable concepts, and then base the final prediction on those concepts alone, as illustrated in the sketch below.
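
For readers unfamiliar with the architecture, here is a minimal sketch of a concept bottleneck model, assuming PyTorch. The class name, layer sizes, and concept count are illustrative assumptions, not details from any system described in this article.

```python
# A minimal concept bottleneck model: the label head sees only the
# predicted concepts, so each prediction can be explained in terms
# of human-interpretable concept scores. All sizes are hypothetical.
import torch
import torch.nn as nn


class ConceptBottleneckModel(nn.Module):
    def __init__(self, n_features: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Backbone maps raw inputs to concept logits
        # (e.g., "has a wing bar" for a bird classifier).
        self.concept_net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
        )
        # The bottleneck: the label is predicted from concepts alone.
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_net(x)
        concepts = torch.sigmoid(concept_logits)  # per-concept probabilities
        label_logits = self.label_net(concepts)
        return concepts, label_logits


# Usage: inspect the concept scores to see what drove each prediction.
model = ConceptBottleneckModel(n_features=32, n_concepts=8, n_classes=3)
x = torch.randn(4, 32)
concepts, label_logits = model(x)
print(concepts.shape, label_logits.shape)  # torch.Size([4, 8]) torch.Size([4, 3])
```

Because the final classifier receives only the concept probabilities, a user can audit a prediction by checking which concepts the model believed were present, rather than reasoning about opaque internal features.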
