What is XAI

The sophistication of AI-powered systems has increased exponentially in the last decade. Today, you can (almost) design and deploy whole AI models without any human intervention.

I came across the experiment I shared on the home page while reading an article titled:

Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition

We are employing Machine Learning (ML) in more critical areas than ever. Hence, it is natural to demand transparency in the decision-making process inside such models.

The article above argues that it makes more sense to shift our focus toward creating and working on interpretable models rather than explaining black-box algorithms after the fact.

Details

More accurate models do not have to mean less interpretability/transparency.

While I fully agree with the statement above, the truth is that complex models based on Deep Learning (DL) or Neural Network (NN) methods have seen more success in applications than simpler models have 1
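As a rough illustration of the tradeoff being debated, here is a minimal sketch (assuming scikit-learn and its bundled breast-cancer dataset; the dataset choice and hyperparameters are arbitrary) that pits an interpretable model against a black-box one:

```python
# Sketch: compare an interpretable model (shallow decision tree, whose
# rules a human can read directly) with a black-box model (neural network).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable: a depth-3 tree is a handful of if/else rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Black box: an MLP whose individual decisions are hard to trace.
mlp = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
)
mlp.fit(X_train, y_train)

print(f"tree accuracy: {tree.score(X_test, y_test):.3f}")
print(f"mlp accuracy:  {mlp.score(X_test, y_test):.3f}")
```

On easy tabular problems like this one, the gap between the two is often small, which is exactly the point the article makes about not defaulting to black boxes.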

And there is still much work to be done before interpretable models enjoy wider practical adoption.

📌 This section is in progress


  1. citation required ↩︎