Dezible

Welcome to Dezible.

Read the interesting experiment below or check out the basics of XAI starting here.

Experiment

In 2018, hundreds of top computer scientists, financial engineers, and executives were asked to engage in a thought experiment: imagine you have cancer and need surgery to remove a tumor.

Two images were displayed on the screen.

A human surgeon, who could explain anything about the surgery but had a 15% chance of causing the patient's death during the operation.

A robotic arm that could perform the surgery with only a 2% chance of failure.

The audience was then asked to raise a hand to vote for which of the two they would prefer to perform life-saving surgery.

All but one hand voted for the robot.

It is obvious that a 2% chance of mortality is better than a 15% chance of mortality.

Yet humans are often too inhibited to adopt techniques that are not interpretable, traceable, or trustworthy.

So, why must the robot be a black box?

And would people trust a robotic hand more if the AI system could converse, or explain itself better?

✒️ currently…

Develop the first sections - the What, Why, What for, and How of XAI

April 2025: Finishing the What + an example would be nice

Subsections of Dezible

Chapter 1

eXplainable AI

For this section, I will refer to an excellent article published in 2020 titled Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI.

It reviews concepts related to the explainability of AI methods, discusses several transparent models, and builds a taxonomy based on the reviewed literature.

✒️ You can read the whole paper here

Subsections of eXplainable AI

What is XAI

IBM has a nice definition:

Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms.

But why do we need it?

The sophistication of AI-powered systems has increased exponentially in the last decade. In fact, you can now (almost) design and deploy whole AI models without any human intervention.

I came across the experiment I shared on the home page while reading an article titled:

Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From an Explainable AI Competition

We are employing Machine Learning (ML) in more critical areas than ever, so it is natural to demand transparency in the decision-making process inside such models.

The article above argues that it makes more sense to shift our focus to creating and working on interpretable models rather than explaining black-box algorithms.

More accurate models do not have to mean less interpretability/transparency.

I fully agree with the statement above, but it is also true that convoluted models based on Deep Learning (DL) or Neural Network (NN) methods have seen more success in applications dealing with higher-dimensional data (think of images).

A piece of mathematical folklore known as the ‘No Free Lunch’ (NFL) theorem says, roughly: “No one model works best for all possible situations.”

So while we work on developing inherently explainable models, we also focus on XAI for existing (complicated) models.
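To make the post-hoc side of this concrete, here is a minimal sketch of one common model-agnostic technique, permutation importance, applied to a toy "black box" we only query through `predict`. The model, data, and coefficients are all invented for illustration; the point is that we can rank feature influence without looking inside the model.

```python
import numpy as np

# Toy "black box": we pretend we can only call predict(X).
# Internally it depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2 (assumed values, for illustration only).
def predict(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = predict(X)  # use the model's own output as the target, for simplicity

def permutation_importance(model, X, y, n_repeats=10):
    """Increase in MSE when one feature column is shuffled.

    Shuffling a column breaks its link to the target; the bigger the
    resulting error, the more the model relied on that feature.
    """
    base_mse = np.mean((model(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # shuffle column j in place
            scores.append(np.mean((model(Xp) - y) ** 2))
        importances[j] = np.mean(scores) - base_mse
    return importances

imp = permutation_importance(predict, X, y)
print(imp)
```

Feature 0 should come out with by far the largest importance and feature 2 with (essentially) none, which is exactly the kind of explanation we can extract from an existing model without retraining or opening it up.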

Example of XAI

✒️ currently…

April 2025:

  • Briefly touch upon the types of XAI.

  • Add simple example of linear regression and show why it is inherently explainable.
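In the meantime, here is a minimal sketch of what that linear-regression example might look like, on made-up toy data (the feature names and coefficients are assumptions for illustration). The reason the model is inherently explainable: the fitted coefficients *are* the explanation, stating how much the prediction changes per unit change of each feature.

```python
import numpy as np

# Toy data: target = 2 * size + 3 * rooms + small noise (assumed)
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))                 # columns: size, rooms
y = 2.0 * X[:, 0] + 3.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Ordinary least squares: y ≈ X @ w + b, via an appended intercept column
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# Each coefficient directly answers "what happens to the prediction
# if this feature increases by one unit?" - no extra XAI method needed.
for name, coef in zip(["size", "rooms", "intercept"], w):
    print(f"{name}: {coef:+.2f}")
```

Running this recovers coefficients close to the true 2 and 3, and reading them off is the whole interpretation step, in contrast to a deep network where nothing analogous is available.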

Who am I?

And why do I want to talk about this topic?

My name is Aditya Bhardwaj and I am currently a PhD candidate at the University of Twente - you can find my LinkedIn profile here

I am deeply interested in following progress in the area of XAI, especially in methods for evaluating such XAI models.

And as I started going through academic research for my PhD, I realized that there is a lack of systematized knowledge on XAI. There may be plenty of resources on this topic, but they are scattered.

💡 The goal is to collect and organize the literature, with a focus on the practical adoption of XAI and the challenges it faces. I will also focus on implementing such methods and sharing them as open-source code.