Explainable AI (XAI) is an active research area that aims to interpret a neural network's decisions, ensuring transparency and trust in task-specific learned models. Recently, perturbation-based model analysis has shown improved interpretability, but …
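As a minimal sketch of the perturbation-based idea referenced above (a generic occlusion-sensitivity map, not the specific method studied in this work), the snippet below slides a gray patch over the input and records how far the target class score drops. The patch size, stride, and fill value are illustrative assumptions for a Keras model on [0, 1]-normalized images.

```python
import numpy as np

def occlusion_sensitivity(model, image, class_idx, patch=16, stride=16):
    """Generic perturbation-based saliency: occlude each region with a
    gray patch and measure the drop in the target class score."""
    h, w, _ = image.shape
    base = model.predict(image[None], verbose=0)[0, class_idx]
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heatmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            y, x = i * stride, j * stride
            occluded[y:y + patch, x:x + patch, :] = 0.5  # gray fill; assumes [0, 1] inputs
            score = model.predict(occluded[None], verbose=0)[0, class_idx]
            heatmap[i, j] = base - score  # larger drop = more important region
    return heatmap
```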
Visualizing the features captured by Convolutional Neural Networks (CNNs) is one of the conventional approaches to interpreting the predictions these models make in numerous image recognition applications. Grad-CAM is a popular solution that provides …
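For reference, Grad-CAM's core computation is short: take the gradients of the class score with respect to a late convolutional layer's feature maps, global-average-pool them into channel weights, and pass the weighted sum of the maps through a ReLU. A TensorFlow/Keras sketch follows; the layer name is model-specific (e.g. 'conv5_block3_out' for a Keras ResNet50) and must be supplied by the caller.

```python
import tensorflow as tf

def grad_cam(model, image, class_idx, conv_layer_name):
    """Grad-CAM: channel weights = pooled gradients of the class score
    w.r.t. the chosen conv layer; CAM = ReLU(weighted sum of maps)."""
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None])
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)        # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))        # global-average-pooled gradients
    cam = tf.nn.relu(tf.einsum('bhwc,bc->bhw', conv_out, weights))[0]
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalized coarse heatmap
```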
TL;DR
* We propose a state-of-the-art post-hoc, CNN-specific visual XAI algorithm: SISE.
* Input: a test image and the trained model.
* Output: a visual 2D heatmap.
* Properties: noise-free, high-resolution, class-discriminative, and correlated with the model's prediction.
As an emerging field in Machine Learning, Explainable AI (XAI) has shown remarkable performance in interpreting the decisions made by Convolutional Neural Networks (CNNs). To achieve visual explanations for CNNs, methods based on class …
Vision sensor data (RGB and depth) collected from the semi-humanoid robot ‘Pepper’, provided by the IATSL laboratory, are used to perform 3D human detection and tracking in a household setup, enabling better assistance to elderly or sick adults in home care.
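As an illustrative fragment of the RGB-D-to-3D step (a generic pinhole back-projection; the intrinsics fx, fy, cx, cy and the median-depth heuristic are assumptions, not necessarily Pepper's calibration or this project's exact pipeline):

```python
import numpy as np

def bbox_to_3d(depth, bbox, fx, fy, cx, cy):
    """Back-project the center of a 2D person detection into 3D camera
    coordinates using the depth map and pinhole intrinsics."""
    x0, y0, x1, y1 = bbox
    u, v = (x0 + x1) // 2, (y0 + y1) // 2         # detection center (pixels)
    z = float(np.nanmedian(depth[y0:y1, x0:x1]))  # robust depth over the box (meters)
    x = (u - cx) * z / fx                         # pinhole back-projection
    y = (v - cy) * z / fy
    return np.array([x, y, z])                    # position in the camera frame
```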
Breast cancer classification on Keras (based on the implementation of the CancerNet algorithm by Adrian Rosebrock [1]).
Breast cancer is the most common form of cancer in women, and Invasive Ductal Carcinoma (IDC) is the most common form of breast cancer.
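A compact Keras sketch in the spirit of CancerNet [1], which stacks depthwise-separable convolutions with batch normalization, pooling, and dropout; the layer counts and sizes below are simplified assumptions rather than the reference implementation.

```python
from tensorflow.keras import layers, models

def build_cancernet(width=48, height=48, depth=3, classes=2):
    """Simplified CancerNet-style CNN for IDC patch classification."""
    model = models.Sequential([
        layers.Input(shape=(height, width, depth)),
        layers.SeparableConv2D(32, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.SeparableConv2D(64, 3, padding='same', activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D(2),
        layers.Dropout(0.25),
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dropout(0.5),
        layers.Dense(classes, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```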
NEU_XAI: Developed and studied XAI algorithms that generate saliency maps according to the importance of each pixel of the input test image to the Machine Learning model's prediction, with the aim of decoding complex black-box models.
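One standard way to connect a saliency map to the model's predictive behavior is a deletion-style check: remove pixels from most to least salient and track how quickly the class score falls. The sketch below is a generic illustration (function name, step count, and zero-fill are assumptions), assuming the saliency map has been upsampled to the input resolution.

```python
import numpy as np

def deletion_curve(model, image, saliency, class_idx, steps=20):
    """Zero out pixels from most to least salient and record the target
    class score; a faster drop suggests a more faithful saliency map."""
    h, w, c = image.shape
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    flat = image.reshape(h * w, c)
    per_step = max(1, (h * w) // steps)
    scores = []
    for k in range(0, h * w + 1, per_step):
        perturbed = flat.copy()
        perturbed[order[:k]] = 0.0              # delete the top-k pixels
        pred = model.predict(perturbed.reshape(1, h, w, c), verbose=0)
        scores.append(float(pred[0, class_idx]))
    return scores
```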