Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation

Abstract

As an emerging field in Machine Learning, Explainable AI (XAI) has shown remarkable performance in interpreting the decisions made by Convolutional Neural Networks (CNNs). To achieve visual explanations for CNNs, methods based on class activation mapping and randomized input sampling have gained great popularity. However, the attribution methods based on these techniques produce low-resolution and blurry explanation maps that limit their explanatory power. To circumvent this issue, we seek visualizations derived from multiple layers of the model. In this work, we collect visualization maps from multiple layers of the model using an attribution-based input sampling technique and aggregate them into a fine-grained and complete explanation. We also propose a layer-selection strategy that applies to the whole family of CNN-based models, based on which our framework visualizes the last layer of each convolutional block of the model. Moreover, we perform an empirical analysis of how effectively the derived lower-level information enhances the represented attributions. Comprehensive experiments on shallow and deep models trained on natural and industrial datasets, using both ground-truth-based and model-truth-based evaluation metrics, validate our proposed algorithm: it meets or outperforms state-of-the-art methods in explanation ability and visual quality, and it remains stable regardless of the size of the objects or instances to be explained.
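
The block-wise layer-selection strategy described above can be illustrated with a short sketch. This is not the authors' official implementation; it only shows how one might hook the last layer of each convolutional block to collect multi-layer feature maps. The model and block names (`layer1`..`layer4`) are torchvision's ResNet-50 identifiers, used here purely for illustration.

```python
# Minimal sketch: record the output of the last layer of each convolutional
# block of a ResNet-50 via forward hooks (hypothetical, for illustration only).
import torch
from torchvision.models import resnet50

model = resnet50().eval()  # random weights; load trained weights in practice

# One entry per convolutional block; hooking the block captures the output
# of its last layer.
target_layers = [model.layer1, model.layer2, model.layer3, model.layer4]

feature_maps = {}

def make_hook(name):
    def hook(module, inputs, output):
        feature_maps[name] = output.detach()
    return hook

for i, layer in enumerate(target_layers, start=1):
    layer.register_forward_hook(make_hook(f"block{i}"))

x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
with torch.no_grad():
    model(x)

for name, fmap in feature_maps.items():
    print(name, tuple(fmap.shape))  # e.g. block1 (1, 256, 56, 56)
```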

Overview

A global overview of our proposed SISE framework, describing its multiple phases, is shown in the figure below. For a detailed walkthrough of each phase, along with a summary of our algorithm, please read the technical blog post.

Fig: Global overview of the proposed framework
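
To give a feel for the final phase, here is a simplified, hypothetical sketch of block-wise aggregation: each block's 2D visualization map is upsampled to the input resolution, normalized, and fused into one explanation map. The paper's fusion module is more elaborate (a cascaded combination of block-wise maps); simple averaging is used below only to illustrate the aggregation step.

```python
# Hypothetical aggregation sketch: upsample per-block saliency maps to the
# input size, normalize each to [0, 1], and fuse them (here, by averaging).
import torch
import torch.nn.functional as F

def normalize(m: torch.Tensor) -> torch.Tensor:
    m = m - m.min()
    return m / (m.max() + 1e-8)

def aggregate(block_maps, out_size=(224, 224)) -> torch.Tensor:
    """block_maps: list of (H_i, W_i) saliency maps, one per block."""
    resized = [
        F.interpolate(m[None, None], size=out_size,
                      mode="bilinear", align_corners=False)[0, 0]
        for m in block_maps
    ]
    fused = torch.stack([normalize(m) for m in resized]).mean(dim=0)
    return normalize(fused)

# Example with random stand-in maps at typical ResNet block resolutions.
maps = [torch.rand(56, 56), torch.rand(28, 28),
        torch.rand(14, 14), torch.rand(7, 7)]
explanation = aggregate(maps)
print(explanation.shape)  # torch.Size([224, 224])
```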

Results

A sample qualitative comparison of the explanation maps generated by our algorithm with those of other conventional methods is shown below. For more qualitative results, please see our preprint paper.

Fig: Comparison of existing XAI methods with SISE (last column), demonstrating SISE's ability to generate class-discriminative explanations on a ResNet-50 model.

Cite

If you find our work useful in your research, please consider citing it as follows:

@article{Sattarzadeh_Sudhakar, 
  title={Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation}, 
  volume={35}, 
  url={https://ojs.aaai.org/index.php/AAAI/article/view/17384}, 
  number={13}, 
  journal={Proceedings of the AAAI Conference on Artificial Intelligence}, 
  author={Sattarzadeh, Sam and Sudhakar, Mahesh and Lem, Anthony and Mehryar, Shervin and Plataniotis, Konstantinos N and Jang, Jongseong and Kim, Hyunwoo and Jeong, Yeonjeong and Lee, Sangmin and Bae, Kyunghoon}, 
  year={2021}, 
  month={May}, 
  pages={11639-11647} 
}


Our paper was presented virtually at the 35th AAAI Conference on Artificial Intelligence (AAAI-21).

All figures and content published here are owned by the authors at the Multimedia Laboratory at the University of Toronto and LG AI Research.
