Counter-CAM: An Improved Grad-CAM-based Visual Explainer for Infrared Breast Cancer Classification
Format: Conference Proceeding
Language: English
Summary: Understanding the decision-making process of machine learning models is essential for establishing confidence and interpretability. Counter-CAM combines the power of Grad-CAM and counterfactual explanations to provide intuitive and comprehensive insights into model decisions. Counter-CAM enables end users to comprehend why a model makes particular predictions by displaying the critical regions and their modifications in counterfactual images. Initially, we use Grad-CAM to generate heatmaps that highlight the regions of an input image most important to the model's prediction. These heatmaps provide localized explanations but cannot show how modifying those regions would change the model's conclusion. To overcome this limitation, we incorporate counterfactual explanations, which demonstrate how minor image changes can lead to a different model prediction. By superimposing Grad-CAM heatmaps on counterfactual images, we generate Counter-CAM. This visual representation juxtaposes the significance of regions in the original image with their influence on the model's decision in the counterfactual image. We validate Counter-CAM on multiple datasets, demonstrating its capacity to provide intuitive, visual explanations for model predictions. Counter-CAM improves the interpretability and explainability of machine learning models, enabling end users to understand and trust the decision-making process.
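The superimposition step the abstract describes — blending a normalized Grad-CAM heatmap onto a counterfactual image — can be sketched in NumPy. This is a minimal illustrative sketch, not the paper's implementation: the function names, the red/blue coloring, and the blending weight `alpha` are assumptions, and a real pipeline would obtain `cam` from the gradients of a trained classifier rather than a toy array.

```python
import numpy as np

def normalize_heatmap(cam: np.ndarray) -> np.ndarray:
    """Rescale a raw Grad-CAM activation map to [0, 1]."""
    cam = np.maximum(cam, 0)  # ReLU: keep only positively contributing regions
    rng = cam.max() - cam.min()
    return (cam - cam.min()) / rng if rng > 0 else np.zeros_like(cam)

def overlay(counterfactual: np.ndarray, cam: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend a heatmap onto a counterfactual RGB image (H x W x 3, values in [0, 1]).

    High-activation pixels are tinted red, low-activation pixels blue,
    so critical regions stand out against the counterfactual image.
    """
    heat = normalize_heatmap(cam)[..., None]  # H x W x 1
    heat_rgb = np.concatenate([heat, np.zeros_like(heat), 1 - heat], axis=-1)
    return (1 - alpha) * counterfactual + alpha * heat_rgb

# Toy example: a flat gray 4x4 counterfactual image with one "critical" pixel.
img = np.full((4, 4, 3), 0.5)
cam = np.zeros((4, 4))
cam[1, 1] = 1.0  # region the model's prediction hinges on
out = overlay(img, cam)
```

In the toy example, the critical pixel at `(1, 1)` ends up redder than the background, making the region's influence visible on the counterfactual image.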
ISSN: 2325-9418
DOI: 10.1109/INDICON59947.2023.10440898