Conference paper, 2023

Natural Example-Based Explainability: a Survey


Explainable Artificial Intelligence (XAI) has become increasingly significant for improving the interpretability and trustworthiness of machine learning models. While saliency maps have stolen the show for the last few years in the XAI field, their ability to reflect models' internal processes has been questioned. Although less in the spotlight, example-based XAI methods have continued to improve. These encompass methods that use examples as explanations for a machine learning model's predictions. This aligns with the psychological mechanisms of human reasoning and makes example-based explanations natural and intuitive for users to understand. Indeed, humans learn and reason by forming mental representations of concepts based on examples. This paper provides an overview of the state of the art in natural example-based XAI, describing the pros and cons of each approach. A "natural" example simply means that it is directly drawn from the training data without involving any generative process. The exclusion of methods that require generating examples is justified by the need for plausibility, which is, in some regards, required to gain a user's trust. Consequently, this paper will explore the following families of methods: similar examples, counterfactuals and semi-factuals, influential instances, prototypes, and concepts. In particular, it will compare their semantic definitions, their cognitive impact, and their added value. We hope it will encourage and facilitate future work on natural example-based XAI.
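To make the "similar examples" family concrete: a minimal sketch of the idea is to explain a prediction by retrieving the training points closest to the query in some feature space. The function name `similar_examples`, the Euclidean metric, and the toy data below are illustrative assumptions, not the paper's method; the survey covers many more refined variants.

```python
import numpy as np

def similar_examples(x, X_train, k=3):
    # Illustrative sketch: indices of the k training points closest
    # to the query x under plain Euclidean distance.
    dists = np.linalg.norm(X_train - x, axis=1)
    return np.argsort(dists)[:k]

# Toy training set and a query point (hypothetical data)
X_train = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]])
x = np.array([0.0, 0.1])
idx = similar_examples(x, X_train, k=2)
print(idx.tolist())  # the two nearest training examples serve as the explanation
```

Because the returned examples are drawn directly from the training data, the explanation is "natural" in the paper's sense: no generative model is involved, so plausibility comes for free.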
Main file: Natural_Example_Based_Explainability__a_Survey - preprint.pdf (4.61 MB)
Origin: Files produced by the author(s)
License: CC BY - Attribution

Dates and versions

hal-04117520, version 1 (05-06-2023)
hal-04117520, version 2 (28-09-2023)

  • HAL Id: hal-04117520, version 1


Antonin Poché, Lucas Hervier, Mohamed-Chafik Bakkay. Natural Example-Based Explainability: a Survey. World Conference on eXplainable Artificial Intelligence, Jul 2023, Lisbon, Portugal. ⟨hal-04117520v1⟩
142 views, 149 downloads

