A generic and adaptive approach to explainable AI in autonomic systems : the case of the smart home

Abstract: Smart homes are Cyber-Physical Systems in which various components cooperate to fulfill high-level goals such as user comfort or safety. These autonomic systems can adapt at runtime without requiring human intervention. This adaptation is hard for the occupant to understand, which can hinder the adoption of smart home systems. Since the mid-2010s, explainable AI has been a topic of interest, aiming to open the black box of complex AI models. The difficulty of explaining autonomic systems does not come from the intrinsic complexity of their components, but rather from their self-adaptation capability, which leads to changes of configuration, logic, or goals at runtime. In addition, the diversity of smart home devices makes the task harder.

To tackle this challenge, we propose to add an explanatory system to the existing smart home autonomic system, whose task is to observe the various controllers and devices in order to generate explanations. We define six goals for such a system:

  1. To generate contrastive explanations in unexpected or unwanted situations.
  2. To generate a shallow chain of reasoning, whose elements are closely causally related to each other.
  3. To be transparent, i.e., to expose its entire reasoning and which components are involved.
  4. To be self-aware, integrating its reflective knowledge into the explanation.
  5. To be generic and able to adapt to diverse components and system architectures.
  6. To preserve privacy and favor locality of reasoning.

Our proposed solution is an explanatory system in which a central component, named the "Spotlight", implements an algorithm named D-CAS. This algorithm identifies three elements in an explanatory process: conflict detection via observation interpretation, conflict propagation via abductive inference, and simulation of possible consequences. All three steps are performed locally, by Local Explanatory Components which are sequentially interrogated by the Spotlight. Each Local Component is paired with an autonomic device or controller and acts as an expert in the related knowledge domain. This organization enables the addition of new components, integrating their knowledge into the overall system without the need for reconfiguration. We illustrate this architecture and algorithm in a proof-of-concept demonstrator that generates explanations in typical use cases.

We design Local Explanatory Components as generic platforms that can be specialized by adding modules with predefined interfaces. This modularity enables the integration of various techniques for abduction, interpretation, and simulation. Our system aims to handle unusual situations in which data may be scarce, making abduction methods based on past occurrences inoperable. We propose a novel approach: to estimate the memorability of events and use the most memorable ones as relevant hypotheses for a surprising phenomenon. Our high-level approach to explainability aims to be generic and paves the way towards systems integrating more advanced modules, guaranteeing smart home explainability. The overall method can also be used for other Cyber-Physical Systems.
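
The record contains no code, but the D-CAS organization lends itself to a short sketch. The following Python sketch illustrates, under stated assumptions, how a central Spotlight might sequentially interrogate Local Explanatory Components through the three steps (conflict detection, abductive propagation, simulation). The names Spotlight, D-CAS, and Local Explanatory Component come from the abstract; everything else (Event, detect_conflict, abduce, simulate, the toy ThermostatComponent) is a hypothetical illustration, not the thesis's actual API.

from dataclasses import dataclass


@dataclass
class Event:
    """An observation reported by a device or controller (illustrative)."""
    source: str
    description: str


class LocalExplanatoryComponent:
    """Generic platform paired with one autonomic device or controller.

    The three D-CAS steps are exposed as predefined interfaces so that
    concrete modules (interpretation, abduction, simulation) can be
    plugged in without reconfiguring the rest of the system.
    """

    def __init__(self, name: str):
        self.name = name

    def detect_conflict(self, event: Event) -> bool:
        """Interpretation: does this observation conflict with local goals?"""
        raise NotImplementedError

    def abduce(self, event: Event) -> list[Event]:
        """Abduction: propose local causes that could explain the event."""
        raise NotImplementedError

    def simulate(self, cause: Event) -> list[Event]:
        """Simulation: predict further local consequences of a cause."""
        raise NotImplementedError


class Spotlight:
    """Central component that sequentially interrogates local components."""

    def __init__(self, components: list[LocalExplanatoryComponent]):
        self.components = components

    def explain(self, surprising: Event) -> list[str]:
        """Chain the three D-CAS steps into a causal explanation."""
        chain: list[str] = []
        frontier = [surprising]
        seen: set[str] = set()
        while frontier:
            event = frontier.pop()
            if event.description in seen:  # avoid revisiting the same event
                continue
            seen.add(event.description)
            for comp in self.components:
                if not comp.detect_conflict(event):
                    continue
                for cause in comp.abduce(event):
                    chain.append(f"{comp.name}: '{cause.description}'"
                                 f" may explain '{event.description}'")
                    frontier.append(cause)          # propagate the conflict
                    frontier.extend(comp.simulate(cause))
        return chain


class ThermostatComponent(LocalExplanatoryComponent):
    """Toy component: explains a cold room by an open window."""

    def detect_conflict(self, event: Event) -> bool:
        return "cold" in event.description

    def abduce(self, event: Event) -> list[Event]:
        return [Event("window_sensor", "window left open")]

    def simulate(self, cause: Event) -> list[Event]:
        return []  # no further consequences in this toy example


spotlight = Spotlight([ThermostatComponent("thermostat")])
print(spotlight.explain(Event("occupant", "living room is cold")))
# -> ["thermostat: 'window left open' may explain 'living room is cold'"]

The design choice the abstract emphasizes shows up in the interface: adding a device means pairing it with one more component, with no change to the Spotlight itself.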
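The memorability-based abduction is likewise only described at a high level. Below is a minimal sketch, assuming that memorability decays exponentially with an event's age and grows with its rarity; both are illustrative choices rather than the estimator used in the thesis, and all names are hypothetical.

import math
import time


def memorability(event_time: float, rarity: float,
                 now: float | None = None,
                 half_life: float = 3600.0) -> float:
    """Illustrative memorability score: rare, recent events score higher.

    `rarity` in [0, 1] is assumed to come from the local interpretation
    module; the score halves every `half_life` seconds (assumed values).
    """
    now = time.time() if now is None else now
    age = max(0.0, now - event_time)
    return rarity * math.exp(-math.log(2.0) * age / half_life)


def rank_hypotheses(candidates: list[tuple[str, float, float]]) -> list[str]:
    """Order candidate causes, most memorable first.

    Each candidate is a (description, timestamp, rarity) triple.
    """
    return [desc for desc, t, r in
            sorted(candidates, key=lambda c: -memorability(c[1], c[2]))]


now = time.time()
print(rank_hypotheses([
    ("heating schedule changed", now - 7200, 0.3),  # older, fairly common
    ("power outage", now - 600, 0.9),               # recent and rare
]))
# -> ['power outage', 'heating schedule changed']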

https://tel.archives-ouvertes.fr/tel-03721520
Contributor: ABES STAR
Submitted on: Tuesday, July 12, 2022 - 4:53:53 PM
Last modification on: Wednesday, July 13, 2022 - 9:49:19 AM

File

113859_HOUZE_2022_archivage.pd...
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-03721520, version 1

Citation

Etienne Houzé. A generic and adaptive approach to explainable AI in autonomic systems : the case of the smart home. Ubiquitous Computing. Institut Polytechnique de Paris, 2022. English. ⟨NNT : 2022IPPAT022⟩. ⟨tel-03721520⟩
