Extracting Explanations from Neural Networks
Abstract
The use of neural networks remains difficult in many application areas due to the lack of explanation facilities (the "black box" problem). One such application is multiple criteria decision making (MCDM), applied to location problems with environmental impact, although the concepts and methods presented are also applicable to other problem domains. We show how to extract explanations from neural networks in a form that is easily understandable to the user. The explanations obtained may in many cases be even better than those of expert systems. The INKA network presented in this paper is well suited to MCDM problems, while also having properties that simplify the extraction of explanations compared with most other neural networks.