Ethical Judgment of Agents’ Behaviors in Multi-Agent Systems
Abstract
The increasing use of multi-agent technologies in various areas raises the need to design agents able to judge the ethical dimension of behaviors in context. Several works have therefore integrated ethical concepts into agents' decision-making processes. However, these approaches mainly adopt an agent-centered perspective, setting aside the fact that agents interact with other artificial agents or with human beings who may rely on different ethical concepts. In this article, we address the problem of producing ethical behaviors from a multi-agent perspective. To this end, we propose a model of ethical judgment that an agent can use to judge the ethical dimension of both its own behavior and the behaviors of other agents. This model is based on a rationalist and explicit approach that distinguishes the theory of the good from the theory of the right. A proof of concept, implemented in Answer Set Programming and based on a simple scenario, illustrates these functionalities.
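
As a minimal sketch of the kind of Answer Set Programming encoding such a proof of concept can rely on, the toy program below (clingo syntax) separates a theory of the good, which morally values actions, from a theory of the right, which selects among feasible actions. The predicates action/1, moral/2, feasible/1 and rightful/1 are hypothetical names chosen for illustration and do not reproduce the paper's actual rule base.

    % Hypothetical toy encoding (clingo syntax); predicate names are
    % illustrative only, not the authors' actual model.

    % Actions available in a toy scenario.
    action(tell_truth). action(lie).

    % Theory of the good: moral valuation of actions.
    moral(tell_truth, good).
    moral(lie, bad).

    % Theory of the right: an ethical principle keeping, among the
    % feasible actions, those that are morally valued as good.
    feasible(A) :- action(A).
    rightful(A) :- feasible(A), moral(A, good).

Running such a program with an ASP solver (e.g., clingo) derives rightful(tell_truth) but not rightful(lie), showing how the two theories can be kept explicit and separate in the encoding.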