A systematic review of moral agency in artificial agents

Published 16 February 2026. Updated 16 February 2026.

Aloysius Y. F. Tok, Nina L. Powell & Bahia Guellaï (CLLE)

This review examines the factors that drive Perceptions of Moral Agency (PMA) in artificial agents (e.g., AI, drones, robots). Although a direct measure of PMA was only recently developed and has therefore seen little use, we highlight several other indicators (e.g., blame) that may signal its presence. Insights from 36 studies show that various characteristics of artificial agents, such as human-like behaviours, promote PMA or may serve as its precursor. These findings suggest that agent-based traits of machines (e.g., appearance, modality of presentation) and human-based factors (e.g., age) may affect PMA in tandem. Furthermore, the cultural settings and contexts (e.g., military or other high-stakes scenarios) in which these agents are embedded also shape these perceptions. The paper concludes with a discussion of real-world implications for social robotics, and of how the reviewed papers paint an overall picture of machines being attributed a form of “incomplete” or “partial” moral agency.

https://doi.org/10.1007/s44202-025-00537-y
Publisher: Springer Nature Link