Article on Meaningful Human Control published in AI and Ethics journal

How can humans remain in control of AI-based systems designed to perform tasks autonomously? Such systems are increasingly ubiquitous, creating benefits, but also undesirable situations in which moral responsibility for their actions cannot be properly attributed to any particular person or group. The concept of meaningful human control has been proposed to address these responsibility gaps by establishing conditions under which responsibility can be properly attributed to humans. However, translating this concept into concrete design and engineering practice is far from trivial.

In their paper, "Meaningful human control: actionable properties for AI system development", researchers from the AiTech interdisciplinary program address the gap between philosophical theory and design and engineering practice by identifying four actionable properties for human-AI systems under meaningful human control.
