Ethical Agents: An ALife X Workshop

Chairs: Colin Allen, Wendell Wallach, and Michael Brady


A major goal of ALife is the development of increasingly autonomous agents, whether for virtual environments or real-world applications. With increasing autonomy comes the potential for conflict among the values of the people who deploy such agents, and perhaps between the values of people and those of the artificial agents themselves. The trajectory of ALife thus leads straight to ethics, a point recognized by Roz Picard (1997) when she wrote, "The greater the freedom of a machine, the more it will need moral standards." Rather than fretting about dystopian futures, researchers should think seriously and concretely about how artificial autonomous agents can be developed as ethical agents.

Currently this topic is being addressed in piecemeal fashion, with approaches ranging from bottom-up evolutionary game theory to top-down implementation of ethical theory. The objective of the proposed workshop is to bring together researchers from a variety of disciplines, each of whom may hold a piece of the puzzle, to educate one another about the strengths and limitations of the approaches being taken, and to develop ideas for collaborative projects. The disciplines to be represented are machine learning and reasoning, affective computing, android science and robotics, social psychology, philosophy of mind, applied ethics, and evolutionary and game-theoretic modeling.
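
To make the bottom-up end of this spectrum concrete, the following minimal sketch (purely illustrative, not drawn from any workshop contribution, with payoff values assumed for the example) applies replicator dynamics to a one-shot Prisoner's Dilemma. Cooperation is driven to extinction, which illustrates why purely evolutionary accounts of ethical behavior must appeal to further mechanisms such as reciprocity or reputation:

    # Replicator dynamics for a one-shot Prisoner's Dilemma (illustrative only).
    # Payoff matrix: rows = own strategy, columns = opponent's (C = 0, D = 1).
    PAYOFF = [[3.0, 0.0],   # cooperate: mutual cooperation 3, exploited 0
              [5.0, 1.0]]   # defect: temptation 5, mutual defection 1

    def step(x, dt=0.01):
        """One Euler step; x is the fraction of cooperators in the population."""
        fc = PAYOFF[0][0] * x + PAYOFF[0][1] * (1.0 - x)  # cooperator fitness
        fd = PAYOFF[1][0] * x + PAYOFF[1][1] * (1.0 - x)  # defector fitness
        avg = x * fc + (1.0 - x) * fd                     # mean population fitness
        return x + dt * x * (fc - avg)                    # replicator equation

    x = 0.9  # start with 90% cooperators
    for _ in range(2000):
        x = step(x)
    print(f"cooperator share after 2000 steps: {x:.4f}")  # approaches 0

The same scaffolding extends naturally to repeated games and richer strategy spaces that can sustain reciprocity, which is where this style of modeling connects with the workshop's questions about how ethical behavior might emerge in artificial agents.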