Ad Hoc Teamwork and Moral Feedback as a Framework for Safe Robot Behavior
As technology develops, it is only a matter of time before agents become capable of long-term, general-purpose autonomy, i.e., of choosing their own actions over extended periods of time. In many cases, such agents cannot be coordinated in advance with every other agent they may encounter; instead, they must cooperate to accomplish unanticipated joint goals without pre-coordination. As a result, the ``ad hoc teamwork'' problem, in which teammates must work together to achieve a common goal without any prior agreement on how to do so, has emerged as an active area of study in the AI literature. To date, however, no attention has been dedicated to the moral aspects of the agents' behavior. In this research, we introduce M-TAMER, a novel variant of the TAMER framework, designed to teach agents human notions of morality. In a hybrid team of agents and humans, an agent that takes an action considered morally bad receives negative feedback from its human teammate(s). Using M-TAMER, agents can develop an ``inner conscience'' that enables them to act consistently with human morality.
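The abstract does not specify M-TAMER's learning rule. As context for how TAMER-style learning from human feedback typically works, the sketch below is a minimal, illustrative Python implementation under assumed conventions: moral feedback is encoded as a scalar signal (e.g., -1 when a human teammate flags an action as morally bad, 0 otherwise), and the class and method names (MTamerAgent, act, give_feedback) are hypothetical, not the paper's API.

```python
import random
from collections import defaultdict


class MTamerAgent:
    """Minimal TAMER-style learner (illustrative sketch only).

    Fits a model H(s, a) of the human (moral) feedback signal and acts
    greedily with respect to it. The update rule, feedback encoding,
    and all names here are assumptions; the paper does not specify
    M-TAMER's internals.
    """

    def __init__(self, actions, lr=0.1, epsilon=0.1):
        self.actions = actions
        self.lr = lr                  # step size for the feedback model
        self.epsilon = epsilon        # exploration rate
        self.H = defaultdict(float)   # predicted human feedback per (s, a)

    def act(self, state):
        # Mostly choose the action predicted to draw the least moral censure.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.H[(state, a)])

    def give_feedback(self, state, action, signal):
        # signal: assumed encoding, e.g. -1 if a human teammate flags the
        # action as morally bad, 0 otherwise.
        error = signal - self.H[(state, action)]
        self.H[(state, action)] += self.lr * error
```

Under these assumptions, the learned feedback model H would play the role of the ``inner conscience'': actions that repeatedly drew negative moral feedback receive low predicted values and are subsequently avoided.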