Do Robots Need Behavioral ‘Laws’ For Interacting With Other Robots?

siddesu writes: Asimov's three laws of robotics say nothing about how robots should treat each other. The common fear is that robots will turn against humans — but what happens if we don't build systems to keep them from coming into conflict with one another? The article argues, "Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations' Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering. National and international technological policies should introduce AIonAI concepts into current programs aimed at developing safe AIs."