siddesu writes: Asimov’s three laws of robotics say nothing about how robots should treat each other. The common fear is that robots will turn against humans. But what happens if we don’t build systems to keep them from conflicting with each other? The article argues, “Scientists, philosophers, funders and policy-makers should go a stage further and consider robot–robot and AI–AI interactions (AIonAI). Together, they should develop a proposal for an international charter for AIs, equivalent to that of the United Nations’ Universal Declaration of Human Rights. This could help to steer research and development into morally considerate robotic and AI engineering. National and international technological policies should introduce AIonAI concepts into current programs aimed at developing safe AIs.”
Michael Moritz, chairman of Silicon Valley VC firm Sequoia Capital, has a warning for the tech industry.
“There are a whole bunch of crazy little companies that will disappear. There are a considerable number of unicorns that will become extinct,” Moritz said in an interview with The Times of London. “There are also a good number that will flourish.”