
Scientists develop official guidance on robot ethics

Was Asimov onto something with the Three Laws of Robotics?
By Ryan Whitwam
It was decades ago that science fiction great Isaac Asimov imagined a world in which robots were commonplace. This was long before even the most rudimentary artificial intelligence existed, so Asimov created a basic framework for robot behavior called the Three Laws of Robotics. These rules ensure that robots serve humanity and not the other way around. Now the British Standards Institution (BSI) has issued its own version of the Three Laws. It's much longer and not quite as snappy, though.

Just for reference, in abbreviated form Asimov's laws require robots to preserve human life, obey orders given by humans, and protect their own existence. There are, of course, times when those rules clash. When that happens, the first law always takes precedence, ensuring humans come before robots.

The BSI document was presented at the recent Social Robotics and AI conference in Oxford as an approach to embedding ethical risk assessment in robots. As you can imagine, it's more complicated than Asimov's laws written into the fictional positronic brain, but it works from a similar premise. "Robots should not be designed solely or primarily to kill or harm humans," the document reads. It also stresses that humans are responsible for the actions of robots, and in any instance where a robot has not acted ethically, it should be possible to find out which human was responsible.

According to the BSI, the best way to make sure people are accountable for what their robots do is to make AI design transparent. That might be a lot harder than it sounds. Even if the code governing a robot is freely accessible, that doesn't guarantee we can ever know why it does what it does. In the case of neural networks, outputs and decisions are the product of deep learning. There's nothing in the network you can point to that governs a certain outcome the way you can with programmatic code. If a deep learning AI used in law enforcement started displaying racist behavior, it might not be easy to figure out why. You'd just have to retrain it.

Going beyond the design of AI, the BSI report speculates on larger questions, like forming emotional bonds with robots. Is it okay to love a robot? There's no good answer to that one, but it's definitely an issue we're going to face. And what should happen if we become too dependent on AI? The BSI urges designers not to cut humans out of the loop altogether. If we come to rely on AI to get a job done, we might not notice when its behavior or priorities start delivering sub-optimal results -- or when it starts stockpiling weapons to exterminate humanity.

Now read: IBM's resistive computing could massively accelerate AI — and get us closer to Asimov's Positronic Brain
