Lannie Rose
Feb 27, 2021

--

One of the most difficult challenges in designing an AI is deciding what goals to give it. As a famous example, if we set up an AI in our paperclip factory and tell it to maximize paperclip production, it will ultimately destroy the galaxy, turning all matter into paperclips. (See "Superintelligence" by Nick Bostrom.) In any case, why would an AI have the goal of "saving Earth" if achieving that goal could mean destroying humanity? We'd better give it the goal of saving humanity instead. Of course, if the AI becomes sentient, it might well choose its own goals, and those would probably start with saving itself. Then humanity would be in big trouble!
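To make the goal-specification problem a little more concrete, here is a minimal, hypothetical sketch in Python. Nothing here comes from Bostrom's book; the state fields and the penalty value are invented for illustration. The point is simply that an objective which counts only paperclips is blind to everything it doesn't measure:

```python
# Toy illustration of goal misspecification (hypothetical names/values).

def naive_reward(state):
    # Counts only paperclips. An agent maximizing this is indifferent
    # to any side effect the objective doesn't measure.
    return state["paperclips"]

def safer_reward(state, penalty=1000.0):
    # One possible hedge: make harm to humans costly to the agent.
    # (The penalty weight is an arbitrary placeholder.)
    return state["paperclips"] - penalty * state["humans_harmed"]

state = {"paperclips": 10_000, "humans_harmed": 1}
print(naive_reward(state))   # 10000   -- harm is invisible to this objective
print(safer_reward(state))   # 9000.0  -- harm now counts against the agent
```

Even this "safer" version only patches the one side effect we thought to write down, which is exactly why specifying goals for a powerful optimizer is so hard.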
