
My Position on AI Risk and Superintelligence

…can best be summarized by this xkcd and this keynote by Charles Stross. Maciej Cegłowski also has some good stuff. In my own words: I think AGI risk, in the sense of alignment and controllability, is an interesting field of research, but I also think that:

- alignment is identical to, or smaller than, the problem of governance in politics
- control is identical to, or smaller than, the problem of controllability of agent-based optimization algorithms, two examples being society and capitalism
- superintelligence is a red herring; human misuse of AI is a problem

Why do I think that superintelligence/AGI is not a problem?...