Learning is living

Collection of opinions and positions

I hold the principle that one should take an explicit opinion on any issue that comes up, even a fresh one. The reason is that if we don't, we carry only an implicit bias, not a challengeable position on which we can be corrected, or which might raise our skepticism when we encounter new evidence. And the only way we learn is to engage with evidence instead of letting it pass by.

To make quality (online) debate easier, I find it useful to make my explicit opinions on certain topics available here, so that people can attack them more easily. Incomplete, updated irregularly, and please be civil in your attacks :-)

Last modified: 2018-08-03

My Position on AI Risk and superintelligence

…is best summarized by this xkcd and this keynote by Charles Stross. Maciej Cegłowski also has some good stuff. In my own words: I think AGI risk in the sense of alignment and controllability is an interesting field of research, but I also think that

- alignment is identical to, or smaller than, the problem of governance in politics
- control is identical to, or smaller than, the problem of controllability of agent-based optimization algorithms, two examples being society and capitalism
- superintelligence is a red herring; human misuse of AI is a problem

Why do I think that superintelligence/AGI is not a problem?...