I wrote a two-part blog post trying to give people an intuitive feel for how much heuristics and approximation matter, and how this connects to my opinions about AGI/superintelligence risk.

I’m not totally happy with the second part, where I try for the first time to condense thoughts that have been swirling around in my head for a while now. But hey, I can always rewrite it. I’m sure the internet would never be vicious to me about not getting things perfect on the first try.

Anyway, here is the link to the nbviewer. I hope it is somewhat stimulating, and I look forward to any feedback via the contact details in the footer :-)