can best be summarized by this xkcd and this keynote by Charles Stross. Maciej Cegłowski also has some good stuff.

In my own words:

  • I think AGI risk in the sense of alignment and controllability is an interesting field of research, but I also think that
    • alignment is a problem identical to, or smaller than, the problem of governance in politics
    • control is a problem identical to, or smaller than, the controllability of agent-based optimization algorithms, two examples being society and capitalism
    • superintelligence is a red herring, while human misuse of AI is a real problem
  • Why do I think that superintelligence/AGI is not a problem?
    • silicon AI is currently limited mainly by platform and physics, not by algorithms, and the exponential growth in compute has been bought with ever larger capital investment
    • there are also inherent physical limits to computation (see the back-of-the-envelope calculation after this list)
    • we are thus either WAY past “takeoff” (if you don’t constrain yourself to silicon AI) or we do not need to worry about takeoff (if you do), because of the limits just mentioned
  • Why do so many people still worry about AGI? In my opinion, because
    • it is interesting
    • it is less messy than boring old problems like “good governance”, “moral decision making in a diverse world” or “dealing with the balance between capitalism and humanism”
    • it has the allure of a few smart people saving the world from calamity (one of whom might be YOU. Yes, you reading this text!)
    • the tech community sometimes thinks it is less affected by the human need for spirituality and religion, but it is wrong. It thus has a history of techno-utopian pseudo-religions, and this is another incarnation
    • (a point I hold very lightly) you can make a good living as an academic, author, or intellectual by profiting from this type of techno-optimism mixed with techno-fear. I don’t like this reasoning too much because it is a bit ad hominem, but there is some validity to it: it is hard to be critical of something your paycheck depends on. I am convinced most of the proponents are honest believers, though, and I am glad people exist who take this seriously. It’s just important to push back as well because…
    • …I also think it is a convenient red herring, distracting from hard questions we would rather not deal with, like
      • do we want a totally quantified and algorithmified society?
      • if we go through another industrial revolution, can we avoid the suffering that comes from a large chunk of workers (not all of them, just the non-retrainable chunk) becoming “obsolete”?
      • how do we deal with mass surveillance? how do we deal with robot police in authoritarian governments? how do we feel about drone assassinations?
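
To make the “inherent limits” bullet above concrete, here is a back-of-the-envelope calculation based on Landauer’s principle. The physics is standard; the 1 MW power budget is just an assumption I picked for illustration:

```latex
% Landauer's principle: erasing one bit dissipates at least E = k_B T ln 2.
% At room temperature (T = 300 K, k_B ≈ 1.38e-23 J/K):
E_{\min} = k_B T \ln 2 \approx 1.38\times 10^{-23} \cdot 300 \cdot 0.693
         \approx 2.9\times 10^{-21}\ \text{J per bit}

% An assumed 1 MW power budget then caps irreversible computation at
\frac{10^{6}\ \text{W}}{2.9\times 10^{-21}\ \text{J/bit}}
  \approx 3.4\times 10^{26}\ \text{bit erasures per second}
```

No algorithmic cleverness moves this ceiling (short of reversible or non-silicon computing), which is why I treat “takeoff” on silicon as bounded by physics and capital rather than by algorithms.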

Some stuff I do find very worthwhile

  • research on misuse of AI, like the Malicious Use of AI report
  • technical AI safety with a focus on current approaches, i.e. research on how to make current AI more robust and safe against accidents, like the work by fellow EPFL researchers on safe interruptibility
  • research into fair training to combat the entrenchment of biases in AI models
  • the use of adversarial AI as a motivator for solving more complex game-theoretic settings
  • ongoing work on error bounds, differential privacy, interpretability, etc. (a minimal sketch of one classic privacy mechanism follows this list)
  • work to estimate and mitigate the political and economic changes coming from AI, so as to preserve the civil liberties and freedoms we came to enjoy (especially in Western Europe) between 1980 and 2010
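
Since I just pointed at differential privacy, here is the minimal sketch promised above: the classic Laplace mechanism (Dwork et al.). The function name and the toy data are my own illustration, not from any particular library:

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release true_value with epsilon-differential privacy by adding
    Laplace noise of scale sensitivity / epsilon (the classic mechanism).
    Smaller epsilon means more noise and stronger privacy."""
    return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Toy example (data made up): a counting query has sensitivity 1, because
# adding or removing one person changes the count by at most 1.
ages = [23, 35, 41, 29, 52]
true_count = sum(1 for a in ages if a > 30)  # = 3
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {noisy_count:.2f}")
```

The point of the example: privacy here is a provable property of the release mechanism, not a vague promise about the model, which is exactly the kind of concrete, current-systems safety work I find worthwhile.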

More to come

This is it in a nutshell, but I feel I should give some arguments and citations to justify my perspective, so I will update this post over time to add them. In the meantime, this should give you something tangible to argue against if you want to convince or correct me :-)

As promised:

  • Why I am skeptical of the feasibility of runaway superintelligence: The power and necessity of imperfection. Note how this argument strengthens the need for technical AI safety of the kind done by El Mhamdi et al., and for the work on interpretability, since these imperfections would spell danger if a faulty system were put into production