34 people have given 289 responses
We understand very little about intelligence and consciousness today. If they grow exponentially, there is a good chance that risk increases in various substantial ways we cannot understand today
More intelligent beings overcome less intelligent beings
AIs can take over even without being able to solve the halting problem
Even if we have an obligation to create intelligence, it still matters what intelligence is created, and many outcomes can still be bad for humanity’s values.
More intelligent beings can overcome less intelligent beings if both have the same affordances
AIs won’t come to agreement on everything
The halting problem is irrelevant to AI doom
We have an obligation to create intelligence
Most of these points were NOT cruxes for my p(doom)
If there’s even a tiny p(doom), it’s likely extremely high impact to work on, given how neglected the area currently is
Gödel’s incompleteness theorem means AI can’t control everything
AI will overcome the halting problem
The argument for doom is a chain of tightly linked claims, and knocking out any one of them would knock out the argument
It is the doomers who are stacking probabilities, not the "AI is safe" side