57 people have given 497 responses
Risk from competent malign agents - AI agents could be highly competent while wanting different things from humans, which could be bad for us
Expert opinion - Many AI experts are worried about the destructive capabilities of AI
Catastrophic tools - AI might lower the cost or knowledge required to make catastrophic weapons
Loss of control via speed - Additional speed makes many situations more dangerous, and AI is developing very fast
Large impacts suggest large risks - AI's impacts are likely to be very large
Black boxes - We don't understand how AI systems work, so we can't rule out their being very destructive
Second species argument - Apes should have been wary of "inventing" a species more intelligent than themselves, and so should we
AI may produce or accelerate destructive multi-agent dynamics - With AI agents in the mix, there may be no stable equilibrium that is safe for humans
Loss of control via inferiority - Like a child heir to a large fortune who struggles to know which adults to trust, humanity could slowly lose control to more capable AI systems
Humans aren't aligned to one another - Given full control, most people's utopias are incompatible. AI isn't new here, but it gives more power to achieve "utopias" that others might find horrific