viewpoints.xyz

Alex Talk

What do you think of the following statements?

34 people have given 289 responses

We understand very little about intelligence and consciousness today. If they grow exponentially, there is a good chance that risk increases in substantial ways we can’t understand today

Agree 91% · Disagree 9%

More intelligent beings overcome less intelligent beings

Agree 82% · Disagree 7%

AIs can take over even without being able to solve the halting problem

Agree 78% · Disagree 4%

Even if we have an obligation to create intelligence, it still matters what intelligence is created, and many outcomes can still be bad for humanity’s values.

Agree 75% · Disagree 0%

A more intelligent being can overcome a less intelligent being if both beings have the same affordances

Agree 73% · Disagree 0%

AIs won’t come to agreement on everything

Agree 65% · Disagree 4%

The halting problem is irrelevant to AI doom

Agree 64% · Disagree 12%

We have an obligation to create intelligence

Agree 59% · Disagree 28%

Most of these points were NOT cruxes for my p(doom)

Agree 56% · Disagree 6%

If there’s even a tiny p(doom), it’s likely extremely high-impact to work on due to its current neglectedness

Agree 50% · Disagree 38%

Gödel’s incompleteness theorem means AI can’t control everything

Agree 48% · Disagree 28%

AI will overcome the halting problem

Agree 40% · Disagree 20%

The argument for doom is tightly linked, and any one of these points would knock out the argument

Agree 33% · Disagree 22%

It is the doomers who are stacking probabilities, not the "AI is safe" side

Agree 0% · Disagree 0%
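The final statement above, like the earlier one about the doom argument being tightly linked, turns on what "stacking probabilities" does to a case built from many linked steps. As a minimal, hedged sketch, with invented step probabilities that are not taken from the poll, multiplying even fairly confident steps drives the joint probability of a conjunctive argument well below any single step:

```python
# Illustrative only: hypothetical step probabilities, not poll data.
# If a case is a conjunction of independent steps, its overall probability
# is the product of the step probabilities, so "stacking" many steps
# pushes the total down quickly.

def conjunction(step_probs):
    """Joint probability that every step in a chain holds (assuming independence)."""
    total = 1.0
    for p in step_probs:
        total *= p
    return total

# A hypothetical five-step chain argued at 80% confidence per step:
steps = [0.8, 0.8, 0.8, 0.8, 0.8]
print(conjunction(steps))       # ~0.33: the chain is far weaker than any single link
print(1 - conjunction(steps))   # ~0.67: probability that at least one link fails
```

Which side this arithmetic counts against depends on whose case is really the long conjunction, which is exactly what the statement disputes.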