If we survive general artificial intelligence before 2100, what will be the reason?
- We don't build AGI: 38%
- We build AGI, but it is not powerful enough to kill us: 22%
- We build AGI, it is powerful enough to kill us, but it doesn't try: 29%
- Other: 11%

What about, "We build AGI, it is powerful enough to kill us, but it's controlled/overseen, so even in small places where it might try to gain power, it can't do so?" (The Control agenda, for one thing)

bought Ṁ10 NO

Note that it would arguably be "controlled" by other AGI-like systems.

Good question.
I think it should go under "is not powerful enough to kill us".
Us controlling and overseeing it would be one particular reason it can't kill us.

Not "powerful enough" should be understood as "not powerful enough in the context where it is", and not "not powerful enough if it was completely free, or if we didn't become cyborgs, or…"

If AGI doesn't try to kill us, how will you determine whether it is powerful enough to have done so?

Good question; it would be quite hard to determine, except in the extreme cases.
I didn't really think about how to resolve the market.
I admit it isn't great.
Let's say it will be decided by what the experts think when the time comes; if they disagree, the market will resolve to the percentage of experts who think it was powerful enough.
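A minimal sketch of that resolution rule, assuming a simple yes/no poll of experts (the function name and poll format are illustrative, not part of the market's actual mechanics):

```python
def resolve_powerful_enough(expert_votes):
    """expert_votes: list of booleans, True = 'the AGI was powerful enough to kill us'.

    Returns the resolution percentage: 100% if all experts agree it was
    powerful enough, 0% if none do, and the fraction in between otherwise.
    """
    if not expert_votes:
        raise ValueError("need at least one expert opinion to resolve")
    yes = sum(expert_votes)
    return 100 * yes / len(expert_votes)

# Example: 3 of 4 polled experts think it was powerful enough.
print(resolve_powerful_enough([True, True, True, False]))  # 75.0
```

Unanimity resolves the option fully YES or NO; any split resolves it to a partial percentage.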

Do you have another idea?

© Predita Markets, Inc. • Terms of Use • Privacy