Will Inner or Outer AI alignment be considered "mostly solved" first?
Inner: 56% chance
Resolves as declared by the majority consensus across Alignment Forum, Slate Star Codex, LessWrong, MIRI, and my own opinion.
Esta pergunta Ă© gerenciada e resolvida pela Predita.
Related questions
AI honesty #2: by 2027 will we have a reasonable outer alignment procedure for training honest AI?
24% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
51% chance
Will I focus on the AI alignment problem for the rest of my life?
45% chance
How difficult will Anthropic say the AI alignment problem is?
If AI has an okay outcome because of a huge alignment effort, where did AI progress stall out?
Will the 1st AGI solve AI Alignment and build an ASI which is aligned with its goals?
15% chance
If a huge alignment effort is part of the reason for AI having an okay outcome, will it involve a new AI paradigm?
60% chance
Is AI alignment computable?
50% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance