
I make a contribution to AI safety that is endorsed by at least one high-profile AI alignment researcher by the end of 2026
40% chance
For the purposes of this question, any AI alignment researcher who has written a sequence in either the Alignment Forum library or the LessWrong library counts as "high profile".
This question is managed and resolved by Predita.
Related questions
AI safety community successfully advocates for a global AI development slowdown by December 2027
12% chance
Will some piece of AI capabilities research done in 2023 or after be net-positive for AI alignment research?
81% chance
Will non-profit funding for AI safety reach 100 billion US dollars in a year before 2030?
38% chance
Will a Turing Award be given out for work on AI alignment or existential safety by 2040?
79% chance
By end of 2028, will there be a global AI organization, responsible for AI safety and regulations?
40% chance
By 2028, will I believe that contemporary AIs are aligned (posing no existential risk)?
33% chance
Will someone commit violence in the name of AI safety by 2030?
60% chance
Conditional on there being no AI takeoff before 2030, will the majority of AI researchers believe that AI alignment is solved?
34% chance
Conditional on there being no AI takeoff before 2050, will the majority of AI researchers believe that AI alignment is solved?
52% chance
