A survey of 51 leading experts from AGI labs, academia, and civil society that found overwhelming support for many AGI safety and governance practices, including pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming.
Results from a large survey of AI and machine learning (ML) researchers on their beliefs about progress in AI. The survey, fielded in late 2019, elicited forecasts for near-term AI development milestones and for high- or human-level machine intelligence, defined as machines being able to accomplish every, or almost every, task that humans can currently do.
This report, led by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of the risks and benefits of open-sourcing AI. While open-sourcing has, to date, provided substantial net benefits for most software and AI development processes, we argue that for some highly capable models likely to emerge in the near future, the risks of open-sourcing may outweigh the benefits.
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including through their work, advocacy, and choice of employment. Nevertheless, this influential group’s attitudes are not well understood, undermining our ability to discern points of consensus or disagreement among AI/ML researchers. To examine these researchers’ views, we conducted a survey of those who published in two top AI/ML conferences (N = 524).
Countries, companies, and universities are increasingly competing over top-tier artificial intelligence (AI) researchers. Where are these researchers likely to immigrate, and what affects their immigration decisions? We conducted a survey of the immigration preferences and motivations of researchers who had papers accepted at one of two prestigious AI conferences: the Conference on Neural Information Processing Systems (NeurIPS) and the International Conference on Machine Learning (ICML).
Blog post summarizing the survey of 51 leading experts from AGI labs, academia, and civil society described above, which found overwhelming support for many AGI safety and governance practices.