This report synthesises public attitudes towards AI with a focus on the United Kingdom and United States, highlighting emerging trends, challenges, and future opportunities. It draws on academic studies and public polling, and introduces a new resource—the AI Survey Hub for Attitudes and Research Exchange (AI SHARE) database—which aggregates survey data from over 200 studies conducted between 2014 and 2023.
Drawing on a two-wave survey of over 500 local US elected officials conducted in 2022 and 2023, capturing views before and after the release of ChatGPT, our research reveals that while officials see AI as potentially beneficial for US innovation, they increasingly anticipate significant societal risks in the coming decades. Notably, we found growing support for general government regulation of AI among both Democrats and Republicans, though this has not translated into increased backing for specific AI policies, and officials have become more pessimistic about AI’s long-term effects.
A survey of 51 leading experts from AGI labs, academia, and civil society found overwhelming support for many AGI safety and governance practices, including pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming.
Results from a large survey of AI and machine learning (ML) researchers on their beliefs about progress in AI. The survey, fielded in late 2019, elicited forecasts for near-term AI development milestones and for high-level or human-level machine intelligence, defined as machines being able to accomplish every, or almost every, task that humans can currently do.
This report, led by Elizabeth Seger, attempts to clarify open-source terminology and to offer a thorough analysis of the risks and benefits of open-sourcing AI. While open-sourcing has, to date, provided substantial net benefits for most software and AI development processes, we argue that for some highly capable models likely to emerge in the near future, the risks of open-sourcing may outweigh the benefits.
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including through their work, advocacy, and choice of employment. Nevertheless, this influential group’s attitudes are not well understood, undermining our ability to discern consensus or disagreement among AI/ML researchers. To examine these researchers’ views, we conducted a survey of those who published in two top AI/ML conferences (N = 524).
Countries, companies, and universities are increasingly competing over top-tier artificial intelligence (AI) researchers. Where are these researchers likely to immigrate, and what affects their immigration decisions? We conducted a survey of the immigration preferences and motivations of researchers who had papers accepted at one of two prestigious AI conferences: the Conference on Neural Information Processing Systems (NeurIPS) and the International Conference on Machine Learning (ICML).
Blog post summary of a survey of 51 leading experts from AGI labs, academia, and civil society. We found overwhelming support for many AGI safety and governance practices, including pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming.