To steer the world toward better outcomes from AI by untangling confusing topics and promoting constructive compromise.
Or, in a play on Google’s original mission statement: To organize the AI world’s disagreements and make them universally accessible and constructive.
We are entering the AI age. Collectively, we have many questions to address as we navigate this era. How will AI capabilities progress? How will those capabilities impact the economy, the job market, cybersecurity, misinformation, medicine, education, geopolitics, and more? What policy choices will promote innovation and beneficial applications while minimizing harm?
These are complex, multi-disciplinary questions, and the pace of change challenges our intuitions. Everyone is suffering from information overload and gravitating toward oversimplified positions. Public discourse has a Groundhog Day flavor: the same arguments and rebuttals are made over and over again. People start from different assumptions, talk past one another, and grow frustrated, leading to the emergence of tribal camps. Questions of AI policy get framed as a zero-sum tradeoff between progress and safety.
The world is oversupplied with papers, reports, blogs, and tweets, but they all contradict one another, furthering the confusion. Events are moving much too quickly for our normal decision-making processes to keep up – much as we saw in the early days of the Covid pandemic. Amid the confusion, policy decisions are outsourced to the loudest voices or simply mired in deadlock. If we can’t improve the information environment, we are liable to prevent society from realizing the vast potential of AI, stumble into a series of disasters, spark a new arms race, fall behind our international rivals – or, most likely, all of the above at once.
AI Soup is a nonprofit with two focus areas.
Directed discussions: we will host discussions on the big questions around AI, from timelines to policy choices. We will bring together experts spanning a range of fields (AI, economics, policy, biosecurity, cybersecurity, and others), backgrounds (industry, academia, government), and viewpoints. Discussions will be heavily facilitated, with the twin aims of avoiding miscommunication and driving toward clear answers. We do not expect to reach “consensus”; instead, the goal is to elicit the range of reasonable positions and the relevant arguments and evidence, and to identify critical questions for further discussion or research.
Information resource: we will distill the discussions into an accessible overview of each topic. Rather than just throwing another report onto the pile, we will stand out by providing:
Our hope is that each initiative will support the other. Notable participants in our discussions will lend quality and credibility to the information we publish. That will draw an audience, in turn making it easier for us to recruit expert participants.
Why a new organization? In part, simply because no one seems to be doing the work. But also, it’s important that this project not be associated with any one camp in the AI debates. It can’t come from an organization that is tied to EA, big tech, AI safety, or a particular political stance.
Isn’t this just a think tank? There are parallels, but also important differences. We aim to reconcile existing ideas and opinions, rather than injecting our own. We will prioritize presenting our work in an accessible form, promoting it to a wide audience, and keeping it up to date – less like an academic journal and more like Wikipedia.