Gary Marcus, former CEO of Geometric Intelligence and professor emeritus at NYU, and Anka Reuel, a founding member of KIRA and a computer-science PhD student at Stanford University, are renowned experts in AI. They argue that tackling the dangers posed by bias, misinformation, and other potential harms requires prioritizing global governance in the development and deployment of AI technologies.
OpenAI's ChatGPT, a prominent example of these technologies, has gained immense traction as a consumer internet application thanks to its versatility in fields such as education and medicine; its interactive nature also makes it enjoyable to use.
AI Tools Are Double-Edged Swords
Although AI systems showcase impressive achievements, they also carry risks that concern experts. Europol, for example, has warned that AI could significantly amplify cybercrime.
Many AI experts worry that AI-generated misinformation could disrupt the 2024 American presidential election and threaten democracy by eroding trust. Some also speculate about longer-term risks that AI may pose to humanity as a whole.
AI Systems Pose Serious Problems
These systems are also open to deliberate misuse, from interfering in elections by manipulating candidates' statements to spreading false medical information. OpenAI's analysis of GPT-4, its most advanced language model, acknowledged 12 significant concerns without offering concrete solutions.
Over the past year, 37 AI-related regulations have been enacted globally, including Italy's ban on ChatGPT. Yet global coordination in AI governance is lacking, and regulation remains inconsistent even within countries such as the U.S. and Britain. This patchwork of oversight puts the benefits and safety of all at risk, and it forces companies to develop different AI models for different jurisdictions.
AI Requires Careful Management to Mitigate Risks
Despite differing opinions on many aspects of AI, there is broad consensus on responsible principles such as safety, reliability, transparency, and accountability. A recent poll conducted by the Centre for the Governance of AI revealed that 91% of a diverse sample of 13,000 individuals from 11 countries agreed on the need for careful management of AI.
Given this situation, Gary Marcus and Anka Reuel propose creating a global, neutral, non-profit International Agency for AI (IAAI). The IAAI would collaborate with governments, technology companies, non-profits, academia, and society at large to develop governance and technical solutions for safe and secure AI technologies.
The need for such an agency, as Google CEO Sundar Pichai emphasized on April 16, is now evident. Its structure, however, would vary by domain and industry, each with its own guidelines, and would likely combine global governance with technological innovation.
For instance, many uses of systems like ChatGPT still lack a remedy, such as the potential for biased output when the model is asked to judge job candidates on the basis of their entire file.
The envisioned governance body would work collaboratively to address policy questions, including "off-label" uses of chatbots, and to develop technical tools for effectively auditing AI systems.
The Role of the Envisioned International Agency for AI
The IAAI could play a crucial role in curbing the spread of misinformation by convening experts and developing tools. On the policy side, it could explore penalties for widespread misinformation; on the technical side, it could focus on automated tools to detect it. Existing technologies are far better at generating misinformation than at detecting it, so considerable technical innovation is needed.
The global collaboration envisioned by the IAAI would require involvement from various stakeholders, including governments, companies, and the public, to address short-term and long-term risks associated with AI governance.
The two AI experts emphasize that global cooperation has succeeded in other areas, but the rapid pace of AI development leaves little time to waste. A global, neutral, non-profit organization, backed by governments, large corporations, and society at large, would be a crucial first step in meeting the challenges of AI governance.