Intentional Harm
AI could lower the knowledge, effort, and risk required for people seeking to harm others, for example through advanced cyberattacks or bioterrorism.
As we step into the age of artificial intelligence, we face a spectrum of complex challenges alongside promising opportunities. Our organization, Safe AI Future, champions a groundbreaking federal investment to tackle the issues vital for a secure and just transition to an AI-centric world.
Addressing the multifaceted challenges of a secure AI transition described below will require engaging approximately 100,000 to 500,000 full-time professionals, so the allocation would span a diverse range of initiatives. The government would distribute grants to premier academic institutions, independent scholars, non-profit organizations, and private sector companies. These funds would support projects ranging from hardening physical infrastructure against cyberattacks to devising and rigorously evaluating strategic plans for AI implementation.
There are a number of areas listed below, each with perhaps thousands of problems to be solved. Even failing to solve just a few could be catastrophic.
With thousands of critical unsolved problems standing between us and a safe AI transition, we cannot know exactly how much must be spent on each one to find and implement a solution. What we do know is that as spending rises, the odds of harm fall, eventually approaching zero, as sketched below.
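One way to make this intuition concrete, offered here as an illustrative sketch of our own rather than a figure from the proposal, is to treat each of the N critical problems as remaining unsolved with some residual probability p_i that shrinks as the funding directed at it grows. Assuming these failures are roughly independent, the chance that at least one catastrophic gap remains is

\[
P(\text{catastrophe}) \;=\; 1 - \prod_{i=1}^{N} \bigl(1 - p_i\bigr),
\]

which approaches zero only when every p_i is driven down, not just most of them. This is why broad allocation across all of the areas below matters: with thousands of problems, even a handful of underfunded ones can dominate the overall risk.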