Thousands of unsolved problems need to be addressed for a safe AI transition, and most have zero full-time people working on them. The example areas below are NOT an exhaustive list, but they give a sense of the magnitude of the effort.

Intentional Harm
AI could lower the knowledge, effort, and risk required for people who seek to harm others, for example through advanced cyberattacks or bioterrorism.
If AI goals don't align with human values, unexpected dangers are likely to emerge.
Bias in AI could draw on millions of data points per person, subject almost everyone to some form of discrimination, and be hard to detect or control.
Using AI to influence people without their consent: subtle misinformation over time, election interference, and profit maximization at the expense of well-being.
AI's efficiency risks job loss. Investment is needed in AI-human collaboration, new skills, and fair transition policies such as UBI.
Prevent human over-dependency on AI by balancing education, promoting critical skills, and researching AI's societal impact.
Ensure responsible AI integration in governance: establish guidelines for AI's supplementary role while preserving human accountability.
Study human-AI emotional bonds, exploring their positives and negatives, and develop guidelines for healthy interactions.