Ever since the launch of ChatGPT and the start of the AI revolution that followed, industry experts and governments have been calling for stronger safeguards to keep AI systems from going off the rails. Now, a group of influential AI experts and industry leaders has issued a warning emphasizing the urgent need to address the existential risk posed by AI on a global scale.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement.
Published by the Center for AI Safety, a San Francisco-based non-profit organization, the statement points to companies racing to build ever more advanced AI systems without implementing adequate safety measures, a race that could ultimately pose a threat to humanity. The statement has drawn widespread support and signatures from prominent figures, including Demis Hassabis, CEO of Google DeepMind; Sam Altman, CEO of OpenAI; and AI researchers Geoffrey Hinton and Yoshua Bengio, who both received the prestigious Turing Award in 2018 for their groundbreaking contributions to the field.
Dan Hendrycks, executive director of the Center for AI Safety, explained in a report that the statement deliberately refrains from proposing specific interventions in order to avoid disagreements among signatories and dilution of its core message.
Understanding the threat
While current AI systems may not pose significant threats to humanity, the AI safety debate centers on hypothetical scenarios in which AI systems rapidly surpass safe levels of operation and begin improving themselves autonomously. Experts believe that once these systems reach a certain level of sophistication, controlling them would become nearly impossible. It is therefore crucial for companies like Google and OpenAI, which are spearheading the AI revolution, to acknowledge the threat.
The US government is also exploring ways to control and regulate AI development. Although President Joe Biden recognizes the immense potential of AI to help solve global challenges like disease and climate change, he has also emphasized the need to address the risks it poses to society, the economy, and national security.