Thursday, February 15, 2024

Oppenheimer’s grandson joins call for international action on AI and other existential threats

What just happened? A group of over 100 public figures from the worlds of business, politics, and other fields have signed an open letter urging global leaders to address present and future existential threats, including AI, climate change, pandemics, and nuclear war.

The letter, published Thursday, comes from The Elders, a nongovernmental organization set up by former South African President Nelson Mandela, and the Future of Life Institute, a nonprofit that aims to steer transformative technology toward benefiting life and away from large-scale risks.

Signatories of the letter include billionaire Virgin Group founder Richard Branson, former United Nations Secretary-General Ban Ki-moon, and Charles Oppenheimer (grandson of J. Robert Oppenheimer). It is also signed by several former presidents and prime ministers, activists, CEOs, founders, and professors.

“Our world is in grave danger,” the letter begins. “We face a set of threats that put all humanity at risk. Our leaders are not responding with the wisdom and urgency required.”

The changing climate, the pandemic, and wars in which the option of using nuclear weapons has been raised are cited as examples of current threats. The letter states that worse could come, especially as we still do not know just how significant the emerging risks associated with ungoverned AI will prove.

“Long-view leadership means showing the determination to resolve intractable problems, not just manage them, the wisdom to make decisions based on scientific evidence and reason, and the humility to listen to all those affected.”

The letter calls on governments to agree on certain items, such as how to finance the transition away from fossil fuels and toward clean energy, relaunching arms control talks to reduce the risk of nuclear war, and creating an equitable pandemic treaty. When it comes to AI, the suggestion is to build the governance needed to make the technology a force for good, not a runaway risk.

MIT cosmologist Max Tegmark, who set up the Future of Life Institute alongside Skype co-founder Jaan Tallinn, told CNBC that The Elders and his organization don’t see AI as “evil,” but fear it could be used as a dangerous tool if it advances rapidly in the hands of the wrong people.

The Future of Life Institute also published an open letter last year calling for a six-month pause on advanced AI development. It was signed by 1,100 people, including Apple co-founder Steve Wozniak, Elon Musk, and Pinterest co-founder Evan Sharp. That letter did not have its intended effect; not only did AI companies fail to slow down development, many actually sped up their efforts to build advanced AI.

Comparisons between AI and nuclear war aren’t new. Experts and CEOs warned of the extinction risk posed by the technology last May. And even ChatGPT creator OpenAI says an AI smarter than humans could cause the extinction of the human race.
