Richard Branson, Oppenheimer grandson urge action on AI, climate


Dozens of high-profile figures in business and politics are calling on world leaders to address the existential risks of artificial intelligence and the climate crisis.

Virgin Group founder Richard Branson, former United Nations Secretary-General Ban Ki-moon, and Charles Oppenheimer — the grandson of American physicist J. Robert Oppenheimer — signed an open letter urging action against the escalating dangers of the climate crisis, pandemics, nuclear weapons, and ungoverned AI.

The message asks world leaders to embrace long-view strategy and a “determination to resolve intractable problems, not just manage them, the wisdom to make decisions based on scientific evidence and reason, and the humility to listen to all those affected.”

Signatories called for urgent multilateral action, including financing the transition away from fossil fuels, signing an equitable pandemic treaty, restarting nuclear arms talks, and building the global governance needed to make AI a force for good.

The letter was released on Thursday by The Elders, a nongovernmental organization that was launched by former South African President Nelson Mandela and Branson to address global human rights issues and advocate for world peace.

The message is also backed by the Future of Life Institute, a nonprofit organization set up by MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn, which aims to steer transformative technology like AI towards benefiting life and away from large-scale risks.


Tegmark said that The Elders and his organization wanted to convey that, while not in and of itself “evil,” the technology remains a “tool” that could lead to dire consequences if it is left to advance rapidly in the hands of the wrong people.

“The old strategy for steering toward good uses [when it comes to new technology] has always been learning from mistakes,” Tegmark told CNBC in an interview. “We invented fire, then later we invented the fire extinguisher. We invented the car, then we learned from our mistakes and invented the seatbelt and the traffic lights and speed limits.”

‘Safety engineering’

“But when the thing already crosses the threshold and power, that learning from mistakes strategy becomes … well, the mistakes would be awful,” Tegmark added.

“As a nerd myself, I think of it as safety engineering. We send people to the moon, we very carefully thought through all the things that could go wrong when you put people in explosive fuel tanks and send them somewhere where no one can help them. And that’s why it ultimately went well.”

He went on to say, “That wasn’t ‘doomerism.’ That was safety engineering. And we need this kind of safety engineering for our future also, with nuclear weapons, with synthetic biology, with ever more powerful AI.”

The letter was issued ahead of the Munich Security Conference, where government officials, military leaders and diplomats will discuss international security amid escalating global armed conflicts, including the Russia-Ukraine and Israel-Hamas wars. Tegmark will be attending the event to advocate the message of the letter.

The Future of Life Institute last year also released an open letter backed by leading figures including Tesla boss Elon Musk and Apple co-founder Steve Wozniak, which called on AI labs like OpenAI to pause work on training AI models that are more powerful than GPT-4 — currently the most advanced AI model from Sam Altman’s OpenAI.

The technologists called for such a pause in AI development to avoid a “loss of control” of civilization, which might result in a mass wipe-out of jobs and an outsmarting of humans by computers.
