Editor’s Note: The following is a brief letter from Ray Kurzweil, a director of engineering at Google and cofounder and member of the board at Singularity Group, Singularity Hub’s parent company, in response to the Future of Life Institute’s recent letter, “Pause Giant AI Experiments: An Open Letter.”
The FLI letter addresses the risks of accelerating progress in AI and the ensuing race to commercialize the technology, and it calls for a pause in the development of algorithms more powerful than OpenAI’s GPT-4, the large language model behind the company’s ChatGPT Plus and Microsoft’s Bing chatbot. The FLI letter has thousands of signatories, including deep learning pioneer Yoshua Bengio, University of California, Berkeley professor of computer science Stuart Russell, Stability AI CEO Emad Mostaque, Elon Musk, and many others, and has stirred vigorous debate in the AI community.
Regarding the open letter to “pause” research on AI “more powerful than GPT-4,” this criterion is too vague to be practical. And the proposal faces a serious coordination problem: those who agree to a pause may fall far behind corporations or nations that disagree. There are tremendous benefits to advancing AI in critical fields such as medicine and health, education, pursuit of renewable energy sources to replace fossil fuels, and scores of other fields. I did not sign, because I believe we can address the signers’ safety concerns in a more tailored way that doesn’t compromise these vital lines of research.
I participated in the Asilomar AI Principles Conference in 2017 and was actively involved in creating guidelines to develop artificial intelligence in an ethical manner. So I know that safety is a critical issue. But more nuance is needed if we wish to unlock AI’s profound benefits to health and productivity while avoiding the real perils.
— Ray Kurzweil
Inventor, best-selling author, and futurist