{"id":11131,"date":"2024-06-04T20:03:57","date_gmt":"2024-06-04T20:03:57","guid":{"rendered":"https:\/\/thisbiginfluence.com\/?p=11131"},"modified":"2024-06-04T20:03:57","modified_gmt":"2024-06-04T20:03:57","slug":"openai-insider-estimates-70-percent-chance-that-ai-will-destroy-or-catastrophically-harm-humanity","status":"publish","type":"post","link":"https:\/\/thisbiginfluence.com\/?p=11131","title":{"rendered":"OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div id=\"incArticle\">\n<h2 class=\"block pb-1 text-3xl leading-none uppercase border-b lg:hidden xs:text-4xl font-k lg:text-5 border-red\">&#8220;The world isn\u2019t prepared, and we aren\u2019t prepared.&#8221;<\/h2>\n<h2 class=\"font-k text-4 font-black  lg:border-b border-gray-900 pb-1\">Getting Warner<\/h2>\n<p>After former and present OpenAI workers <a href=\"https:\/\/righttowarn.ai\/\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">released an open letter<\/a> claiming they&#8217;re being <a href=\"https:\/\/futurism.com\/openai-insiders-silenced\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">silenced against raising safety issues<\/a>, one of many letter&#8217;s signees made an much more terrifying prediction: that the chances AI will both destroy or catastrophically hurt humankind are larger than a coin flip.<\/p>\n<p>In an <a href=\"https:\/\/www.nytimes.com\/2024\/06\/04\/technology\/openai-culture-whistleblowers.html\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">interview with\u00a0<em>The <\/em><em>New York Times<\/em><\/a>, former OpenAI governance researcher <a 
href=\"https:\/\/futurism.com\/openai-safety-worker-quit-confidence-agi\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">Daniel Kokotajlo<\/a> accused the corporate of ignoring the monumental dangers posed by synthetic normal intelligence (AGI) as a result of its decision-makers are so enthralled with its prospects.<\/p>\n<p>&#8220;OpenAI is basically enthusiastic about constructing AGI,&#8221; Kokotajlo mentioned, &#8220;and they&#8217;re recklessly racing to be the primary there.&#8221;<\/p>\n<p>Kokotajlo&#8217;s spiciest declare to the newspaper, although, was that the possibility AI will wreck humanity is round 70 % \u2014 odds you would not settle for for any main life occasion, however that OpenAI and its ilk are barreling forward with anyway.<\/p>\n<h2 class=\"font-k text-4 font-black  lg:border-b border-gray-900 pb-1\">MF Doom<\/h2>\n<p>The time period &#8220;<a href=\"https:\/\/www.nytimes.com\/2023\/12\/06\/business\/dealbook\/silicon-valley-artificial-intelligence.html\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">p(doom)<\/a>,&#8221; which is AI-speak for the likelihood that AI will usher in doom for humankind, is the <a href=\"https:\/\/www.lesswrong.com\/posts\/EwyviSHWrQcvicsry\/stop-talking-about-p-doom\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">subject of constant controversy<\/a> within the machine studying world.<\/p>\n<p>The 31-year-old Kokotajlo informed the <em>NYT<\/em> that after he joined OpenAI in 2022 and was requested to forecast the expertise&#8217;s progress, he grew to become satisfied not solely that the trade would obtain AGI by the yr 2027, however that there was an amazing likelihood that it might catastrophically hurt and even destroy 
humanity.<\/p>\n<p>As noted in the <a href=\"https:\/\/righttowarn.ai\/\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">open letter<\/a>, Kokotajlo and his fellow signees \u2014 a group that includes former and current employees at Google DeepMind and Anthropic, as well as <a href=\"https:\/\/futurism.com\/the-byte\/godfather-ai-risk-eliminate-humanity\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">Geoffrey Hinton<\/a>, the so-called &#8220;Godfather of AI&#8221; who <a href=\"https:\/\/futurism.com\/the-byte\/godfather-ai-quits-google\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">left Google last year<\/a> over similar concerns \u2014 are asserting their &#8220;right to warn&#8221; the public about the risks posed by AI.<\/p>\n<p>Kokotajlo became so convinced that AI posed enormous risks to humanity that eventually, he personally urged OpenAI CEO Sam Altman that the company needed to &#8220;pivot to safety&#8221; and spend more time implementing guardrails to rein in the technology rather than continue making it smarter.<\/p>\n<p>Altman, per the former employee&#8217;s recounting, appeared to agree with him at the time, but over time it just felt like lip service.<\/p>\n<p>Fed up, Kokotajlo quit the firm in April, telling his team in an email that he had &#8220;lost confidence that OpenAI will behave responsibly&#8221; as it continues trying to build near-human-level AI.<\/p>\n<p>&#8220;The world isn\u2019t ready, and we aren\u2019t ready,&#8221; he wrote in his email, which was shared with the <em>NYT<\/em>. 
&#8220;And I\u2019m concerned we\u2019re rushing ahead regardless and rationalizing our actions.&#8221;<\/p>\n<p>Between the <a href=\"https:\/\/futurism.com\/tags\/ilya-sutskever\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">big-name exits<\/a> and these kinds of <a href=\"https:\/\/futurism.com\/the-byte\/openai-cryptic-warning\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\">terrifying predictions<\/a>, the latest news out of OpenAI has been grim \u2014 and it&#8217;s hard to see it getting any sunnier moving forward.<\/p>\n<p class=\"\"><strong>More on OpenAI:<\/strong> <a href=\"https:\/\/futurism.com\/the-byte\/sam-altman-openai-safety-team-replacement\" class=\"underline hover:text-the-byte hover:no-underline transition-all duration-200 ease-in-out\" style=\"text-decoration-color:#ff0033\"><em>Sam Altman Replaces OpenAI&#8217;s Fired Safety Team With Himself and His Cronies<\/em><\/a><\/p>\n<\/div>\n<p><a href=\"https:\/\/futurism.com\/the-byte\/openai-insider-70-percent-doom\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;The world isn\u2019t ready, and we aren\u2019t ready.&#8221; Getting Warner After former and current OpenAI employees released an open letter claiming they&#8217;re being silenced from raising safety concerns, one of the letter&#8217;s signees made an even more terrifying prediction: that the odds AI will either destroy or catastrophically harm humanity are greater than a coin 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":11133,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[9],"tags":[9253,2853,1869,8621,771,9254,6926,1024,3562],"class_list":["post-11131","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech","tag-catastrophically","tag-chance","tag-destroy","tag-estimates","tag-harm","tag-humanity","tag-insider","tag-openai","tag-percent"],"_links":{"self":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/posts\/11131","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=11131"}],"version-history":[{"count":0,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/posts\/11131\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=\/wp\/v2\/media\/11133"}],"wp:attachment":[{"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=11131"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=11131"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/thisbiginfluence.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=11131"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}