{"id":780474,"date":"2024-04-08T15:17:59","date_gmt":"2024-04-08T20:17:59","guid":{"rendered":"https:\/\/spaceweekly.com\/?p=780474"},"modified":"2024-04-08T15:17:59","modified_gmt":"2024-04-08T20:17:59","slug":"does-the-rise-of-ai-explain-the-great-silence-in-the-universe","status":"publish","type":"post","link":"https:\/\/spaceweekly.com\/?p=780474","title":{"rendered":"Does the Rise of AI Explain the Great Silence in the Universe?"},"content":{"rendered":"<div>\n<p>Artificial Intelligence is making its presence felt in thousands of different ways. It helps scientists make sense of vast troves of data; it helps detect financial fraud; it drives our cars; it feeds us music suggestions; its chatbots drive us crazy. And it\u2019s only getting started.<\/p>\n<p>Are we capable of understanding how quickly AI will continue to develop? And if the answer is no, does that constitute the Great Filter?<\/p>\n<p><span id=\"more-166544\"\/><\/p>\n<p>The Fermi Paradox is the discrepancy between the apparent high likelihood that advanced civilizations exist and the total lack of evidence that they do. Many solutions have been proposed to explain the discrepancy. One of them is the \u201cGreat Filter.\u201d <\/p>\n<p>The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and may even lead to its demise. Think climate change, nuclear war, asteroid strikes, supernova explosions, plagues, or any number of other things from the rogues\u2019 gallery of cataclysmic events. <\/p>\n<p>Or how about the rapid development of AI?<\/p>\n<p>A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. 
The paper\u2019s title is \u201cIs Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?\u201d The author is Michael Garrett from the Department of Physics and Astronomy at the University of Manchester. <\/p>\n<figure class=\"wp-block-pullquote\">\n<blockquote>\n<p>\u201cWithout practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations.\u201d<\/p>\n<p><cite>Michael Garrett, University of Manchester<\/cite><\/p><\/blockquote>\n<\/figure>\n<p>Some think the Great Filter prevents technological species like ours from becoming multi-planetary. That\u2019s bad because a species with only one home is at greater risk of extinction or stagnation. According to Garrett, a species without a backup planet is in a race against time. \u201cIt is proposed that such a filter emerges before these civilizations can develop a stable, multi-planetary existence, suggesting the typical longevity (L) of a technical civilization is less than 200 years,\u201d Garrett writes.<\/p>\n<p>If true, that could explain why we detect no technosignatures or other evidence of ETIs (Extraterrestrial Intelligences). What does that tell us about our own technological trajectory? If we face a 200-year constraint, and if it\u2019s because of ASI, where does that leave us? Garrett underscores the \u201c\u2026critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multi-planetary society to mitigate against such existential threats.\u201d<\/p>\n<figure class=\"wp-block-image size-large\"><figcaption class=\"wp-element-caption\">An image of our beautiful Earth taken by the Galileo spacecraft in 1990. Do we need a backup home? Credit: NASA\/JPL<\/figcaption><\/figure>\n<p>Many scientists and other thinkers say we\u2019re on the cusp of enormous transformation. 
AI is just beginning to transform how we do things; much of the transformation is behind the scenes. AI seems poised to eliminate jobs for millions, and when paired with robotics, the transformation seems almost unlimited. That\u2019s a fairly obvious concern.<\/p>\n<p>But there are deeper, more systemic concerns. Who writes the algorithms? Will AI discriminate somehow? Almost certainly. Will competing algorithms undermine powerful democratic societies? Will open societies remain open? Will ASI start making decisions for us, and who will be accountable if it does?<\/p>\n<p>This is an expanding tree of branching questions with no clear terminus. <\/p>\n<p>Stephen Hawking famously warned that AI could end humanity if it begins to evolve independently. \u201cI fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans,\u201d he told Wired magazine in 2017. Once AI can outperform humans, it becomes ASI.<\/p>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"580\" height=\"373\" src=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2015\/11\/StephenHawking_1.jpg\" alt=\"Stephen Hawking was a major proponent for colonizing other worlds, mainly to ensure humanity does not go extinct. In later years, Hawking recognized that AI could be an extinction-level threat. Credit: educatinghumanity.com\" class=\"wp-image-123568\" srcset=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2015\/11\/StephenHawking_1.jpg 580w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2015\/11\/StephenHawking_1-250x161.jpg 250w\" sizes=\"auto, (max-width: 580px) 100vw, 580px\"\/><figcaption class=\"wp-element-caption\">Stephen Hawking was a major proponent for colonizing other worlds, mainly to ensure humanity does not go extinct. 
In later years, Hawking recognized that AI could be an extinction-level threat. Credit: educatinghumanity.com<\/figcaption><\/figure>\n<p>Hawking may be one of the most recognizable voices to issue warnings about AI, but he\u2019s far from the only one. The media is full of discussions and warnings, alongside articles about the work AI does for us. The most alarming warnings say that ASI could go rogue. Some people dismiss that as science fiction, but not Garrett. <\/p>\n<p>\u201cConcerns about Artificial Superintelligence (ASI) eventually going rogue is considered a major issue \u2013 combatting this possibility over the next few years is a growing research pursuit for leaders in the field,\u201d Garrett writes.<\/p>\n<p>If AI provided no benefits, the issue would be much easier. But it provides all kinds of benefits, from improved medical imaging and diagnosis to safer transportation systems. The trick for governments is to allow benefits to flourish while limiting damage. \u201cThis is especially the case in areas such as national security and defence, where responsible and ethical development should be paramount,\u201d writes Garrett.<\/p>\n<figure class=\"wp-block-embed is-type-video is-provider-youtube wp-block-embed-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\">\n<p>\n<span class=\"embed-youtube\" style=\"text-align:center; display: block;\"><iframe loading=\"lazy\" title=\"How Artificial Intelligence is improving MRI scans\" width=\"1110\" height=\"624\" src=\"https:\/\/www.youtube.com\/embed\/KH-7A1RTn7Y?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/span>\n<\/p><figcaption class=\"wp-element-caption\">News reports like this might seem impossibly naive in a few years or decades. <\/figcaption><\/figure>\n<p>The problem is that we and our governments are unprepared. 
There\u2019s never been anything like AI, and no matter how we try to conceptualize it and understand its trajectory, we\u2019re left wanting. And if we\u2019re in this position, any other biological species that develops AI would be, too. The advent of AI and then ASI could be universal, making it a candidate for the Great Filter. <\/p>\n<p>This is the risk ASI poses in concrete terms: It could no longer need the biological life that created it. \u201cUpon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics,\u201d Garrett explains. <\/p>\n<p>How could ASI relieve itself of the pesky biological life that corrals it? It could engineer a deadly virus, it could disrupt agricultural food production and distribution, it could force a nuclear power plant to melt down, and it could start wars. We don\u2019t really know because it\u2019s all uncharted territory. Hundreds of years ago, cartographers would draw monsters on the unexplored regions of the world, and that\u2019s kind of what we\u2019re doing now. <\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"552\" src=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/Ancient-map-monsters-1024x552.jpg\" alt=\"This is a portion of the Carta Marina map from the year 1539. It shows monsters lurking in the unknown waters off of Scandinavia. Are the fears of ASI kind of like this? Or could ASI be the Great Filter? 
Image Credit: By Olaus Magnus -  Public Domain, \" class=\"wp-image-166552\" srcset=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/Ancient-map-monsters-1024x552.jpg 1024w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/Ancient-map-monsters-580x313.jpg 580w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/Ancient-map-monsters-250x135.jpg 250w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/Ancient-map-monsters-768x414.jpg 768w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/Ancient-map-monsters.jpg 1267w\" sizes=\"auto, (max-width: 767px) 89vw, (max-width: 1000px) 54vw, (max-width: 1071px) 543px, 580px\"\/><figcaption class=\"wp-element-caption\">This is a portion of the Carta Marina map from the year 1539. It shows monsters lurking in the unknown waters off of Scandinavia. Are the fears of ASI kind of like this? Or could ASI be the Great Filter? Image Credit: By Olaus Magnus \u2013  Public Domain, <\/figcaption><\/figure>\n<p>If this all sounds forlorn and unavoidable, Garrett says it\u2019s not. <\/p>\n<p>His analysis so far is based on ASI and humans occupying the same space. But if we can attain multi-planetary status, the outlook changes. \u201cFor example, a multi-planetary biological species could take advantage of independent experiences on different planets, diversifying their survival strategies and possibly avoiding the single-point failure that a planetary-bound civilization faces,\u201d Garrett writes.<\/p>\n<p>If we can distribute the risk across multiple planets around multiple stars, we can buffer ourselves against the worst possible outcomes of ASI. \u201cThis distributed model of existence increases the resilience of a biological civilization to AI-induced catastrophes by creating redundancy,\u201d he writes. <\/p>\n<p>If one of the planets or outposts that future humans occupy fails to survive the ASI technological singularity, others may survive. 
And they would learn from it. <\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"552\" src=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2019\/09\/Starship-2019-Mars-Moon-base-render-SpaceX-1-1024x552.jpg\" alt=\"Artist's illustration of a SpaceX Starship landing on Mars. If we can become a multi-planetary species, the threat of ASI is diminished. Credit: SpaceX\" class=\"wp-image-143575\" srcset=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2019\/09\/Starship-2019-Mars-Moon-base-render-SpaceX-1-1024x552.jpg 1024w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2019\/09\/Starship-2019-Mars-Moon-base-render-SpaceX-1-250x135.jpg 250w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2019\/09\/Starship-2019-Mars-Moon-base-render-SpaceX-1-580x313.jpg 580w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2019\/09\/Starship-2019-Mars-Moon-base-render-SpaceX-1-768x414.jpg 768w\" sizes=\"auto, (max-width: 767px) 89vw, (max-width: 1000px) 54vw, (max-width: 1071px) 543px, 580px\"\/><figcaption class=\"wp-element-caption\">Artist\u2019s illustration of a SpaceX Starship landing on Mars. If we can become a multi-planetary species, the threat of ASI is diminished. Credit: SpaceX<\/figcaption><\/figure>\n<p>Multi-planetary status might even do more than just help us survive ASI. It could help us master it. Garrett imagines situations where we could experiment more thoroughly with AI while keeping it contained. Imagine AI on an isolated asteroid or dwarf planet, doing our bidding without access to the resources required to escape its prison. \u201cIt allows for isolated environments where the effects of advanced AI can be studied without the immediate risk of global annihilation,\u201d Garrett writes. <\/p>\n<p>But here\u2019s the conundrum. AI development is proceeding at an accelerating pace, while our attempts to become multi-planetary aren\u2019t. 
\u201cThe disparity between the rapid advancement of AI and the slower progress in space technology is stark,\u201d Garrett writes. <\/p>\n<p>The difference is that AI is computational and informational, but space travel faces multiple physical obstacles that we don\u2019t yet know how to overcome. Our own biological nature restrains space travel, but no such obstacle restrains AI. \u201cWhile AI can theoretically improve its own capabilities almost without physical constraints,\u201d Garrett writes, \u201cspace travel must contend with energy limitations, material science boundaries, and the harsh realities of the space environment.\u201d<\/p>\n<p>For now, AI operates within the constraints we set. But that may not always be the case. We don\u2019t know when AI might become ASI or even if it can. But we can\u2019t ignore the possibility. That leads to two intertwined conclusions. <\/p>\n<p>If Garrett is correct, humanity must work more diligently on space travel. It can seem far-fetched, but knowledgeable people know it\u2019s true: Earth will not be habitable forever. Humanity will perish here by our own hand or nature\u2019s if we don\u2019t expand into space. Garrett\u2019s 200-year estimate just puts an exclamation point on it. A renewed emphasis on reaching the Moon and Mars offers some hope.<\/p>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"468\" height=\"263\" src=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2023\/04\/artemis_-_picture3.jpg\" alt=\"The Artemis program is a renewed effort to establish a presence on the Moon. After that, we could visit Mars. Are these our first steps to becoming a multi-planetary civilization? 
Image Credit: NASA\" class=\"wp-image-161104\" srcset=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2023\/04\/artemis_-_picture3.jpg 468w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2023\/04\/artemis_-_picture3-250x140.jpg 250w\" sizes=\"auto, (max-width: 468px) 100vw, 468px\"\/><figcaption class=\"wp-element-caption\">The Artemis program is a renewed effort to establish a presence on the Moon. After that, we could visit Mars. Are these our first steps to becoming a multi-planetary civilization? Image Credit: NASA<\/figcaption><\/figure>\n<p>The second conclusion concerns legislating and governing AI, a difficult task in a world where psychopaths can gain control of entire nations and are bent on waging war. \u201cWhile industry stakeholders, policymakers, individual experts, and their governments already warn that regulation is necessary, establishing a regulatory framework that can be globally acceptable is going to be challenging,\u201d Garrett writes. Challenging barely describes it. Humanity\u2019s internecine squabbling makes it all even more unmanageable. Also, no matter how quickly we develop guidelines, ASI might change even more quickly. <\/p>\n<p>\u201cWithout practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations,\u201d Garrett writes.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"635\" src=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/UN_General_Assembly_hall-1024x635.jpg\" alt=\"This is the United Nations General Assembly. Are we united enough to constrain AI? 
Image Credit: By Patrick Gruban, cropped and downsampled by Pine - originally posted to Flickr as UN General Assembly, CC BY-SA 2.0, \" class=\"wp-image-166554\" srcset=\"https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/UN_General_Assembly_hall-1024x635.jpg 1024w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/UN_General_Assembly_hall-580x360.jpg 580w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/UN_General_Assembly_hall-250x155.jpg 250w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/UN_General_Assembly_hall-768x476.jpg 768w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/UN_General_Assembly_hall-1536x953.jpg 1536w, https:\/\/www.universetoday.com\/wp-content\/uploads\/2024\/04\/UN_General_Assembly_hall.jpg 1920w\" sizes=\"auto, (max-width: 767px) 89vw, (max-width: 1000px) 54vw, (max-width: 1071px) 543px, 580px\"\/><figcaption class=\"wp-element-caption\">This is the United Nations General Assembly. Are we united enough to constrain AI? Image Credit: By Patrick Gruban, cropped and downsampled by Pine \u2013 originally posted to Flickr as UN General Assembly, CC BY-SA 2.0, <\/figcaption><\/figure>\n<p>Many of humanity\u2019s hopes and dreams crystallize around the Fermi Paradox and the Great Filter. Are there other civilizations? Are we in the same situation as other ETIs? Will our species leave Earth? Will we navigate the many difficulties that face us? Will we survive? <\/p>\n<p>If we do, it might come down to what can seem boring and workaday: wrangling over legislation. 
<\/p>\n<p>\u201cThe persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and technological endeavours,\u201d Garrett writes.<\/p>\n<\/div>\n<p><a href=\"https:\/\/www.universetoday.com\/166544\/does-the-rise-of-ai-explain-the-great-silence-in-the-universe\/?rand=772204\">Source link<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial Intelligence is making its presence felt in thousands of different ways. 
It helps scientists make sense of vast troves of data; it helps detect financial fraud; it drives our&hellip; <\/p>\n","protected":false},"author":1,"featured_media":780475,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[13],"tags":[],"class_list":["post-780474","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-genaero"],"_links":{"self":[{"href":"https:\/\/spaceweekly.com\/index.php?rest_route=\/wp\/v2\/posts\/780474","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/spaceweekly.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/spaceweekly.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/spaceweekly.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/spaceweekly.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=780474"}],"version-history":[{"count":0,"href":"https:\/\/spaceweekly.com\/index.php?rest_route=\/wp\/v2\/posts\/780474\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/spaceweekly.com\/index.php?rest_route=\/wp\/v2\/media\/780475"}],"wp:attachment":[{"href":"https:\/\/spaceweekly.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=780474"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/spaceweekly.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=780474"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/spaceweekly.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=780474"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}