There is a popular science fiction cliché in which one day artificial intelligence goes rogue and kills every human, wiping out the species. Could this really happen? In real-world surveys, AI researchers say they see human extinction as a plausible outcome of AI development. In 2024, hundreds of these researchers signed a statement that said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Okay, guys.
Pandemics and nuclear war are real, tangible concerns, more so than AI doom, at least to me, a scientist at the RAND Corporation. We do all kinds of research on national security issues, and we may be best known for our role in developing strategies to prevent nuclear catastrophe during the cold war. RAND takes big threats to humanity seriously, so I, skeptical about AI's potential to cause human extinction, proposed a project to research whether it could.
My team's hypothesis was this: no scenario can be described in which AI is conclusively an extinction threat to humanity. In other words, our starting hypothesis was that humans are too adaptable, too plentiful and too dispersed across the planet for AI to wipe us out with any hypothetical tools at its disposal. If we could prove this hypothesis wrong, it would mean that AI might be a real extinction threat to humanity.
Many people are assessing catastrophic risks from AI. In the most extreme cases, some people assert that AI will become a superintelligence that is nearly certain to use novel, advanced technology such as nanotechnology to take over and wipe us out. Forecasters have estimated the likelihood of existential risk from an AI catastrophe, often arriving at somewhere between a 0 and 10 percent chance that AI causes humanity's extinction by 2100. We were skeptical of the value of predictions like these for policymaking.
Our team consisted of a scientist, an engineer and a mathematician. We swallowed any of our AI skepticism and, in very RAND-like fashion, set about detailing how AI could actually cause human extinction. A simple global catastrophe or societal collapse was not enough for us. We were trying to take the risk of extinction seriously, which meant we were interested only in a complete wipeout of our species. We also were not interested in whether AI would try to kill us, only in whether it could succeed.
It was a morbid task. We went about it by analyzing exactly how AI might exploit three major threats commonly perceived as existential risks: nuclear war, biological pathogens and climate change.
It turns out it is very hard, though not completely outside the realm of possibility, for AI to kill us all.
The good news, if I can call it that, is that we do not think AI could kill us all with nuclear weapons. Even if AI somehow acquired the ability to launch all of the more than 12,000 warheads in the nine-country global nuclear stockpile, the explosions, radioactive fallout and resulting nuclear winter would still likely fall short of an extinction-level event. Humans are far too plentiful and dispersed for the detonations to kill all of us directly. AI could detonate weapons over all the most fuel-dense areas on the planet and still not produce as much ash as the meteor that likely wiped out the dinosaurs. Nor are there enough nuclear warheads on the planet to fully irradiate all of the planet's usable agricultural land. In other words, an AI-initiated nuclear war would be cataclysmic, but it would likely still fall short of killing every human being, because some humans would survive and have the potential to reconstitute the species.
Pandemics, on the other hand, we considered to be a plausible extinction threat. Previous natural plagues have been catastrophic, but human societies have survived and soldiered on. Even a minimal population (likely a few thousand members) could eventually reconstitute the species. A hypothetically 99.99 percent lethal pathogen would leave more than 800,000 humans alive.
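As a quick sanity check of that figure, assuming a world population of roughly eight billion (my assumption for the arithmetic, not a number stated in the study): 99.99 percent lethality leaves 0.01 percent of the population alive, and 8,000,000,000 × 0.0001 = 800,000 survivors.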
We determined, however, that a combination of pathogens could likely be designed to achieve nearly 100 percent lethality, and that AI could be used to deploy such pathogens in a way that ensured rapid, global reach. The key limitation is that AI would need to somehow infect or otherwise exterminate the communities that would inevitably isolate themselves when faced with a species-ending pandemic.
Finally, if AI were to accelerate garden-variety anthropogenic climate change, it would not rise to an extinction threat for all of humanity. Humans would likely find new environmental niches in which to survive, even if that meant moving to the Earth's poles. Making the Earth completely uninhabitable for humans would require AI to pump something much more potent than carbon dioxide into the atmosphere. That is the good news.
The bad news is that such far more potent greenhouse gases exist. They can be produced at industrial scales. And they persist in the atmosphere for hundreds or thousands of years. If AI were to evade international monitoring and orchestrate the production of a few hundred megatons of these chemicals (which is less than the mass of plastic that humans produce each year), it would be enough to cook the Earth to the point that no environmental niche remained for humanity.
I want to make this clear: none of our AI-extinction scenarios could happen by accident. Each would be immensely challenging to carry out. AI would somehow have to overcome major constraints.
In the course of our analysis, we also identified four things our hypothetical super-evil AI would have to have. First, it would need to somehow set an objective of causing extinction. The AI would also have to gain control over the key physical systems that create the threat, such as nuclear weapons launch control or chemical-manufacturing infrastructure. It would need the ability to persuade humans to help and to hide its actions long enough to succeed. And it would have to be able to survive without humans around to support it, because even as society began to collapse, follow-on actions would be required to cause full extinction.
If AI did not possess all four of these capabilities, our team concluded that its extinction project would fail. That said, it is plausible that someone could create an AI with all of these capabilities, perhaps even unintentionally. Moreover, humans might create an AI with all four capabilities deliberately. Developers are already trying to build agentic, or more autonomous, AI, and they have already observed AI that has the capacity for scheming and deception.
But if extinction is a plausible outcome of AI development, doesn't that mean we should follow the precautionary principle? That is to say: shut it all down because better safe than sorry? We say the answer is no. The shut-it-down approach is appropriate only if we don't care much about the benefits of AI. For better or worse, we care a great deal about the benefits AI will likely bring, and it is inappropriate to forgo them in order to avoid a potential but highly uncertain catastrophe, even one as consequential as human extinction.
So, will AI one day kill us all? It is not absurd to say that it could. At the same time, our work also showed that humans do not need AI's help to destroy ourselves. One sure way to reduce extinction risk, whether or not it stems from AI, is to increase our chances of survival by reducing the number of nuclear weapons, restricting globe-heating chemicals and improving pandemic surveillance. It also makes sense to invest in AI safety research, whether or not you buy the argument that AI is a potential extinction risk. The same responsible approaches to AI development that mitigate the risk of extinction will also mitigate the risks of other AI-related harms that are less consequential, and less uncertain, than existential risks.
This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.