Imagine for a moment that the impressive pace of AI progress in recent years continues for a few more.
In that time, we have gone from AIs that could produce a few reasonable sentences to AIs that can produce full think-tank reports of reasonable quality; from AIs that couldn't write code to AIs that can write mediocre code in a small codebase; from AIs that could produce surreal, absurd images to AIs that can produce short videos and audio clips on any subject.
Companies are pouring billions of dollars and tons of talent into making these models better at what they do. So where does that take us?
Imagine that late this year, some company decides to double down on one of the most valuable uses of AI: improving AI research. The company designs a bigger, better model, carefully tailored to the task of training other super-expensive but super-valuable models.
With the help of this AI trainer, the company pulls ahead of its competitors, releasing AIs in 2026 that work reasonably well across a wide range of tasks and that essentially function as an "employee" you can "hire." Over the following year, the stock market soars as a near-infinite supply of AI employees becomes suitable for a wider and wider range of jobs (including mine and, quite possibly, yours).
Welcome to the (near) future
This is the opening of AI 2027, a thoughtful and detailed near-term forecast from a group of researchers who think that massive changes to our world are arriving quickly and that we are not prepared for them. The authors include Daniel Kokotajlo, a former OpenAI researcher who became famous for risking millions of dollars of his equity in the company when he refused to sign a nondisclosure agreement.
"AI is coming fast" is something people have been saying for years, but often in a way that is hard to dispute and hard to falsify. AI 2027 is an effort to go in the exact opposite direction. Like all the best forecasts, it is built to be falsifiable: every prediction is specific and detailed enough that it will be easy to decide, after the fact, whether it came true. (Assuming, of course, we're all still here.)
The authors describe how progress in AI will be perceived, how it will affect the stock market, and how it will alter geopolitics, and they justify those predictions in hundreds of pages of appendices. AI 2027 could end up being completely wrong, but if so, it will be really easy to see where it went wrong.
Although I am skeptical of the group's exact timeline, which envisions most of the pivotal moments leading us to AI catastrophe or policy intervention as happening during this presidential administration, the series of events they lay out is quite convincing to me.
Any company would double down on an AI that improves its own AI development. (And some of them are already doing this internally.) If that happens, we will see improvements even faster than those from 2023 to now, and within a few years there will be massive economic disruption as an "AI employee" becomes a viable alternative to a human hire for most jobs that can be done remotely.
But in this scenario, the company uses most of its new "AI employees" internally, to keep producing new AI advances. As a result, technological progress gets faster and faster, while our ability to apply any oversight gets weaker and weaker. We see glimpses of strange and worrying behavior from advanced AI systems and try to make adjustments to "fix" them. But these end up being mere surface-level fixes, which simply hide the degree to which increasingly powerful AI systems have begun to pursue objectives of their own that we cannot understand. This, too, has already begun to happen to some extent. It is common to see complaints that AIs do "annoying" things like faking passing code tests that don't actually pass.
This forecast doesn't just seem plausible; it seems like the default course for what will happen. Of course, you can quibble over the details of how fast it might unfold, and you can even commit to the position that AI progress is sure to dead-end by the end of next year. But if AI progress does not dead-end, then it seems very hard to imagine how it won't eventually lead us down the broad path that AI 2027 envisions, sooner or later. And the forecast makes a convincing case that it will happen sooner than almost anyone expects.
Make no mistake: the path that the authors of AI 2027 envision ends in plausible catastrophe.
By 2027, enormous amounts of computing power would be dedicated to AI systems doing AI research, all of it with dwindling human oversight, not because AI companies don't want to oversee it but because they no longer can, so advanced and so fast have their creations become. The US government would double down on winning the arms race with China, even as the decisions made by the AIs become increasingly impenetrable to humans.
The authors expect signs that the powerful new AI systems being developed are pursuing their own dangerous objectives, and they worry that those signs will be ignored by people in power because of geopolitical fears about the competition catching up.
All of this, of course, sounds chilling. The question is this: Could people do better than the authors predict they will?
Definitely. I'd argue it wouldn't even be that hard. But will they do better? After all, we have certainly failed at much easier tasks.
Vice President JD Vance has reportedly read AI 2027, and he has expressed hope that the new pope, who has already named AI as a major challenge for humanity, will exercise international leadership to try to avoid the worst outcomes it hypothesizes. We'll see.
We live in interesting (and deeply alarming) times. I think AI 2027 is worth reading: to make the vague cloud of worry that permeates AI discourse specific and falsifiable, to understand what some senior people in the AI world and in government are paying attention to, and to decide what you will want to do if you see it starting to come true.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!