How often do you hear predictions about the end of the world, where all-powerful artificial intelligence plays the main role? At least once a week, some businessman or celebrity expresses concerns about a terrifying future under its rule.
Of course, a well-known name plus a gloomy forecast is the perfect recipe for a sensational headline. But where such pieces once reflected real, sometimes frightening technological progress, today they more often amount to empty marketing or a simple misunderstanding of what is actually happening.
Why are we still scared of bad retellings of “Terminator” when modern chatbots often blatantly lie and cannot remember five lines of dialogue? And most importantly — who could benefit from this?
Not Impressive
It’s worth noting right away: AI technologies have made a huge leap over the past decade. Modern systems have learned to write coherent text, recognize patterns in large data sets, and create visual content. Not long ago, machines simply could not do this kind of human work.
The pace of progress can look frightening. For now, though, the development of mass-market products has stalled at talk of so-called artificial general intelligence and the release of nearly identical language models (sometimes new versions are even worse than their predecessors).
What we have, in the end, is a helper tool trained to perform simple tasks with text and sometimes images. People have adapted it for vibe coding and writing social media posts. Even then, the results often require double-checking, and more complex tasks are beyond neural networks.
You can now ask your favorite chatbot to write a doctoral dissertation on the topic “X”: you will get a loosely connected text with references from the first or second page of a search engine. To improve the result, you are advised to use longer, more detailed prompts, but that is just more tuning in the machine’s “language” and additional training.
With prolonged use of AI, nearly every user comes to realize the limitations of today’s models. All progress ultimately runs into the wall of training-data volume and server capacity, while the very notion of “intelligence” has faded into the background.
Intelligence Without Brains
To understand the context, it helps to explain how AI works. Briefly, the large language models behind classic chatbots operate as follows (a toy sketch follows the list):
1. The input text is broken into tokens (parts of words, characters).
2. Each token is assigned a numerical vector.
3. The model analyzes the relationships between tokens and determines which words matter most for understanding the context.
4. Based on this, the LLM “predicts” each next token, forming a response.
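To make these steps concrete, here is a deliberately tiny, self-contained Python sketch. The vocabulary, vectors, and scoring below are invented purely for illustration; real models learn embeddings with billions of parameters during pre-training, but the flow is the same: tokens go in, a probability distribution over the next token comes out.

```python
import math

# Toy vocabulary; real tokenizers split text into subword pieces.
vocab = ["the", "cat", "sat", "on", "mat", "."]

# Each token gets a numerical vector (hand-made 3-D embeddings here;
# real models learn these during pre-training).
embeddings = {
    "the": [0.1, 0.0, 0.2],
    "cat": [0.9, 0.1, 0.0],
    "sat": [0.0, 0.8, 0.1],
    "on":  [0.1, 0.2, 0.7],
    "mat": [0.8, 0.0, 0.3],
    ".":   [0.0, 0.0, 0.1],
}

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next(context):
    # 1. "Tokenize" the input (a naive whitespace split for this toy example).
    tokens = context.lower().split()
    # 2. Look up a vector for each token and average them into one context vector.
    vectors = [embeddings[t] for t in tokens if t in embeddings]
    context_vec = [sum(col) / len(vectors) for col in zip(*vectors)]
    # 3. Score every vocabulary item against the context and normalize to probabilities.
    probs = softmax([dot(context_vec, embeddings[tok]) for tok in vocab])
    # 4. The "prediction" is simply the highest-probability token.
    best = max(range(len(vocab)), key=lambda i: probs[i])
    return vocab[best], round(probs[best], 2)

print(predict_next("the cat sat on the"))
```

Running the sketch prints ('mat', 0.18): the statistically most likely continuation, chosen without the slightest idea of what a cat or a mat is.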
The model doesn’t “predict” out of thin air. It has undergone pre-training on a huge database, usually from open sources on the internet. That’s where the neural network draws all its “intelligence.”
Language models do not “understand” text in the human sense but calculate statistical patterns. All leading modern chatbots use the same basic architecture called “Transformer,” which works on this principle.
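For the curious, the snippet below is an illustrative sketch of that principle: the attention operation at the core of the Transformer. The vectors and projection matrices are random stand-ins rather than anything taken from a real model; the point is only to show how each token’s “query” is compared with every token’s “key,” and how the resulting weights decide how strongly tokens influence one another.

```python
import numpy as np

# Made-up 4-dimensional representations for three tokens, e.g. "the", "cat", "sat".
x = np.array([
    [0.1, 0.0, 0.2, 0.3],
    [0.9, 0.1, 0.0, 0.4],
    [0.0, 0.8, 0.1, 0.2],
])

d = x.shape[1]
rng = np.random.default_rng(0)
# In a trained Transformer these projection matrices are learned; here they are random.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))

Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Scaled dot-product attention: compare every query with every key...
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)

# ...then mix the value vectors according to those weights.
output = weights @ V
print(np.round(weights, 2))  # each row sums to 1: how much token i "attends" to token j
```

Stacks of layers like this, trained on enormous amounts of text, are where the statistical patterns described above come from; no reasoning module is hiding anywhere in the math.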
It is a rough analogy, of course, but LLMs can be thought of as very powerful calculators sitting on top of a large database: a strong, useful tool that simplifies many aspects of our lives, yet one to which it is premature to attribute full-fledged intelligence.
Modern chatbots resemble a new iteration of search engines (hello, Google Gemini) more than an omniscient pocket assistant.
Moreover, questions about the reliability of AI answers remain. One look at the statistics on neural-network hallucinations and outright falsehoods is enough to make you want to go back to the classic “just Google it.”
Comparison of answer accuracy between GPT-5 and o4-mini. Source: OpenAI.
Boo, Scared?
The main thesis of apocalypse supporters is that “AI is becoming exponentially smarter,” so once it surpasses human intelligence, humanity as a species will come to an end.
Modern AI undoubtedly already surpasses us in the accuracy of processing and transforming data. A neural network can, for example, retell “Wikipedia” quite thoroughly. But that is roughly where its knowledge ends. More precisely, the model simply cannot apply that knowledge for “personal purposes”: it does not know how, and that is not its job.
Furthermore, it is already well established that artificial intelligence does not understand the world around it. The laws of physics are a closed book to AI.
All development of language models has boiled down to expanding the range of predictions (guessing tokens). However, AI is quickly approaching the limits of text-based training, and more and more voices are calling for the creation of “spatial” intelligence.
But while the technology’s own weak points can at least be identified, and work on them is already underway, harder questions remain open.
Humanity itself still finds many aspects of the brain’s structure a mystery. What hope, then, of recreating something so complex in a digital environment?
Another nearly insurmountable obstacle for AI is creativity, the ability to create something genuinely new. LLMs are technically incapable of stepping beyond their architectural limits, because their work rests on processing existing data.
Thus, the future of AI directly depends on what information humanity feeds into it, and so far, all training materials are solely aimed at benefiting people.
To be fair, it’s worth mentioning Elon Musk and his Grok. At one point, users noticed bias in the chatbot and a tendency to overestimate the billionaire’s capabilities. This is a rather alarming signal from an ethical standpoint, but it’s unlikely that a potential “NeuroElon” could physically harm humanity.
By long-standing convention, the only goal of artificial intelligence applications is to carry out user requests. A chatbot has no will or desires of its own, and in the foreseeable future this paradigm is unlikely to change.
Anatomy of Fear
So why are we still scared of this same AI, which has turned out to be not all that “smart”? The main answers are obvious.
Setting aside plain misunderstanding of the technology, the simplest reason is a hunger for money or publicity.
Consider one of the “prophets of doom,” Eliezer Yudkowsky. An AI researcher and co-author of the book If Anyone Builds It, Everyone Dies, he has been warning since the 2000s about a superintelligent AI supposedly devoid of human values.
Book cover. Source: Instaread.
“Superintelligence” is not yet visible, as Yudkowsky himself often admits. But this doesn’t stop him from speaking loudly on podcasts and selling books.
Geoffrey Hinton, the famous computer scientist and “godfather of AI,” has also voiced near-apocalyptic fears. He put the probability that the technology leads to human extinction within the next 30 years at 10-20%.
According to Hinton, as capabilities grow, the strategy of “keeping artificial intelligence under control” may cease to work, and agent systems will seek survival and expand control.
Here, too, it remains unclear who would give neural networks a “will to live,” or why. Hinton continues to work on neural-network training and received the 2024 Nobel Prize in Physics for achievements in the field. In early 2026, he became the second scientist in history, after computer scientist Yoshua Bengio, to reach 1 million citations.
Surprisingly, the forecasts of Google Brain co-founder Andrew Ng sound more grounded. He has called artificial intelligence an “extremely limited” technology and expressed confidence that algorithms will not be able to replace humans in the foreseeable future.
Obviously, loud and sharp-tongued prognosticators exist in every field. Their prominence in the AI industry can also be explained by the public’s love of science fiction. Who doesn’t want to thrill themselves with stories in the spirit of Philip K. Dick or Robert Sheckley, only with the plot unfolding in present-day reality?
Against this backdrop, statements by large corporations warning of threats to jobs and promising rapid AI development raise even more questions. While the second largely serves to justify the need to cut costs, the first inadvertently veers into conspiracy-theory territory.
For example, one of the largest companies in the world — Amazon — has laid off over 30,000 employees in the past six months. Management cites plans for optimization and the impact of automation, including AI implementation.
Warehouse-robot development does continue. But some argue the reason is far more prosaic: the mass layoffs are the result of poor HR management during the COVID-19 pandemic.
Amazon is not the only example. AI companies from Silicon Valley continue expanding their staff and leasing new premises.
Meanwhile, back in 2023, executives of many of these same companies signed a statement from the Center for AI Safety warning that artificial intelligence poses “existential risks” comparable to pandemics and nuclear wars.
Statement of the Center for AI Safety. Source: aistatement.com.
Over time, the statement was forgotten, work continued, and no visible threat materialized.
From a corporate perspective, in an era of talk about an inflated AI bubble, pointing to technological change is a more convenient explanation than admitting structural management mistakes. But such statements paint a false picture of what is happening and distract from the real problems: misinformation and deepfakes.
Artificial intelligence does not steal jobs; it changes the approach to work itself, sometimes simplifying it. A narrowly focused Harvard study, however, suggests that AI can also complicate and slow down processes inside a company.
The technology will undoubtedly penetrate all areas of our lives: education, science, commerce, politics. But in what form it will be present depends solely on people. For now, neural networks do not have a voice.
Beyond Our Reach
Everything above concerned publicly available AI: chatbots and generative image tools. Behind closed doors, of course, more serious developments exist.
Among the comparatively simple examples are specialized models in medicine and archaeology: some help synthesize new proteins, others decipher ancient documents that resist traditional analysis.
However, the results of such research, testing, and deployment usually surface only in hard-to-find internal reports or specialized publications, so public awareness of them is close to zero. Yet it is quite likely that the biggest breakthroughs are happening precisely there.
Even in closed laboratories, though, an “AI doomsday machine” is unlikely to appear: such models are highly specialized and capable only of what they were designed to do.
Fears of AI getting out of control are really reflections of our own anxieties, whether about losing jobs or about thornier ethical questions. But as long as we humans determine the technology’s future, setting its direction and goals, AI remains a tool, not an independent agent with a will of its own.
Talking about potential risks is the right thing to do. Inventing apocalyptic theories is human nature. But both should always be met with skepticism, or even irony. As long as there is an “off” button, no digital superintelligence threatens our world.
Vasily Smirnov