2026-02-18·1 min read

AGI Might Be a Pipe Dream, At Least the Way We’re Trying to Build It

We are not just scaling intelligence, we are scaling energy consumption. At some point, the returns shrink while the cost keeps growing.

AI · AGI · Scalability · Philosophy

We all talk about Artificial General Intelligence as if it were just another software milestone, but it is more than that: it would mark a fundamental shift in what intelligence means on this planet. When Neil Armstrong took his first step on the Moon, he called it a giant leap for mankind. AGI would not just be a leap in exploration, but a leap in the nature of intelligence itself.

Science fiction has been preparing us for this idea for decades. In books like Frank Herbert's Dune and Isaac Asimov's Foundation, superior intelligence shapes entire civilizations. In The Hitchhiker's Guide to the Galaxy, we build a supercomputer to find the answer to everything, only to realize that we don't understand the question itself. Philosophy has asked us the same thing for centuries. Marcus Aurelius wrote that intelligence is not just calculation or prediction, but judgement, purpose, and, perhaps most important of all, understanding.

Today we are trying to build something that lays claim to all of that, using data, transformers, and enormous amounts of compute.

But before we assume that AGI is inevitable, there is a harder question to face. Are we actually on a path that can reach general intelligence, or are we mistaking powerful pattern recognition for something much deeper?

This is a working theory, not a conclusion. The path we are on may be hitting limits in data, energy, and architecture, and true general intelligence may require something fundamentally different from what we are building today.

Modern AI systems feed on human-generated data. They are trained on it and see the world entirely through it. But we are nearly at the point where all of the written data on the internet is either exhausted or locked away as proprietary company assets. There is a hard limit on how much information is left to collect.

Another point to notice is that most of this information is not really high quality. People have to realize that AI is trained on Reddit, and as we all know, Reddit is not a source of truth. It can be full of bad actors and noise. High-quality, diverse, and labeled data is already close to exhausted, and if intelligence depends on better data, then we are approaching a ceiling.

One proposed solution is synthetic data. But here we run into another problem. Much of this “synthetic” data is generated by existing AI systems. I have friends working at companies building these pipelines, and even there, the process often involves models generating data that is then lightly curated by humans. When new models are trained on outputs of older models, errors and biases compound. It becomes a feedback loop, like copies of copies of copies. The result is a gradual dilution of originality and signal. Synthetic data may extend current systems, but it is unlikely to bootstrap true general intelligence.
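The copies-of-copies effect can be made concrete with a toy simulation. This sketch (my own illustration, not anyone's actual pipeline) models each "generation" as fitting a Gaussian to a finite sample drawn from the previous generation's fit; with no fresh real data, the estimated distribution drifts away from the original, generation after generation.

```python
import random
import statistics

random.seed(42)  # deterministic run for illustration

def next_generation(mean, stdev, n=50):
    """Sample n points from the current 'model', then refit a new model
    from those samples alone -- i.e., training purely on model output."""
    samples = [random.gauss(mean, stdev) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, stdev = 0.0, 1.0            # generation 0: the original "real" data
history = [(mean, stdev)]
for _ in range(30):               # thirty generations of copies of copies
    mean, stdev = next_generation(mean, stdev)
    history.append((mean, stdev))

print(f"gen  0: mean={history[0][0]:+.3f}, stdev={history[0][1]:.3f}")
print(f"gen 30: mean={history[-1][0]:+.3f}, stdev={history[-1][1]:.3f}")
```

Nothing corrects the estimation error at any step, so the small sampling noise of each generation compounds into a random walk away from the true distribution, which is the "dilution of originality and signal" described above.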

Transformers themselves have limits. They excel at predicting sequences, but they are not grounded in a model of reality. In many ways, they are extremely advanced pattern completers. Research across institutions like Apple, Stanford, and MIT has pointed out gaps: limited persistent memory, unstable identity over long horizons, and weak long-term causal reasoning. Context windows grow, but they remain bounded. Each interaction is effectively a very large prompt, not a continuously evolving mind.

Along with this, every leap in model size comes with a massive increase in energy consumption. Training frontier models already costs millions in compute and produces significant carbon output. Intelligence at global scale may need to be efficient, but our current approach is anything but.

Today, some models approach the trillion-parameter range. It is tempting to compare that to the roughly 100 trillion synapses in the human brain, but the gap highlights a deeper issue. Brute-force scaling alone is unlikely to reach human-level intelligence within realistic energy and resource constraints.
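A back-of-envelope calculation makes the gap vivid. All of the numbers below are rough public estimates, not measurements, and are chosen only to show the orders of magnitude involved.

```python
# Back-of-envelope comparison; every figure here is a rough, commonly
# cited estimate, used only to illustrate orders of magnitude.
model_params   = 1e12   # ~1 trillion parameters (frontier model scale)
brain_synapses = 1e14   # ~100 trillion synapses (common neuroscience estimate)

brain_watts   = 20      # human brain power draw, rough estimate
cluster_watts = 10e6    # ~10 MW training cluster, assumed for illustration

print(f"synapses per parameter: {brain_synapses / model_params:,.0f}x")
print(f"power gap:              {cluster_watts / brain_watts:,.0f}x")
```

Even granting that a parameter and a synapse are not equivalent units, the brain delivers roughly a hundred times the connectivity on a power budget smaller than a light bulb, while our systems head in the opposite direction.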

Early scaling delivered dramatic gains. Now, each increase in size yields smaller improvements at far greater cost. We are seeing diminishing returns. Bigger is not a long-term strategy. It is a temporary hack.

Artificial General Intelligence may require self-awareness, subjective experience, or at least a personal sense of self. We do not know how to define that precisely, let alone engineer it. Building intelligence without understanding consciousness might be like building airplanes without understanding air. If current methods plateau, which in my opinion we may see in the very near future, AGI might not be reachable through this paradigm. That does not mean intelligence is impossible, only that our current approach may not be sufficient, or even the right one. The ceiling might be architectural, not computational. Maybe transformers aren't the way to go.

Future progress may come from hybrid systems, world models, embodied agents, neuromorphic hardware, or even new learning paradigms. Intelligence might emerge from systems that learn, remember, act, and adapt in the real world. The next leap may look less like a bigger model and more like a different kind of system entirely.