
"Hearst Magazines and Yahoo may earn commission or revenue on some items through these links."
Here’s what you’ll learn when you read this story:
Moore’s Law was a ten-year forecast on transistors that instead held true for decades.
We are reaching the physical limit of how powerful the same size of computer chip can be.
For those who seek the technological singularity, no new paradigm is ready to step in.
Once upon a time, the world seemed like it was made of larger building blocks. Rain fell from what looked like opaque, puffy masses that also blocked the Sun. The human body seemed self-contained and solid, and there wasn’t a way to prove otherwise. Even when alchemists were melting pieces of ore, they thought mercury must be related to silver. How could it not be? They looked the same, after all.
We know that the universe itself moves toward disorder, but our knowledge of the universe moves toward the minuscule. Higher resolutions, more powerful zoom, electron microscopes, particle accelerators, nuclear energy. For technology, powering these levels of detail has fallen into the broad purview of computer chips—and computer chips have also gotten higher-resolution, in a sense.
On a basic level, computers use circuitry—carefully mapped series of connections between different conductive or semiconductive parts—to do tons of arithmetic. Early punch cards (predecessors to the chips we have now) had openings so that portions of circuitry could literally form a connection, like playing certain notes on the piano using certain fingers, or connecting phone lines on an old switchboard. But the scale of our electronics has shrunk so far that it's almost hard to fathom.
In 1965, in the midst of a global semiconductor boom, engineer Gordon Moore published a now-famous article in which he made an observation. Moore, then at Fairchild Semiconductor (he would go on to cofound computer chip pioneer Intel), had noticed that transistors—switches used to direct the current within electrical devices—were shrinking at a pretty consistent rate. That same shrinking had already made possible the 1958 invention of the integrated circuit, which could be installed in devices that were previously built one transistor at a time.
Moore included a now-iconic graph of the number of components on an integrated circuit over time, which showed the count doubling at a steady clip between 1962 and his extrapolated endpoint of 1975. In the accompanying text, he suggested this trend could actually last for the next ten years (until 1975). But Moore's Law, as it was later called, eventually settled into the familiar formulation of a doubling roughly every two years, and it has held for decades past this initial forecast. It's been taught as canon in computer science programs around the world (even though it has never been, nor was it ever intended to be, a hard and fast rule).
But for the last several years, those in the computing industry (and those who study it) have started to discuss the "end of Moore's Law." There's a point at which transistors simply can't get any smaller because of the basics of physics itself—these tiny transistors must still be able to communicate with the rest of what's required to build an integrated circuit, be widely manufacturable, and stay cost-effective.
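To get a feel for what that compounding implies, here is a minimal Python sketch of the arithmetic, assuming a strict doubling every two years starting from a notional 1965-era chip; the starting count of 64 components is an illustrative round number, not Moore's exact data.

```python
# A minimal sketch of the arithmetic behind Moore's Law (illustrative only).
# The 1965 base count and the strict two-year doubling period are
# simplifying assumptions, not Moore's actual figures.

def transistors(year, base_year=1965, base_count=64, doubling_years=2):
    """Project a component count assuming it doubles every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1965, 1975, 1985, 2005, 2025):
    print(year, f"{transistors(year):,.0f}")
```

Run out for 60 years, that toy doubling takes a 64-component chip into the tens of billions of components, which is why a forecast meant to last a decade became an industry roadmap.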
Moore’s Paradox?
The slowdown of Moore's Law has been notable for a while. From MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL):
If you ask MIT Professor Charles Leiserson, Moore’s Law has been over since at least 2016. [H]e points out that it took Intel five years to go from 14-nanometer technology (2014) to 10-nanometer technology (2019), rather than the two years Moore’s Law would predict.
But that reality, and the reality of physics itself, is somewhat at odds with the widely promoted corporate technologies of 2025. Companies like OpenAI make opaque promises about how generative AI will change all of our lives, save us hours a week, and make many sectors of human labor obsolete. Venture capitalists have leveraged these promises to attract investors, while companies like Microsoft have started to force their employees to use generative AI in the workplace.
You can counter the slowing shrinkage of transistor design by simply making larger and larger computers. Manufacturers and generative AI companies are already doing this. They’re also designing all other elements of these machines to be as efficient as possible. However, that’s not a long-term solution to the growing demand for this amount of computing. Like leadership of the late Roman Empire or the icing on a dry cake, our computing components can’t be spread too thin.
However, if you’re rich and you don’t like the idea of a limit on computing, you can turn to futurism, longtermism, or “AI optimism,” depending on your favorite flavor. People in these camps believe in developing AI as fast as possible so we can (they claim) keep guardrails in place that will prevent AI from going rogue or becoming evil. (Today, people can’t seem to—or don’t want to—control whether or not their chatbots become racist, are “sensual” with children, or induce psychosis in the general population, but sure.)
The goal of these AI boosters is known as artificial general intelligence, or AGI. They theorize, or even hope for, an AI so powerful that it thinks like... well... a human mind whose ability is enhanced by a billion computers. If someone ever does develop an AGI that surpasses human intelligence, that moment is known as the AI singularity. (There are other, unrelated singularities in physics.) AI optimists want to accelerate the singularity and usher in this “godlike” AGI.
Predictability
One of the key facts of computer logic is that, if you can slow the processes down enough and look at them in enough detail, you can track and predict every single thing that a program will do. Algorithms (and not the opaque AI kind) guide everything within a computer. Over the decades, experts have specified the exact ways information can be sent, one bit—one minuscule electrical zap—at a time through a central processing unit (CPU).
From there, those bits are assembled into a slightly more concrete format as another type of code. That code becomes another layer, and another, until a solitaire game or streaming video or Microsoft Word document comes out. Networks work the same way, with your video or document broken into pieces, then broken down further and further until tiny packets of data can be carted back and forth as electrical zaps over lengths of wire.
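As a toy illustration of that last step, the following Python sketch breaks a message into fixed-size chunks and reassembles them in order; the eight-byte "packet" size and the sequence-number scheme are invented for illustration and bear no resemblance to a real network protocol.

```python
# A toy illustration of breaking data into "packets" and reassembling it.
# The 8-byte payload size and sequence numbering are arbitrary choices;
# real protocols are far more elaborate.

def to_packets(data: bytes, size: int = 8):
    """Split data into (sequence_number, chunk) pairs."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets):
    """Sort the chunks back into order and join them."""
    return b"".join(chunk for _, chunk in sorted(packets))

message = b"a streaming video, in miniature"
packets = to_packets(message)
assert reassemble(packets) == message
```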
The human brain is, in some ways, another piece of electrical machinery. The National Institute of Standards and Technology (NIST) quantifies it as an exaflop-caliber computer: "a billion-billion (1 followed by 18 zeros) mathematical operations per second—with just 20 watts of power." By this standard, you could power dozens of human brains from a single U.S. household outlet. NIST cites the world-class Oak Ridge Frontier supercomputer as requiring "a million times more power" to do the same level of computing.
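The gap is easier to feel with the arithmetic written out. The short sketch below uses NIST's 20-watt brain figure and its "million times more power" comparison, plus one assumption of my own: a standard 15-amp, 120-volt U.S. outlet, or roughly 1,800 watts.

```python
# Back-of-the-envelope energy comparison.
# Assumes a standard 15 A, 120 V U.S. household circuit (~1,800 W).
BRAIN_WATTS = 20                            # NIST's figure for the human brain
OUTLET_WATTS = 120 * 15                     # one household outlet, approximately
FRONTIER_WATTS = BRAIN_WATTS * 1_000_000    # "a million times more power"

print(OUTLET_WATTS // BRAIN_WATTS, "brains per outlet")          # 90
print(FRONTIER_WATTS / 1e6, "megawatts for the supercomputer")   # 20.0
```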
It’s possible that the human brain is also predictable when you understand all of its parts and influences enough. But our brains have little in common with the abstracted, mathematical way our computers are designed. The earliest computers were mechanical, with physical parts that visibly connected with and moved each other. And despite an iconic, massively influential paper stating otherwise, the cell is not like a machine. (Mitochondria, you can still be the “powerhouse!”)
The California Institute of Technology (Caltech) has a primer on how the brain works:
When you think, networks of cells send signals throughout your brain. These networks integrate new information from your senses with emotions, habitual thought processes, memories, and context to drive decisions.
For example, when you see a friend’s face, networks of nerve cells get to work. Your brain uses a few quick measurements to check who the friend is, notes how your body involuntarily responds to seeing them, generates an emotional response, puts the sight of them in context with memories and current events, chooses a response, and, perhaps, instructs your arm and face to wave and smile.
As you grew from infancy to the person you are today, the things you sensed, your experiences, and your choices and reflections have changed your brain, developing its unique cellular pathways.
There are countless ways the human brain could be boosted or hindered by factors we can't even measure yet. We don't even know why many common antidepressants and other medications work in the brain—just that they do. We can't predict when a particular turn of phrase or "certain slant of light" will remind us of childhood, a popular TV show, what we had for dinner the other day, or a pair of shoes we used to wear. We are many years away from a diagrammatic understanding of the brain the way we understand manufactured computer parts.
Computing Power
Because of that gap in understanding, there’s no guarantee that a certain amount of computing power comparable to a human brain (or even a million human brains) would become sentient or have consciousness. That seems especially true when aspiring “AI caretaker” engineers want their AIs to know everything from all of human history.
But let’s say that efficiency or quantity of information isn’t an issue. Let’s say we can build one-million-exaflop computers to run advanced AIs that will mimic human think tanks. How does the end of Moore’s Law affect scientists who work toward that technological singularity?
The answer is simple: size. That means both the amount of electrical energy required and the physical size associated with storage, processing, cooling, and everything else required to keep a computer running. There are a few directions we could go to solve the size problem, but none of them are easy to achieve.
AI boosters push nuclear fusion (another technology that is still far away) as a cure-all for the energy problems associated with large AI computing. But no one knows for sure when (or if) nuclear fusion will produce more energy than what is required to run nuclear fusion facilities. That has not happened yet. It will not happen for years and years.
There are also space-based possibilities. The Kardashev scale is a thought exercise about Solar System- or galaxy-scale civilizations. As humankind advances, the next step on the Kardashev scale would be to start turning entire planets into data farms or harvesting the energy of entire stars using Dyson spheres. But while Moore's Law was a forecast based on expertise in both technology and global supply chains, the Kardashev scale and Dyson spheres are thought exercises with no real-life analog at all. They are science fiction dreams.
On a more grounded level, quantum computing has been touted as an advance toward the realm of AI, ultimately leading into the singularity. But quantum computing is in its infancy, to say the least. It currently requires extreme cooling unlike anything in today's traditional computer realm. There is no usable consumer version of a quantum computer, and we're not even close to one. These machines must be painstakingly assembled by hand by engineers and physicists using tools like atomic tweezers.
All of that means we have a lot of options that are at least 10 years away—or even as much as 100 or 1,000 years away. Venture capitalists today are selling a vision of the future. Today, there is no nuclear fusion energy, there is no efficient quantum computing, and there is no Dyson sphere.
AI’s (Exa)Flop Era
“In this head the all-baffling brain,
In it and below it the makings of heroes.”—Walt Whitman
In the huge field of artificial intelligence, there are countless ways to define and work toward goals like finding new prescription drugs or faraway galaxies. AGI is a separate, specific idea, but even within that there are variations. The public discourse has grown very muddled because of the ambiguity of terms like “artificial intelligence” outside of their intended engineering contexts.
I personally believe that AGI is very far away—though some very smart people, like Google DeepMind and Imperial College London computer scientist Murray Shanahan, believe it’s closer than I think. (Shanahan’s book for MIT Press about the technological singularity is a great introduction.)
But others, like OpenAI’s Sam Altman, don’t seem to know what they’re talking about in any detail. Altman waves away questions about specifics of technologies he does not understand, while Shanahan writes detailed papers about the Wittgensteinian philosophical tests that AI models are growing ever more able to pass. Like the meme says, they are not the same.
Altman has suggested a Dyson sphere that encloses our Solar System, for example, as a back-of-the-napkin solution to the rising energy costs of AI. In 2019, more than 750 million people on Earth still lacked access to electricity, another 400-plus million couldn't make use of the electricity available where they live, and both numbers are subject to stagnation or even worsening in the wake of the global COVID-19 pandemic.
A Dyson sphere is a science fiction invention, with nothing close to a workable version anywhere near Earth or our stellar neighborhood. We would need to drain the entire Solar System (and more!) of certain elements to even build what Altman suggests. While Moore's Law is real, many factors of the singularity are not—at least, not this decade. Climate change and the global energy crisis, though, are very, very real.
Case Study: YInMn Blue
A lot of claims of "artificial intelligence" come down to highly developed algorithms combined with the ability of computers to test millions or billions of configurations at a time. This is one of AI's best use cases, because the human mind is just not good at this kind of work. In the same way that we can look around a room and categorize and remember many details at a glance, computers can plug away at enormous lists of ingredients without missing a beat or losing their place.
In 2024, chemist Mas Subramanian (the creator of the novel pigment YInMn Blue) told Popular Mechanics that algorithms to discover new molecules are difficult to work with because of factors that the public doesn't really understand. It's just not that easy to find a new pigment, for example—YInMn Blue has an unusual crystal structure. The chemistry that makes the color happens in a bipyramidal arrangement, Subramanian explains, rather than a tetrahedral or octahedral network. (A bipyramid is like two tetrahedrons, or "D4" shapes, glued together at one face. An octahedron has eight faces in a different arrangement.)
For a layperson, it's hard to understand how crystal structures like this can make a huge difference in the properties of a substance. But take carbon, for example. Graphite and diamond are different crystalline forms of the same element.
That need for context is a major limitation of algorithms as we know them. Machine learning might tell you to put diamond in your innovative new pencil or graphite in your engagement ring.
So, Subramanian explains, the machine learning algorithm suggests a long list that must be vetted by a human, and many suggestions don’t work in real life right off the bat. And because these models are trained on what already exists, they can’t innovate, in the most literal sense. “The breakthrough discovery comes from unknowns,” Subramanian said. “If you don’t have that in the starting point, how will you predict?”
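To make that division of labor concrete, here is a heavily simplified Python sketch of the workflow Subramanian describes: a stand-in "model" ranks candidate compositions, and every suggestion it surfaces still has to be vetted by a person or a lab. The candidate list, the scoring function, and the pass rate are all invented placeholders, not his method or any real chemistry.

```python
# A heavily simplified sketch of ML-assisted candidate screening.
# The candidates, the scoring heuristic, and the "lab check" are invented
# placeholders; real materials discovery is vastly more involved.
import random

candidate_pigments = [f"compound_{i}" for i in range(1000)]

def model_score(candidate: str) -> float:
    """Stand-in for a trained model's predicted score for a candidate."""
    return random.random()

def passes_lab_check(candidate: str) -> bool:
    """Stand-in for human vetting and synthesis in a real lab."""
    return random.random() > 0.9   # most suggestions fail in practice

# The "model" happily ranks a thousand candidates in an instant...
shortlist = sorted(candidate_pigments, key=model_score, reverse=True)[:20]

# ...but a human still has to vet every suggestion it surfaces.
viable = [c for c in shortlist if passes_lab_check(c)]
print(f"{len(shortlist)} suggestions, {len(viable)} survive vetting")
```

The code can only rank what it is given; nothing in it can propose a crystal structure no one has thought to feed it, which is Subramanian's point about breakthroughs coming from unknowns.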
Back to Moore’s Law
The end of Moore's Law as an engineering benchmark is as helpful to us today as Moore's original observation was in the 1960s. Concrete observations based on data and logistics can help manufacturers around the world adjust their planned products, research and development, and even marketing. Indeed, as the transistor industry approaches the limits of physics itself, it highlights a gap we're about to encounter as a species—there is nothing ready to replace, let alone surpass, our existing computer paradigm in the near future.
Today, people like Sam Altman will tell you they're selling you the building blocks of the singularity. But as the people of River City found out in The Music Man, someone selling you your first trombone shouldn't tell you it comes with a first-chair position in the New York Philharmonic. The landmarks of expert-level artificial intelligence studies don't sound like sales pitches or sound bites—they sound more like Shanahan's clarifying note, written after he used some imprecise language in a paper that escaped containment and entered the mainstream press:
My paper “Talking About Large Language Models” has more than once been interpreted as advocating a reductionist stance towards large language models. But the paper was not intended that way, and I do not endorse such positions. This short note situates the paper in the context of a larger philosophical project that is concerned with the (mis)use of words rather than metaphysics, in the spirit of Wittgenstein’s later writing.
Indeed, in a context where large language models (LLMs) are used to “summarize,” Shanahan’s care means a great deal. His precision and corrections give others in his field somewhere to start—whether they agree or disagree with his positions. He concludes: “The aim, rather, was to remind readers of how unlike humans LLM-based systems are, how very differently they operate at a fundamental, mechanistic level, and to urge caution when using anthropomorphic language to talk about them.”
It's very different from Altman's public comment that he might need to Dyson-sphere the entire Solar System. The point stands: we don't even know how we'd build a computer big enough to need it.