In 1965, Intel cofounder Gordon E. Moore predicted that the number of transistors in an integrated circuit would double roughly every year, a forecast he revised in 1975 to a doubling every two years. Moore's prediction was based on a trend in chip manufacturing he had observed at Fairchild Semiconductor, a now-defunct semiconductor company then based in San Jose, California. The prediction would later become known as "Moore's Law," and it has served ever since as a benchmark for measuring the progress of computer chip technology.
For decades, Moore's Law described semiconductor trends fairly accurately as transistors became smaller and more numerous at a rapid rate. Transistor counts on computer chips did not strictly double every two years, but over the long run the number of transistors in an integrated circuit continued to grow exponentially. In 1994, the most advanced computer chips had just 10,000 transistors per square millimeter of surface area. By 2022, that figure had skyrocketed to 135,600,000.
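As a sanity check, the two density figures quoted above imply a doubling time very close to the famous two-year prediction. A short calculation, using only the numbers in this section:

```python
import math

# Transistor densities quoted above (transistors per square millimeter).
density_1994 = 10_000
density_2022 = 135_600_000

years = 2022 - 1994                           # a 28-year span
doublings = math.log2(density_2022 / density_1994)
doubling_time = years / doublings

print(f"{doublings:.1f} doublings in {years} years")        # 13.7 doublings
print(f"implied doubling time: {doubling_time:.2f} years")  # about 2.04 years
```

The implied doubling time of roughly two years is why the period is often described as Moore's Law holding "in the long run."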
The improvements in integrated circuits predicted by Moore’s Law have been a major driver of technological innovation since the late 1960s. As transistors became smaller, computers became cheaper, faster, and more efficient. The personal computers of the 1980s and 1990s, followed by the smartphones and tablets of the 2000s and 2010s, were made possible by the abundance of tiny, efficient transistors. Similarly, high-end supercomputers became more powerful than ever, allowing a new generation of scientists and inventors to conduct research and development with the help of computers and microprocessors.
The Physical Limits of Integrated Circuits
Despite the many advances in computing and semiconductors since 1965, the growth in transistor counts has slowed in recent years. Transistors continue to become smaller and more densely packed with each passing year, but not at the rate seen from the 1960s through the 2000s.
The semiconductor industry is already feeling the effects of the slowdown in Moore's Law as the year-over-year gains in microprocessor performance shrink. The performance of single-core processors increased by an average of 52 percent annually from 1986 to 2001. By 2018, however, that figure had dropped to just 3.5 percent.
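The difference between those two rates compounds dramatically. A quick illustration, using only the growth figures from the paragraph above:

```python
# Cumulative single-core speedup over a decade at the two annual
# improvement rates cited above.
fast_era = 1.52 ** 10    # 1986-2001 trend: +52% per year
slow_era = 1.035 ** 10   # post-2018 trend: +3.5% per year

print(f"52% per year for a decade:  {fast_era:.0f}x faster")   # about 66x
print(f"3.5% per year for a decade: {slow_era:.2f}x faster")   # about 1.41x
```

At the old rate, a decade of progress meant computers dozens of times faster; at the new rate, barely half again as fast.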
There are still some prospects for even smaller transistors and more efficient chips in the near future. These include Apple's upcoming 3-nm A17 chip, which is expected to be released by the end of 2023. And within the next few years, consumers may be able to purchase devices with transistors as small as 1 or 2 nm.
However, making traditional silicon-based transistors even smaller comes with plenty of challenges. Smaller transistors are naturally packed together more closely, and transistors dissipate heat as they run. When transistors become smaller and more densely packed, each square millimeter of the chip must therefore shed more heat. As a result, chips with smaller transistors are more likely to overheat, which can hurt computer performance and cause microprocessors to wear down more quickly.
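The heat problem can be made concrete with a back-of-the-envelope sketch. The scaling factors below are purely hypothetical, chosen only to show the shape of the argument: density grows with the square of the shrink factor, so unless per-transistor power falls just as fast, heat per unit area rises.

```python
# Hypothetical scaling factors, for illustration only.
shrink = 0.5                        # linear feature size halves
density_gain = 1 / shrink ** 2      # 4x as many transistors per mm^2
power_drop_per_transistor = 0.5     # assume per-transistor power only halves

# Heat generated per unit area rises whenever density grows faster
# than per-transistor power falls.
heat_density_change = density_gain * power_drop_per_transistor

print(f"transistors per mm^2: {density_gain:.0f}x")        # 4x
print(f"heat per mm^2:        {heat_density_change:.0f}x")  # 2x
```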
In addition, smaller transistors are more vulnerable to quantum fluctuations, which become more prominent at smaller scales. Quantum fluctuations can make computer chips less reliable if they affect the transistors on the chips. Researchers have documented cases of quantum fluctuations interfering with flash memory and other essential functions of computer chips with extremely tiny transistors.
While it has become increasingly difficult for semiconductor manufacturers to make transistors smaller, it is still possible to increase the total number of transistors by producing computer chips with multiple cores. Technology companies have begun selling electronic devices with multicore processors to keep overall processing capacity growing in the spirit of Moore's Law. By now, virtually every new computer sold has a multicore processor. For example, an Apple MacBook Pro can now come with as many as 12 cores in its central processing unit (CPU) and 38 cores in its graphics processing unit (GPU).
A major advantage of multicore processors over their single-core counterparts is their ability to run multiple tasks (or threads) at the same time. This feature of multicore processors makes it especially convenient to run several highly specialized tasks concurrently on multicore GPUs, with each core running its own separate thread.
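The pattern of splitting one job into independent tasks can be sketched in a few lines of Python. The chunk boundaries and thread count below are arbitrary illustration, not tied to any particular processor:

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One independent task; the OS can schedule each thread on its own core."""
    lo, hi = bounds
    return sum(range(lo, hi))

# Split one large job into four independent chunks and run them concurrently.
# (For CPU-bound work in CPython, a ProcessPoolExecutor sidesteps the global
# interpreter lock; the structure of the code is identical.)
chunks = [(0, 250), (250, 500), (500, 750), (750, 1000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)  # 499500, the sum of 0..999
```

Each chunk is a separate thread of work, mirroring how a multicore GPU assigns one specialized task per core.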
However, adding cores does not make each individual core faster or more efficient. Since the chips and transistors do not become any smaller in a multicore processor, they do not use energy more efficiently than those in single-core processors. In fact, multicore processors require more total power to run than single-core processors, since extra power is needed to drive the additional cores.
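There is a further limit the section does not mention by name: Amdahl's law, a standard result stating that if only part of a task can be parallelized, adding cores yields diminishing returns. A minimal sketch, using the 12- and 38-core counts from the MacBook Pro example and an assumed 90 percent parallel fraction:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: overall speedup when only part of a task parallelizes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even if 90% of a workload parallelizes perfectly, speedup saturates:
for cores in (1, 2, 4, 12, 38):
    print(f"{cores:3d} cores -> {amdahl_speedup(0.9, cores):.2f}x")
```

With a 90 percent parallel fraction, 12 cores yield only about a 5.7x speedup and 38 cores about 8.1x; the serial 10 percent caps the benefit no matter how many cores are added.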
Silicon's decades-long dominance in the semiconductor industry, enabled by the relatively low cost of manufacturing silicon in a usable form, may soon be coming to an end. The slowdown in the advancement of silicon-based computer chips has caused innovators, researchers, and entrepreneurs to search for alternative materials for making transistors. These materials are often more conductive than silicon and have an electronic structure better suited to certain types of chips. The semiconductor industry hopes that next-generation computer chips made from these materials will be able to keep pace with Moore's Law.
Graphene is often considered the most promising alternative to silicon for computer chips due to its high electron mobility, thermal conductivity, and mechanical strength. Graphene can conduct electricity up to 250 times better than silicon, a rate faster than any other known substance. In 2017, a team of researchers from universities in Illinois, Texas, and Florida built integrated circuits using transistors made from graphene ribbons and carbon nanotubes. These graphene-based microchips achieved processing speeds about 1,000 times faster than comparable silicon-based integrated circuits. Graphene transistors also offer energy savings, reportedly requiring only about one percent of the power of a comparable silicon-based chip.
There is still a lot of work to do before graphene transistors can become widely used in consumer and industrial applications. It is currently very expensive to produce graphene-based integrated circuits, let alone manufacture them on an industrial scale. However, semiconductor researchers hope that, in the near future, graphene transistors will allow computer hardware companies to continue advancing chip technology at a rapid rate.
The rise of quantum computing presents another opportunity to circumvent the limitations posed by traditional silicon-based transistors and continue advancing semiconductor technology at a rate on par with Moore’s Law. While quantum computers are still not as widely used as classical computers, they do present an opportunity to dramatically expand computational capabilities.
A major advantage of quantum computing over classical computing is that each quantum bit (qubit) is more powerful than a classical bit. While a classical bit can only exist in one logical state (0 or 1) at a time, a qubit can exist in a linear superposition of two logical states (0 and 1) simultaneously. Two qubits can together exist in a linear superposition of four logical states (00, 01, 10, and 11) simultaneously.
Three qubits can exist in a superposition of eight logical states, four qubits in a superposition of sixteen logical states, and so on. Whenever another qubit is added, the number of logical states the qubits can together occupy doubles. This pattern allows the processing power of quantum computers to grow exponentially as the number of qubits grows linearly.
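The doubling pattern described above is simply 2 raised to the number of qubits, and the exponential adds up quickly:

```python
# An n-qubit register can occupy a superposition of 2**n logical states.
for n in range(1, 5):
    print(f"{n} qubit(s): {2 ** n} logical states")

# Fifty qubits already span more states than there are bytes in a petabyte.
print(f"50 qubits: {2 ** 50:,} logical states")  # 1,125,899,906,842,624
```

A classical register, by contrast, holds exactly one of those states at a time, which is why each added qubit doubles the quantum computer's state space while each added classical bit merely adds one more binary digit.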
Quantum computing does face major hurdles that researchers must overcome before commercially viable quantum computers can become widespread. Qubits, due to their quantum nature, can become unintentionally entangled with stray particles in their environment. In addition, qubits must be kept at extremely low temperatures (close to absolute zero) to avoid quantum decoherence, in which qubits lose their quantum properties. Both unwanted entanglement and decoherence can destroy the information stored in qubits, making quantum computers prone to errors.
However, companies like Intel, Google, and IBM are investing billions of dollars into developing technologies that would make quantum computing commercially feasible. Researchers are developing ways to correct errors made by quantum computers, though this may require building quantum computers with millions of qubits. Thus, the world may have to wait until around 2035 for quantum computers to become mainstream.
Moore’s Law has served the semiconductor industry well for about half a century as transistors became smaller and integrated circuits became more efficient. However, as transistors approach 1 to 2 nanometers in size, they may become increasingly unreliable or difficult to manufacture. These challenges are already leading to slowdowns in the advancement of semiconductor technology.
However, these slowdowns may be overcome with the help of powerful alternatives to classical silicon-based computer chips. From graphene transistors to quantum computers, innovators are developing new tools to exponentially increase computational power in a manner similar to Moore's Law. While these technologies are still in their infancy, they will likely accelerate the advancement of computers and semiconductors in the near future.