Friday, April 22, 2005

Are we near the end of Moore's Law?

In Gordon Moore's original observation, made in 1965, he argued that the number of transistors per integrated circuit increased exponentially, doubling about every year. The pace couldn't quite be sustained at that level, and Moore made a downward revision in 1975, saying that the count doubled about every 2 years. Some claim that he revised it to 18 months, which, over the past 20 years, has proven even more reliable (Moore's original paper [pdf]). When this prediction was made, the processor was cost-effective at 50 transistors per chip. Soon after, Intel produced the 4004, the world's first single-chip microprocessor. The 4004 contained 2,300 transistors, packed into a die an eighth of an inch wide by a sixth of an inch long. Today, the Itanium 2 chip contains half a billion transistors, or about 2^29, to put it in context. Wikipedia has a pretty nice graph of the relevant data.
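
To make the exponential concrete, here is a rough back-of-the-envelope sketch in Python. The 1971 and 2003 data points are the ones mentioned above; the two doubling times are the commonly quoted 2-year and 18-month figures, and the projections only bracket reality loosely:

    # Moore's Law as an exponential: N(t) = N0 * 2**((t - t0) / T)
    # Starting point: Intel 4004, ~2,300 transistors, 1971.
    # Comparison point: Itanium 2, ~500 million transistors, ~2003.

    def transistors(year, n0=2300, t0=1971, doubling_years=2.0):
        """Projected transistor count assuming a fixed doubling time."""
        return n0 * 2 ** ((year - t0) / doubling_years)

    for T in (2.0, 1.5):
        projected = transistors(2003, doubling_years=T)
        print(f"Doubling every {T} years -> ~{projected:.2e} transistors by 2003")

    # The Itanium 2's ~5e8 transistors fall between the two projections,
    # which is roughly what the 18-month-to-2-year range predicts.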

There is now good reason to suggest that Moore's Law, which has been so reliable for so long, may be on the verge of losing its relevance. Many have suggested that Moore's Law can no longer be maintained because of economic factors or technological limitations. The intent of this post is to show why the opposite is true. I believe we are on the verge of outstripping Moore's doubling time.

Chip manufacturers are confident that they will be able to maintain the pace of Moore's Law for the next decade. As of the fourth quarter of 2004, transistors in microprocessors were a little over 100 nanometers (nm) across (a nanometer is 10^-9 meters, or one one-billionth of a meter). If we assume that the transistor shrinks proportionally so that chip size stays constant, then in 10 years we would expect the transistor to be 10 nm across and the processor to contain 50 billion of them. If the industry leaders are correct, this should be well within our capabilities. But in 2003, several members of the Institute of Electrical and Electronics Engineers (Zhirnov, Cavin and Hutchby) submitted a paper proposing that we may be about to hit a wall when it comes to scaling electronics.
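
A quick sketch of the geometric scaling behind that projection (the 100 nm, 10 nm and half-a-billion figures are the ones used above; the rest is just arithmetic):

    # If the linear feature size shrinks by a factor s while die area stays fixed,
    # the number of transistors that fit grows by roughly s**2.

    feature_now_nm = 100      # approximate transistor size, late 2004
    feature_future_nm = 10    # projected size a decade out
    count_now = 5e8           # ~half a billion transistors (Itanium 2)

    shrink = feature_now_nm / feature_future_nm        # 10x linear shrink
    count_future = count_now * shrink ** 2             # ~100x more per die

    print(f"Linear shrink: {shrink:.0f}x, density gain: {shrink**2:.0f}x")
    print(f"Projected transistor count: ~{count_future:.0e}")   # ~5e10, i.e. 50 billion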

Their paper, Limits to Binary Logic Switch Scaling--A Gedanken Model [pdf], proposed that switching in transistors is limited by constraints defined by Heisenberg's Uncertainty Principle. The paper used the term "energy barriers" to describe the potential between the gate and the carrier, but no matter how great the potential difference, eventually the tunneling of electrons and holes becomes too great for the transistor to perform reliable operations. In short, the two states of the switch become indistinguishable. This cannot be allowed in a binary system, yet it would happen if the transistor shrinks to around 4 nm. Indeed, that is the size a transistor would reach in about 13 years under strict adherence to Moore's Law.
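
To get a feel for the kind of limit the paper is talking about, here is a toy estimate (my own sketch, not the authors' derivation): take the smallest barrier energy that still distinguishes two states against thermal noise at room temperature, E = kT ln 2, and ask how small a switch the uncertainty principle allows for a carrier at that energy. The answer lands in the low single-digit nanometers, the same ballpark as the wall described above.

    import math

    # Physical constants (SI units)
    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    k_B  = 1.380649e-23      # Boltzmann constant, J/K
    m_e  = 9.1093837e-31     # electron mass, kg
    T    = 300.0             # room temperature, K

    # Minimum barrier energy needed to tell two states apart against thermal noise
    E_min = k_B * T * math.log(2)

    # Heisenberg: delta_x ~ hbar / delta_p, with delta_p = sqrt(2 * m_e * E_min)
    x_min = hbar / math.sqrt(2 * m_e * E_min)

    print(f"E_min ~ {E_min:.2e} J")
    print(f"x_min ~ {x_min * 1e9:.2f} nm")   # roughly 1.5 nm for an idealized switch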

They add that the heat from these transistors will be very difficult to manage, because doing so would require somehow diverting the heat produced by such a 5 nm device away from the processor. The alternative, cooling the entire processor, would produce more heat than it takes away.

In addition, costs are rising for the producers of these chips. From the Wikipedia article:

It is interesting to note that as the cost of computer power continues to fall (from the perspective of a consumer), the cost for producers to achieve Moore's Law has followed the opposite trend: R&D, manufacturing, and test costs have increased steadily with each new generation of chips. As the cost of semiconductor equipment is expected to continue increasing, manufacturers must sell larger and larger quantities of chips to remain profitable. (The cost to "tapeout" a chip at 0.18u was roughly $300,000 USD. The cost to "tapeout" a chip at 90nm exceeds $750,000 USD, and the cost is expected to exceed $1.0M USD for 65nm.) In recent years, analysts have observed a decline in the number of "design starts" at advanced process nodes (0.13u and below.) While these observations were made in the period after the year 2000 economic downturn, the decline may be evidence that the long-term global market cannot economically sustain Moore's Law.

On what basis, then, could it be suggested that Moore's Law might be outstripped by technology? What evidence is there that we can accelerate the pace of electronics advancement beyond 40 years of exponential improvement? For this, we should look to some of the current advances in nanotechnology.

Exhibit 1: MIT's Technology Review. This article suggests a way that we may begin to solve the problem of heat dissipation. In the last year, nanoscience has managed to create something that has eluded electrical engineers for decades: the (5,5) single-walled carbon nanotube (SWNT), a superconductor at room temperature. (A nanotube is defined by its chiral vector, (5,5) in this case; the tube's dimensions are a function of this vector, and knowing the chiral vector tells you how the sheet looks once it is rolled up. This one is an example of the armchair configuration.) It is 0.55 nm in diameter and has already been used in an experimental transistor. Unlike any transistor currently being produced, the SWNT can take on the properties of either a P- or an N-type semiconductor in the same device, depending on the gate voltage (more information on nanotube electronics).
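
As a side note on how the chiral vector determines the tube, here is a small sketch using the standard textbook relation between the (n, m) indices and the diameter, with the graphene lattice constant a ~ 0.246 nm. For (5,5) it gives roughly 0.7 nm, a bit larger than the 0.55 nm figure quoted above, and classifies the tube as armchair:

    import math

    A_LATTICE_NM = 0.246   # graphene lattice constant, nm (sqrt(3) * 0.142 nm C-C bond)

    def nanotube(n, m):
        """Diameter and type of a carbon nanotube from its chiral vector (n, m)."""
        diameter = A_LATTICE_NM * math.sqrt(n*n + n*m + m*m) / math.pi
        if n == m:
            kind = "armchair"
        elif m == 0:
            kind = "zigzag"
        else:
            kind = "chiral"
        return diameter, kind

    d, kind = nanotube(5, 5)
    print(f"(5,5) SWNT: diameter ~ {d:.2f} nm, {kind}")   # ~0.68 nm, armchair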

Exhibit 2: Quantum computing. Why be content looking for smaller ways to perform the same old processes? A number of alternative processors are starting to move into the realm of feasibility. At Almaden Research Center, a seven-qubit (quantum bit) quantum computer has already managed to run Shor's factoring algorithm. Take a standard computer with 'n' bits and a quantum computer with 'n' qubits: if the two can process a bit at the same speed, the quantum computer can run through 2^n states in the time it takes the conventional computer to process just one.
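
A tiny sketch of where the 2^n figure comes from: the state of n qubits is a vector of 2^n complex amplitudes, so a 7-qubit machine like the Almaden one works in a 128-dimensional state space. This just counts amplitudes; it is not a simulation of Shor's algorithm:

    import numpy as np

    n_qubits = 7                      # the Almaden machine's register size
    dim = 2 ** n_qubits               # number of basis states = 128

    # A uniform superposition assigns equal amplitude to every basis state,
    # so a single operation acts on all 2**n classical bit patterns at once.
    state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

    print(f"{n_qubits} qubits -> {dim} basis states")
    print(f"Total probability: {np.sum(np.abs(state) ** 2):.3f}")   # 1.000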

The DNA computer is also worth mentioning here. The distance between rungs on a DNA chain is 3 nm, and a typical human chain is a couple of centimeters in length. That means each DNA chain is capable of storing 7 million DNA-bits, each of which can take one of 4 different "states": adenine, thymine, cytosine or guanine. That's 4^7,000,000 possible states, and during cell division, this gets processed in just over an hour!
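
The arithmetic behind those figures, using the 3 nm spacing and couple-of-centimeter length quoted above:

    import math

    spacing_nm = 3.0          # distance between rungs used above
    length_cm = 2.0           # "a couple of centimeters"

    bases = (length_cm * 1e7) / spacing_nm   # 1 cm = 1e7 nm
    states_digits = bases * math.log10(4)    # number of decimal digits in 4**bases

    print(f"Bases per chain: ~{bases:.2e}")                            # ~6.7 million
    print(f"4^{bases:.0f} is a number with ~{states_digits:.2e} digits")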

Exhibit 3: The human brain. According to the linked article, the human brain should have a processing capacity of 100 million MIPS (million instructions per second), or 100 trillion instructions per second. From SIGNAL magazine,

On an evolutionary scale, current processing speeds of 1,000 MIPS place robots at the small vertebrate level. "A guppy," [Hans] Moravec, [of Carnegie Mellon's mobile robot laboratory] says, adding that besides carrying out their specific functions, autonomous robots are only aware of their immediate surroundings. However, he predicts that increasing processing speeds will bring more capable systems within a decade. Once robots are commercially available in large numbers, many solutions for issues such as hazard recognition will arrive through incremental use and modification. "There is no substitute for field use for learning about problems and solving them," he says.

What this indicates is that computers are catching up fast. If Moore's Law holds, then in 30 years computers will be able to "think" faster than humans. Even before computers overtake the human brain, they may well become capable of improving on their own designs. The possibility of computers eventually rendering humans obsolete is touched on in Vinge's Singularity (original paper).
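
The 30-year figure follows from the numbers quoted: a sketch, assuming processing power keeps doubling on a Moore's Law schedule from today's ~1,000 MIPS robots to the ~100 million MIPS attributed to the brain:

    import math

    robot_mips = 1e3      # current robot-class processing power (from the article)
    brain_mips = 1e8      # estimated human-brain equivalent (100 million MIPS)

    doublings = math.log2(brain_mips / robot_mips)   # ~16.6 doublings needed

    for doubling_time in (1.5, 2.0):
        years = doublings * doubling_time
        print(f"Doubling every {doubling_time} years -> ~{years:.0f} years to catch up")
    # ~25 years at 18 months, ~33 years at 2 years: roughly the "30 years" above.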

What these arguments still fail to take into account is the kind of human ingenuity that drives future innovation. There is incentive to revolutionize computing: if alternative processors catch on, any company still trying to develop conventional microprocessors will quickly be left far behind. Any unforeseen breakthrough will shorten this timetable, making the exponential curve of Moore's Law steeper still.

So, here it comes. My prediction is that computer processors will improve by a factor of 4 in the next two years. Then, as they approach the limits of miniaturization, they will slow down and follow a more natural 1.5-year doubling time. Once DNA and quantum computers, or some other revolutionary type of microprocessor, become an effective replacement for the conventional semiconducting microprocessor, Moore's Law will cease to be an effective predictor of the future of computing.
