Thursday, October 15, 2009

PC Speed Limit

Computer speeds can only continue to increase at the current pace for 75 more years, according to physicists who determined nature's limit to making faster processors.

With computers so regularly seeing dramatic increases in processing speed, it seems it shouldn't be too long before the machines become infinitely fast -- except they can't.

A pair of physicists has shown that computers have a speed limit as unbreakable as the speed of light. If processors continue to accelerate as they have in the past, we'll hit the wall of faster processing in less than a century.

Intel co-founder Gordon Moore predicted 40 years ago that manufacturers could double computing speed every two years or so by cramming ever-tinier transistors on a chip. His prediction became known as Moore's Law, and it has held true throughout the evolution of computers -- the fastest processor today beats out a ten-year-old competitor by a factor of about 30.
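As a quick sanity check of that figure (a back-of-the-envelope sketch, using the article's own two-year doubling period):

```python
# Moore's-law arithmetic from the paragraph above: doubling every
# two years for ten years gives 2**(10/2) = 32, consistent with
# the quoted speedup "factor of about 30".
years = 10
doubling_period_years = 2  # Moore's prediction, per the article
speedup = 2 ** (years / doubling_period_years)
print(f"Speedup over {years} years: {speedup:.0f}x")  # -> 32x
```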

If components are to continue shrinking, physicists must eventually code bits of information onto ever-smaller particles. Smaller means faster in the microelectronic world, but physicists Lev Levitin and Tommaso Toffoli at Boston University in Massachusetts have slapped a speed limit on computing, no matter how small the components get.

"If we believe in Moore's laW ... then it would take about 75 to 80 years to achieve this quantum limit," Levitin said.

"No system can overcome that limit. It doesn't depend on the physical nature of the system or how it's implemented, what algorithm you use for computation … any choice of hardware and software," Levitin said. "This bound poses an absolute law of nature, just like the speed of light."

Scott Aaronson, an assistant professor of electrical engineering and computer science at the Massachusetts Institute of Technology in Cambridge, thought Levitin's estimate of 75 years extremely optimistic.

Moore's Law, he said, probably won't hold for more than 20 years.

In the early 1980s, Levitin singled out a quantum elementary operation, the most basic task a quantum computer could carry out. In a paper published today in the journal Physical Review Letters, Levitin and Toffoli present an equation for the minimum sliver of time it takes for this elementary operation to occur. This establishes the speed limit for all possible computers.

Using their equation, Levitin and Toffoli calculated that, for every unit of energy, a perfect quantum computer would spit out ten quadrillion times more operations each second than today's fastest processors.

"It's very important to try to establish a fundamental limit -- how far we can go using these resources," Levitin explained.

The physicists pointed out that technological barriers might slow down Moore's law as we approach this limit. Quantum computers, unlike electrical ones, can't handle "noise" -- a kink in a wire or a change in temperature can cause havoc. Overcoming this weakness to make quantum computing a reality will take time and more research.

As computer components are packed tighter and tighter together, companies are finding that newer processors are getting hotter sooner than they are getting faster. Hence the recent trend toward dual- and quad-core processing: rather than build faster processors, manufacturers place several cores in tandem to keep heat levels tolerable while computing speeds shoot up. Scientists who need to churn through vast numbers of calculations might one day turn to superconducting computers cooled to drastically frigid temperatures. But even with these clever tactics, Levitin and Toffoli said, there's no getting past the fundamental speed limit.

Aaronson called it beautiful that such a limit exists.

"From a theorist's perspective, it's good to know that fundamental limits are there, sort of an absolute ceiling," he said. "You may say it's disappointing that we can't build infinitely fast computers, but as a picture of the world, if you have a theory of physics allows for
infinitely fast computation, there could be a problem with that theory."

Lauren Schenkman
Inside Science News Service

1 comment:

  1. What the paper by Profs. Lev B. Levitin and Tommaso Toffoli, "Fundamental Limit on the Rate of Quantum Dynamics: The Unified Bound Is Tight" (Physical Review Letters, Vol. 103, Issue 16 [October 2009]; also at arXiv:0905.3417), demonstrates is that processor speed can diverge to infinity if the energy of the system diverges to infinity.

    In a previous paper by Norman Margolus and Levitin, the bound was given as t >= h/(4*E), with t being the minimum operation cycle in seconds, h being Planck's constant, and E being energy in joules. Levitin and Toffoli's new paper generalizes this bound to all cases.

    With this new bound, one obtains ~ 3.31303448*10^-34 seconds as the minimum operation cycle per joule of energy; or, taking the reciprocal, a maximum of ~ 3.0183809*10^33 operations per second per joule of energy.
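    A quick check of that arithmetic (a minimal sketch; note that the per-cycle figures quoted above correspond to h/(2E), i.e., treating one operation cycle as two orthogonal state transitions -- my reading, not something stated in the paper):

    ```python
    # The Margolus-Levitin bound quoted above: t >= h/(4E) is the
    # minimum time per orthogonal state transition for a system
    # with average energy E above its ground state.
    h = 6.62607015e-34  # Planck's constant, J*s (CODATA 2018)
    E = 1.0             # energy budget, in joules

    t_transition = h / (4 * E)  # ~1.657e-34 s per transition
    t_cycle = h / (2 * E)       # ~3.313e-34 s, the figure quoted above

    print(f"t >= h/(4E): {t_transition:.6e} s")
    print(f"h/(2E):      {t_cycle:.6e} s")
    print(f"max cycles per second per joule: {1 / t_cycle:.6e}")  # ~3.018e33
    ```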

    So notice here that processor speed can increase without limit if the energy of the system is increased without limit. When the authors of the paper speak of a fundamental speed limit of computation, they are referring to a limit per unit of energy.

    In the article "Computers Faster Only for 75 More Years: Physicists determine nature's limit to making faster processors" (Lauren Schenkman, Inside Science News Service, October 13, 2009), paper co-author Levitin is quoted as saying, "If we believe in Moore's law ... then it would take about 75 to 80 years to achieve this quantum limit." What Levitin is referring to here is that, given the energy density of present-day ordinary matter, processors with greater processing density cannot be made after around that time, i.e., one won't be able to fit more processing power within the same amount of space given the energy density of common matter. But even at the same energy density, one can still increase processing speed by increasing the size or number of processors, though they would then take up more space. Likewise, one can increase processing density without limit if one increases energy density without limit, as the sketch below illustrates.
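    To make that scaling concrete (a rough illustration; using the rest-mass energy, E = m*c^2, of one kilogram of ordinary matter as the energy budget is my assumption, not the commenter's):

    ```python
    # Total operations per second scale linearly with available energy,
    # since the per-joule limit is fixed. Here the energy budget is the
    # rest-mass energy of one kilogram of matter (an illustrative choice).
    c = 2.99792458e8    # speed of light, m/s
    h = 6.62607015e-34  # Planck's constant, J*s

    E = 1.0 * c**2      # E = m*c^2 for m = 1 kg -> ~8.99e16 J

    ops_per_joule = 2 / h  # ~3.018e33 operation cycles/s per joule (above)
    print(f"{ops_per_joule * E:.3e} ops/s per kilogram")  # ~2.71e50
    ```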

    In the same Inside Science article, Scott Aaronson, an assistant professor of electrical engineering and computer science at the Massachusetts Institute of Technology in Cambridge, is quoted as saying that what this bound means is that "we can't build infinitely fast computers" -- which misstates what the bound actually says. The bound actually implies that one could build infinitely fast computers given an infinite amount of energy.

    For the cosmological limits to computation, see the paper below by physicist and mathematician Prof. Frank J. Tipler, which demonstrates that the known laws of physics (i.e., the Second Law of Thermodynamics, general relativity, quantum mechanics, and the Standard Model of particle physics) require that the universe end in the Omega Point (the final cosmological singularity and state of infinite informational capacity identified as being God), and which also demonstrates that we now have the quantum gravity Theory of Everything (TOE):

    F. J. Tipler, "The structure of the world from pure numbers," Reports on Progress in Physics, Vol. 68, No. 4 (April 2005), pp. 897-964. http://math.tulane.edu/~tipler/theoryofeverything.pdf Also released as "Feynman-Weinberg Quantum Gravity and the Extended Standard Model as a Theory of Everything," arXiv:0704.3276, April 24, 2007. http://arxiv.org/abs/0704.3276

    See also the resource below:

    "Omega Point (Tipler)," Wikipedia, October 30, 2009. http://en.wikipedia.org/w/index.php?title=Omega_Point_%28Tipler%29&oldid=322843275
