Moore’s (f)Law

February 9, 2006

First published in the I.T. Times, February 2006 Edition

Moore’s Law states that computer clock speeds will double every 18 months. Cor, blimey guvnor! Let me run down to the shop and get me one of them brand spanking new 6.4GHz Pentium 4s!

What, no 6.4s?

Um… 5.8s?

None of them either? So what do we have?

3.8GHz, I hear you say, which sounds remarkably like… two years ago.

That can’t be right. Has Moore’s Law broken down? Run out of steam?

Not quite.

The fact is that Intel co-founder Gordon E. Moore wrote a paper titled “Cramming More Components onto Integrated Circuits” (Electronics Magazine, April 19, 1965) in which he observed that the complexity of an integrated circuit, with respect to minimum component cost, doubled roughly every year. It was Carver Mead, a professor at Caltech and VLSI pioneer, who later turned this observation into a “Law” (capital ‘L’, which makes it more believable, or so it is believed). Ten years later Moore revised his observation to complexity doubling every 24 months, and insisted ever after that he never said 18 months to begin with.

So should we expect computers to cost half as much as they did two years ago? Again, not quite. The observation implies that you can make an integrated circuit twice as complex today as you could have two years ago, for roughly the same cost. It follows that if complexity has doubled, you should be able to buy the same performance as two years ago at half the price today.
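As a quick back-of-the-envelope sketch of what that doubling implies (in Python, with a made-up starting transistor count purely for illustration):

```python
# Sketch of Moore's revised observation: complexity at constant cost
# doubles every 24 months, so equal complexity halves in price over
# the same period. The starting figure below is invented for illustration.

DOUBLING_PERIOD_MONTHS = 24

def complexity_after(months, start_transistors=50_000_000):
    """Transistor count achievable at roughly constant cost after `months`."""
    return start_transistors * 2 ** (months / DOUBLING_PERIOD_MONTHS)

def relative_cost_after(months):
    """Price of two-year-old complexity as a fraction of its original cost."""
    return 0.5 ** (months / DOUBLING_PERIOD_MONTHS)

print(f"{complexity_after(24):,.0f} transistors")     # twice as many
print(f"{relative_cost_after(24):.0%} of the price")  # half the price
```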

Moore’s observations were never very specific and were left open to interpretation. The microprocessor industry has used them as a benchmark, warping them to suit its purposes for forty years with mixed success. The more popular formulations of Moore’s Law equate transistor counts in integrated circuits with complexity. Since transistor count can (sometimes) be a rough indication of performance, we can be forgiven for expecting modern computers to be twice as fast as they were two years ago. The computer makers do not get away quite so lightly, as they had a hand in fostering this myth to begin with.

It is often joked that if the automotive industry had followed the same performance curve as the semiconductor industry, we would have cars that got 100,000 miles to the gallon, reached supersonic speeds and cost more to park than to buy. Of course, they would also be a couple of inches long, which is where the analogy kind of breaks down.

The fastest computer on the market today, in terms of raw clock speed, is the Intel Pentium 4 at 3.8GHz. Two years ago the Pentium 4 stood at 3.2GHz, which equates to an increase in clock speed of just under 19%. So where has all the extra complexity gone? Into features that would otherwise have required separate devices, and better resource management, among other things. Modern CPUs generally have more onboard cache memory than before, more execution pipelines, and are better at managing themselves. The move towards multiple-core CPUs accounts for some of the complexity too. The clock speed wars of recent history have cooled into a no man’s land of propaganda and misrepresentation.
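For the curious, the arithmetic behind that figure, and behind the 6.4GHz we were grumbling about at the start, is only a few lines (the clock speeds are the ones quoted above):

```python
# Pentium 4 clock speeds quoted above
old_ghz, new_ghz = 3.2, 3.8

# What the popular "clock speed doubles every two years" misreading predicts
predicted_ghz = old_ghz * 2                  # 6.4 GHz

# What we actually got over those two years
actual_gain = (new_ghz - old_ghz) / old_ghz

print(f"predicted: {predicted_ghz} GHz, actual gain: {actual_gain:.1%}")
# predicted: 6.4 GHz, actual gain: 18.8%
```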

In fact, clock speed by itself is no longer a reliable indication of CPU speed, and can be quite misleading at times. Take AMD: its current fastest CPU is clocked at around 2.5GHz and yet performs on par with, if not better than, Intel’s best offering. Chip manufacturers are trying to distance themselves from this minefield of Clock Speed = True Performance by disguising the truth behind a confusing mix of “performance ratings” and meaningless model numbers. Throw multiprocessing into the mix and things get a whole lot more uncertain, further hampered by the fact that software hasn’t quite evolved far enough to take advantage of this new trend. Any mention of clock speed is enough to cause a duck-and-cover response in your average chipmaker.

But the ultimate question becomes “Do we need all this speed at all?” (regardless of whether we can measure how fast it really is anyway). Quite possibly so; or if we don’t need it, we’ll find a way to use it when we get it, so we need it after all, so can we please have it. For basic productivity tasks we’ve had more than enough processing power for the last half dozen years or more. More demanding areas such as storage, servers and scientific applications would certainly welcome such progress. And let’s not upset the entertainment industry by suggesting that their future toys won’t make it off the drawing board.

Of course, if we all woke up one morning convinced that our computing needs were fulfilled, it would make many very needy manufacturers extremely nervous. If we are content, how are they going to convince us to part with our hard-earned cash in exchange for that shiny new box a whole nineteen percent faster than the old one? Paint it tangerine and take away the floppy drive?

Oh wait, someone’s already tried that.
