Friday, 26 August 2016

Silicon Week: Smaller, smarter, but maybe not much faster: The future of the CPU

http://ift.tt/2bTMt9w

Note: Our future CPU tech feature has been fully updated. This article was first published in November 2011.

It's easy to think that some things will be around forever, but technologies change. Vinyl records became digital discs and then digital downloads. The internal combustion engine morphed into the hybrid engine and will probably be replaced by electric motors. For many of us, the PC has been replaced by mobile devices.

The same changes happen in the world of CPUs. First of all we had valve-based computers, then computers made from individual transistors. We invented the integrated circuit, followed by the silicon microprocessor. CPUs became faster, more efficient, their components more miniaturised. The smallest features have shrunk from 10,000nm to just 14nm, and the transistor count has increased from a few thousand to several billion. But sooner or later the silicon CPU is going to reach the point where it can't be improved any further.

We've seen signs of that already. The ever-increasing clock speeds of previous decades have gone, with CPU speeds sitting at just over 3GHz for most of the last decade. Manufacturers have been able to improve performance at that clock speed with clever design and impressive feats of miniaturisation, but that has its limits too. Some observers believe that even with more precise manufacturing techniques such as extreme ultraviolet lithography, the limit may be around 11nm – a scale we're already very close to.

Power corrupts

There's the issue of power, too. The reason we don't run silicon chips much faster than 3GHz is that they begin to consume large amounts of power and generate large amounts of heat. Manufacturers have done a great job of addressing this limitation – while processors have become ever faster through the use of multiple cores, improved energy efficiency means that the power drain of today's top-end processors is actually lower than that of their counterparts from a few years earlier. But sooner or later we'll run out of tricks.
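
To see why clock speed and power are so tightly coupled, a rough rule of thumb is that a chip's dynamic power scales with its switched capacitance, the square of its supply voltage and its clock frequency – and pushing the frequency up usually means raising the voltage too. The Python sketch below illustrates the effect; every number in it (capacitance, voltages, frequencies, activity factor) is an illustrative assumption, not a figure for any real processor.

def dynamic_power(capacitance_f, voltage_v, frequency_hz, activity=0.2):
    """Approximate switching power in watts: P ~ activity * C * V^2 * f."""
    return activity * capacitance_f * voltage_v ** 2 * frequency_hz

# Baseline: a chip at 3GHz and 1.0V (all figures are assumptions for scale).
baseline = dynamic_power(capacitance_f=100e-9, voltage_v=1.0, frequency_hz=3e9)

# Hypothetical 5GHz part that needs roughly 1.3V to hold timing.
pushed = dynamic_power(capacitance_f=100e-9, voltage_v=1.3, frequency_hz=5e9)

print(f"3GHz @ 1.0V: {baseline:.0f} W")
print(f"5GHz @ 1.3V: {pushed:.0f} W ({pushed / baseline:.1f}x the power)")

Under those assumed numbers, a 67% jump in clock speed costs nearly three times the power – which is exactly the trade-off that has kept clock speeds parked just above 3GHz.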

As if all of that wasn't worrying enough, there's the issue of quantum physics to consider. When you get down to really, really small dimensions, electrons start getting very weird indeed. They can be in two places or in two states at the same time, and they can appear on the other side of a thin barrier even if there isn't a hole in it. If that barrier is the insulation that's supposed to stop electrons from going where they shouldn't, that opens a great big box marked Pandora.

And that's why the future of the CPU looks very different from the processors we have now.

Top Image Credit: Rudolf Getel/Flickr/CC BY 2.0

Tunnel Field Effect Transistors use carbon nanotubes to slash energy consumption (Image Credit: Wikipedia)

Size matters

In February, Intel's technology and manufacturing group chief William Holt told the International Solid State Circuits Conference that "we're going to see major transitions. The new technology will be fundamentally different." Predicting that Moore's Law – which holds that the number of transistors on a processor doubles roughly every two years – would only hold for another two processor generations, Holt described two potentially revolutionary new technologies: tunnelling transistors and spintronics.
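
For a sense of what that doubling means in practice, here's a minimal sketch of the projection. The starting year and transistor count are round, illustrative assumptions rather than figures for any particular chip.

# Moore's Law as quoted above: transistor counts double roughly every two years.
start_year = 2016
start_transistors = 2e9   # a few billion, as in current high-end CPUs

for generation in range(1, 4):   # project three two-year generations ahead
    year = start_year + 2 * generation
    count = start_transistors * 2 ** generation
    print(f"{year}: ~{count / 1e9:.0f} billion transistors")

Two more generations of that curve is what Holt says silicon can still deliver; after that, something else has to take over.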

Tunnelling transistors – or to give them their full name, Tunnel Field-Effect Transistors (TFETs) – are similar to existing MOSFET switches, but they rely on quantum tunnelling and so use much less energy than other switching technologies. Such transistors could make a huge difference to CPU power consumption.

TFETs are promising, but they're also some way from production. Spintronic devices are much closer to reality. Spintronics is another quantum-based technology, one that uses electrons' spin states to carry data; it requires much less energy than traditional electronics, and it uses cheaper materials too. Last year, Toshiba announced an experimental spintronic memory array that used 80% less energy than SRAM, and spintronic technology is expected to appear in high-end graphics cards over the next few years.

Spintronic chips use much less energy than traditional RAM (Image Credit: Yellowcloud/Flickr/CC BY 2.0)

Better, yes. Faster? Maybe not

As Holt explained: "The best pure technology improvement we can make will bring improvements in power consumption but will reduce speed." Speed is no longer the key consideration – energy efficiency is. "Particularly as we look at the Internet of Things, the focus will move from speed improvements to dramatic reductions in power," he further observed.

Whether it's the data centre or your smartwatch, the chips you've got are probably fast enough – but their energy usage is thousands or even millions of times higher than it could be in the not too distant future.

Intel is also working on ways to improve communications between CPUs. Its silicon photonics chips use lasers to communicate between devices, and while those lasers will initially be used to connect networking equipment inside data centres, in the longer term lasers will connect servers and possibly even replace the cabling inside computers and other devices – devices whose innards would literally communicate at the speed of light.

This blurry pic shows nanomagnets in action – their energy consumption is minuscule (Image Credit: UC Berkeley)

Hitting the limit

Back in 1961, IBM Research's Rolf Landauer used the second law of thermodynamics to calculate that there is an absolute minimum amount of energy needed to erase a bit of information – and therefore to perform a conventional, irreversible computing operation. In 2012, a group of European scientists confirmed his theory experimentally: the heat released when a bit was erased was exactly as Landauer predicted.

But that limit is tens of thousands of times lower than the energy used by today's silicon CPUs, and an approach called reversible logic could one day produce chips that get much, much closer to it – and maybe even below it. That's useful in an increasingly mobile world where battery technology remains stubbornly resistant to great leaps forward.
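
The limit itself is easy to put a number on: it's Boltzmann's constant times the temperature times the natural log of 2 per bit erased, which works out to a few zeptojoules at room temperature. The sketch below does the arithmetic; the "energy per switching event" used for comparison is an illustrative assumption included purely for scale, not a measured figure.

import math

# Landauer's limit: erasing one bit dissipates at least k_B * T * ln(2).
k_B = 1.380649e-23        # Boltzmann constant, joules per kelvin
T = 300.0                 # room temperature, kelvin

landauer_limit = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {landauer_limit:.2e} J per bit erased")

# Hypothetical energy for a single switching event in a modern chip –
# an assumption for comparison only, not a measured value.
assumed_switch_energy = 1e-16
print(f"Roughly {assumed_switch_energy / landauer_limit:,.0f}x above the limit")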

This isn't just theoretical. In March 2016, engineers at UC Berkeley experimented with magnetic computing, and their results suggest it may be possible to produce chips that use as little as one millionth of the energy per operation of today's processors.

According to senior project author and Berkeley professor Jeffrey Bokor, "making transistors go faster was requiring too much energy. The chips were getting so hot they'd melt." The team used nanomagnets to control electrons, showing that the theory they'd published five years earlier could be realised in practice.

Don't expect a nanomagnetic CPU any time soon, but as the authors note: "The significance of this result is that today's computers are far from the fundamental limit and that future dramatic reductions in power consumption are possible."



from TechRadar: computing components news http://ift.tt/2bEvxlJ
via IFTTT
