Intel is in the business of making the impossible possible. It also spends billions productising its inventions and bringing them to market. Right now, the chip giant is preparing for the launch of Sandy Bridge E for X79. From the brainstorming drawing board to sitting in a box on a local store’s shelf is a tough journey, so when KitGuru was presented with the opportunity for an exclusive interview with Rob Willoner, Intel’s Technology Analyst and resident guru, we leapt at the chance for a mini-voyage of discovery.
The subject matter is quite tough, so we began by focusing on some of the everyday analogies we’ve heard used to explain the process of making advanced chips. Specifically, we wanted Rob’s angle on them: do they hold true for Intel’s technologies, and how are the challenges being combated?
Oren Riess at Intel’s production centre in Qiryat-Gat is quoted as saying that modern Intel processes are like using a child’s crayon to draw ultra-thin pencil lines. For the benefit of KitGuru’s readers, what do you think Oren means by this?
“Oren was referring to something that’s really mind-bending when you think about it”, said Rob. “It’s all about the differences between the size of lithography, represented by the crayon in this analogy, and the structures or lines that we are able to paint with this lithography”.
“We currently use light that has a wavelength of 193nm and ‘paint’ transistors that are only 22nm. More accurately, we use quarters and halves of waves of 193nm light and overlap them in an intelligent way – computational masks”, he said.
With a smile, he added, “It’s worth noting that some of the individual transistor features are even smaller than 22nm. It’s like using a really chunky crayon to draw finer-than-fine lines”.
We all know that heat is a huge challenge inside modern chips. Someone on JEDEC once told us that, when considering chip manufacturing, it was useful to imagine a household tap that leaks continuously, even when it’s fully closed. Now imagine a house that contains hundreds of millions of these leaky taps. We put this version of the world to Rob and asked if it was a fair picture of the inside of a modern CPU.
“The leaky tap is not a bad analogy”, said Rob. “But it’s important to note that a transistor certainly does not allow more current to flow in its OFF state than in its ON state”.
“Hundreds of millions of these ‘taps’ is also accurate here. To really help fix the situation, Intel has now come out with its 3D/Tri-Gate transistors”, said Rob.
“These transistors can be fully depleted. This means that when compared to a planar [Old style 2D – Ed] transistor in its OFF state, a Tri-Gate transistor can turn off the current flow between source and drain altogether”, he explained. “This is a huge leap in driving leakage power down. The fully depleted operation also greatly increases the current flow when the transistor is in the ON state”.
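To get a feel for why those hundreds of millions of leaky ‘taps’ matter, here is a hedged back-of-envelope sketch. Every figure below is an illustrative assumption, not an Intel specification: even a tiny OFF current per device adds up to real watts across a whole die.

```python
# Back-of-envelope static (leakage) power estimate.
# All numbers are illustrative assumptions, not Intel specifications.
num_transistors = 1_000_000_000      # assume ~1 billion transistors on a die
off_current_amps = 50e-9             # assume 50 nA of leakage per 'closed tap'
supply_volts = 1.0                   # assume a 1.0 V supply

# Leakage power = sum of all OFF currents x supply voltage
leakage_watts = num_transistors * off_current_amps * supply_volts
print(f"Estimated leakage power: {leakage_watts:.1f} W")

# A transistor that turns off more completely (the fully depleted
# behaviour described above) shrinks this figure proportionally.
print(f"With 10x lower per-device leakage: {leakage_watts / 10:.1f} W")
```

With these (made-up) numbers the die would burn tens of watts doing nothing at all, which is why driving down per-transistor OFF current is such a big deal.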
To understand the globe ‘according to Intel’, we asked Rob where in the world most of Intel’s process research happens.
“For process technology, most of the research happens in Hillsboro, Oregon”, he said. “Other kinds of research, such as the Silicon photonics work, go on in Santa Clara, but Intel Labs are spread throughout the world – including some very significant labs in Europe”.
This research is all about making increasingly complex products, smaller and smaller.
On an everyday scale, small differences between two items matter very little. For example, no one would care if the strands of spaghetti in a packet varied slightly in thickness. It’s not important.
When you scale down, things get squashed together. At that stage, even minute differences between two components can have a huge effect on the overall product. For example, with several transmission wires running right next to each other, you will find that some are measurably better or worse at carrying current than their neighbours. That difference brings additional physics issues.
KitGuru wanted to know, in a typical shrink, roughly how much of what Intel does is ‘completely new technology’ and how much is ‘reworking existing sections’ – for example to avoid issues with ‘aggressor wires’ etc?
“Up through about the turn of the century, we used to do rather straightforward scaling”, Rob explained. “So every 2 years or so, we would shrink everything from transistors and interconnects to insulators etc, by 30% on a side. That meant a 50% reduction in area, i.e. a doubling of transistor count in a given space”.
“This shrinking would get us the benefits of better performance, reduced power, and more compact designs. It also meant a lowering of the cost per transistor”, he said. “But the benefits of such straightforward shrinking began to slow around the 90nm generation, and we’ve had to come up with new materials and new structures – as well as the shrinking – in order to continue reaping the desired benefits”, he explained.
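The arithmetic behind that “30% on a side means 50% less area” claim is easy to check for yourself:

```python
# Classic process-shrink arithmetic, as Rob describes it.
linear_shrink = 0.70           # each dimension scales to 70% of its old size
area_ratio = linear_shrink**2  # area scales with the square of linear size

print(f"New area: {area_ratio:.2f}x the old area")   # ~0.49x, i.e. roughly half
print(f"Transistor density: {1/area_ratio:.2f}x")    # ~2x in the same space
```

Shrinking both dimensions by 30% leaves 0.7 × 0.7 ≈ 0.49 of the original area, which is where the “doubling of transistor count in a given space” comes from.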
KitGuru wanted to know which advances had the biggest impact?
Rob explained, “The biggest innovations were strained Silicon in 2003, HK/MG in 2007 and the Tri-Gate transistors we delivered in 2011. There have been others as well, though they haven’t received as much publicity”.
If you’re interested in these improvements to silicon etc, then you might want to scan through this PDF from Intel.
We’ve all seen Mario Paniccia discussing photonics, and it seems amazing. How do advances by one group, for example the wonderfully named photonics lab, impact or complement developments in other areas, for example Tri-Gate technology?
“Silicon Photonics (SiPh) and transistor-based logic technology do not replace each other. Transistors are switches and SiPh is a way of transmitting data without the use of electrical copper based connections”, Rob explained. “It’s probably better for KitGuru readers to think in terms of switches and wires. SiPh would replace wires mainly in off chip communications at first”.
A pause, then he adds, “Yes, the SiPh could one day replace the on-die wiring as well, but not the transistors themselves. So both are complementary”.
KitGuru’s earlier story about Intel’s development of a 7nm process being ready around the end of 2016 has drawn a lot of speculation from various quarters. How do you see it?
“There is always a lot of rumour and speculation surrounding future processes. What I can tell you is that Intel expects to come to market with products based on the 7nm process in 2017”. To market in 2017 means a working process before then. Not bad.
That movement will require some changes. The renaissance of Germanium and research into the stunning switching speeds made possible by mixing various quantities of Arsenic, Indium and Gallium are all adding up to a very exciting future. For example, around 5 years ago, Prof. Milton Feng managed to get a transistor running at 845GHz. Not MHz. GHz. That’s a glimpse into the distant future, but what is Intel doing in the here and now to extend the life of existing process technologies?
Rob told KitGuru that Immersion Lithography has proven very useful, “A thin film of water placed between the lens and the wafer in a patterning step is called immersion lithography. The water’s refractive index improves the optics so, effectively, we get a thinner ‘brush’ than we would with traditional ‘air only’ methods from the past”.
“At the interface of the glass lens and the water, you get more bending of light than you would at the interface of the glass and the air”, he explained.
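The textbook way to quantify that finer ‘brush’ is the Rayleigh criterion: resolution = k1 × wavelength / NA, where NA (numerical aperture) scales with the refractive index of the medium between lens and wafer. The sketch below uses generic illustrative values for k1 and the dry NA – they are our assumptions, not Intel’s actual tool parameters:

```python
# Rayleigh criterion sketch: resolution = k1 * wavelength / NA.
# k1 and the dry NA are generic textbook assumptions, not Intel tool specs.
wavelength_nm = 193.0   # ArF laser wavelength, as mentioned above
k1 = 0.30               # assumed process factor
dry_na = 0.93           # assumed numerical aperture of a dry (air) system
n_water = 1.44          # approximate refractive index of water at 193nm

dry_resolution = k1 * wavelength_nm / dry_na
wet_resolution = k1 * wavelength_nm / (dry_na * n_water)  # water boosts effective NA

print(f"Dry lithography:       {dry_resolution:.1f} nm")
print(f"Immersion lithography: {wet_resolution:.1f} nm")
```

With these assumed numbers, the water layer alone buys roughly a 1.44× finer minimum feature, which is exactly the “thinner brush” effect Rob describes.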
As Intel shrinks down from process to process, each transition must provide different challenges. Which process shifts are easier and which are harder to achieve?
“At today’s feature sizes, no process is easy”, Rob told us. “These things take many years to develop, with huge teams of engineers”. Looks like it’s been a seriously uphill roadmap for the chip industry since 90nm.
KitGuru’s recent interview with Intel software guru John Hengeveld covered topics like increased parallelism. Does every increase in parallelism mean an exponential increase in wiring/interconnection complexity? Rob answered this for us, but first wanted to separate out and define the term parallelism.
“John Hengeveld’s area of expertise is software parallelism and, in theory, that does not have an immediate effect on chips at all”, said Rob. “However, increasingly parallel chips naturally require more wiring complexity”.
With a smile, he adds, “But this doesn’t represent the biggest change. First and foremost, more parallelism in a chip means more transistors on a chip. If you want more execution units, where each of them would execute a separate and independent task, you need more transistors to do the switching”.
“Sure the transistors need their wires, but it’s just doubling the number of wires rather than increasing complexity”, said Rob.
That last statement puts everything in a nutshell.
The chips of tomorrow will offer significant increases in ability, and each increase in ability will be directly linked to the number of transistors on the chip. Right now, almost everything comes down to improved processes, increasing the power, performance and feature set available.
KitGuru says: While most people come to KitGuru for objective, informed buying advice, it’s only one part of what we do. Let’s face it, we’re geeks. We absolutely love the technology – understanding more about the challenges and how it all works actually gives us as much of a kick as the plugging in and using itself. We thank Rob for taking the time to help us understand a little more about where chip development is going. That’s in the future – for now… Roll on X79.
Update: We have been given the chance to do a little follow-up with Intel on the subject of processes. So in addition to our regular options of commenting below or in the KitGuru forums, please also note that you can leave questions which we can ask on your behalf. Can’t guarantee that they will all be passed on or responded to – but we’ll try our best with the most interesting ones!