In the second part of KitGuru’s exclusive interview with AMD’s Developer Relations Guru, Nicolas Thibieroz, we managed to cover a lot more ground – specifically when it comes to his company’s approach to technologies like 3D. Nic also has opinions on Intel’s abilities. KitGuru replaces the batteries in its active shutter glasses and prepares to see the answers leap off the page.
In addition to inventing the modern graphics card, nVidia has taken the lead in several graphics technologies over the past few years. That said, we’re reminded of an old saying about how the people who shout loudest are not always the ones who shout last. Picking up Ageia at a bargain price certainly seemed to give nVidia a unique proposition in the world of physics processing and, initially, it also had a strong lead in 3D. However, at Computex 2010, KitGuru saw the first demonstration of 3D over multiple screens from a single card – and the solution wasn’t an nVidia one. It was inside a Sapphire demonstration unit.
Nic Thibieroz is an expert on all things rendered, so we asked him about the competing 3D technologies. He has a certain bias, we’re sure, but does he have logic as well as passion? Let’s see.
“nVidia and AMD have two very different approaches when it comes to supporting stereo 3D”, said Nic. “As far as AMD is concerned we’re committed to support industry standards like HDMI 1.4a, DisplayPort and third-party stereoscopic equipment. nVidia has been supporting stereo via their proprietary active shutter glasses for slightly longer, but we believe our approach is better as it ultimately gives the freedom of choice to our customers while allowing the stereo ecosystem to thrive”.
That leads us to the technical differences: “AMD’s native stereo solution relies on a quad-buffer interface whereby the game is responsible for rendering the left and right eye images into a buffer. Our drivers then do the rest and present the combined render onto a stereoscopic-compatible display”.
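To picture what that quad-buffer interface means in practice, here is a rough conceptual sketch in Python – not AMD’s actual driver API, and all function names are our own invention. The idea is simply that the game renders the scene twice per frame, once from each eye’s camera position, and hands the pair of buffers to the driver for presentation.

```python
# Conceptual sketch of quad-buffer stereo rendering.
# The application fills a left and a right buffer each frame
# (two of the four buffers - front/back x left/right - that give
# the interface its name); the driver then presents the pair to a
# stereo-capable display. Names here are illustrative only.

def render_scene(eye_offset):
    # Stand-in for a real renderer: the only thing that changes
    # between the two passes is the horizontal camera offset.
    return f"image(camera_x={eye_offset:+.4f})"

def render_stereo_frame(eye_separation=0.065):
    # Render once per eye, offset by half the eye separation.
    left = render_scene(-eye_separation / 2)
    right = render_scene(+eye_separation / 2)
    return {"left": left, "right": right}  # the buffer pair
```

Because the game itself produces both images, it keeps full control over what ends up at what depth – which is the point Nic makes next about HUD elements.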
What does Nic think the benefits are?
“This approach gives developers full control over the creation of those images and therefore prevents the occasional depth perception issues (e.g. for HUD elements) that may arise when a stereoscopic middleware is employed”, he explained. “One way to better appreciate the value of native Stereoscopic content is to notice the difference in quality when comparing movies that were captured in Stereoscopic mode compared to those that were converted afterwards”.
During our initial discussion about Shogun 2, Nic kept referring to industry and open standards. We asked him if he thought they were really important.
“The way I see it is that, an open standard is about enabling”, Nic replied. “It enables individuals to choose the hardware or software of their choice without being locked in to having to use a particular brand. In the context of game development this would apply to both game development studios and game buyers”.
“Developers working on the game are free to choose the standard adopter they want to use to produce their assets, allowing greater flexibility in their choice. Game buyers can also do the same when it comes to buying a piece of hardware implementing this open standard”, said Nic. “It enables developers to focus their resources on supporting a single standard instead of several brand-specific variations. This reduces the level of resources required to implement the features described by this standard, and encourages adoption across more platforms – which is ultimately beneficial to the customer”.
And it doesn’t stop there, according to Nic “It enables competition – amongst adopters of an open standard – to come up with the best implementation, and allows them to differentiate their implementations on various factors like price, platform, performance etc. Doing so encourages innovation and customer adoption while avoiding monopolistic situations”.
Nic is not shy about AMD’s position: “AMD is all about open standards. In my opinion, open standards are what allow the industry to move forward without a particular vendor restricting or controlling access to a feature”.
KitGuru ponders that last sentence and wonders what Nic’s view is on Microsoft and its control of DirectX, which is – essentially – less than 100% open as a standard. We scribble that thought in our notes and carry on.
One last quick question. If open standards are so important, why doesn’t everyone support them as a matter of course?
“That is a question you might consider asking of those who reject the idea of open standards”, Nic replied. “AMD’s position is that an open standard benefits everyone – by increasing competition, which leads to increased innovation and efficiency, which in turn yields lower prices for consumers”.
OK, so the subject of standards has been raised – initially with 3D. We have all seen the kind of numbers that AMD is able to deliver with DirectCompute and OpenCL, so how strong is AMD in these areas – specifically compared to Intel and nVidia?
Nic started off by targeting nVidia, “DirectCompute and OpenCL are two industry standard APIs that enable programmers to take advantage of the massive parallel processing capabilities of modern GPUs. With more than 2.7 TFlops of horsepower available on the AMD Radeon™ 6970 GPU, it is of no surprise to me that well-written DirectCompute/OpenCL applications run great on AMD GPUs”.
“However, the way such programs are written can heavily influence their performance when run on different platforms. For example, a DirectCompute or OpenCL program with heavy parallel computational needs will always run faster on a high-end GPU and clearly AMD GPUs lead the pack in this regard. In contrast, a multi-core CPU simply doesn’t have enough math capabilities to compete in this scenario”, said Nic, switching his target to Intel.
We challenged him that there are cases where the modern, multi-core CPU can do really well. He replied, “If the code to execute is more serial in nature then a CPU may rise to the challenge, but then the algorithm is probably not that good a fit for DirectCompute/OpenCL in the first place. It is also important to talk about memory accesses and, in particular, the writing out of results. An algorithm written in a way that stores intermediate results to external memory – and then reads them back multiple times in the same program – is more likely to benefit from an architecture with a higher L2 cache capacity”, said Nic. “I wouldn’t necessarily call this an efficient approach though, as in many cases the algorithm should be adapted to use the Thread Group Shared Memory that was intended for this purpose, or modified to optimise external memory write accesses”.
KitGuru says: So far, Nic has spoken confidently about the work that AMD’s Developer Relations team does with triple-A titles like Shogun 2, as well as the way he believes that his company approaches standards and processing techniques in the right way. We will round off this trio of interviews tomorrow with a look at how DirectX could become a restriction to graphics innovators, how consoles could lead the next generation of gaming and how much concern Nic has for Intel’s drive into the Fusion-class processor market.
Comment below or in the KitGuru forum.