
Wii Less Powerful Than Xbox 1?

Last week in an interview with eWeek, Robbie Bach, president of Microsoft’s Entertainment and Devices division, made the claim that the Nintendo Wii “[doesn’t] have the graphics horsepower that even Xbox 1 had.” Of course, this made Nintendo fans pretty upset. But was there actually any real basis to his claim? Newsweek’s N’Gai Croal decided to investigate by talking to “two of our most reliable technical experts at third party publishers”. Croal has very good credibility, so there’s really no reason to doubt what he reported:



One point of speculation was this: does the Wii have programmable shaders, either vertex shaders, pixel shaders, or both, as did the original Xbox? The answer, according to our first source, was no. “The Wii’s GPU has fixed functions for vertex, lighting, and pixel operations,” said the source. “All ‘programmable shaders’ means is that the code you write for the shader gets run on the vertex and pixel hardware of the GPU. This is how it works on the high-end ATI and Nvidia GPU parts. The Wii is an older fixed-function design where you have lots of operations, but the pipelines are not programmable in the sense of downloading shader code to run [on them].”
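To make the source’s distinction concrete, here is a minimal sketch in C of the two models. Every name in the fixed-function half is invented for illustration and comes from no real SDK; the shader string is DirectX 8-era vs.1.1-style vertex assembly, the flavor the original Xbox supported.

#include <stdio.h>

/* Hypothetical sketch of the two pipeline models the source describes.
 * All fixed-function names below are invented for illustration. */

/* --- Fixed-function (GameCube/Wii-style) ------------------------------ */
/* The hardware exposes a fixed menu of operations. You can select and
 * order them, but you cannot redefine what the math is. */
typedef enum { COMBINE_MODULATE, COMBINE_ADD, COMBINE_DECAL } CombineMode;

static void set_combine_mode(int stage, CombineMode mode) {
    printf("stage %d: selected preset operation %d\n", stage, (int)mode);
}

static void fixed_function_setup(void) {
    set_combine_mode(0, COMBINE_MODULATE); /* stage 0: texture * vertex color */
    set_combine_mode(1, COMBINE_ADD);      /* stage 1: add a second texture   */
}

/* --- Programmable (original-Xbox-style) -------------------------------- */
/* The defining feature: you hand the GPU actual code to run on its vertex
 * and pixel units. The string below is DX8-era vs.1.1 vertex-shader
 * assembly. */
static const char *vertex_shader_src =
    "vs.1.1\n"
    "dp4 oPos.x, v0, c0\n" /* transform position by matrix rows c0..c3 */
    "dp4 oPos.y, v0, c1\n"
    "dp4 oPos.z, v0, c2\n"
    "dp4 oPos.w, v0, c3\n"
    "mov oD0, v5\n";       /* pass the vertex color straight through */

static void programmable_setup(void) {
    /* A real driver would assemble this and download it to the GPU. */
    printf("uploading shader program:\n%s", vertex_shader_src);
}

int main(void) {
    fixed_function_setup();
    programmable_setup();
    return 0;
}

The point is the asymmetry: on the fixed-function side you only pick from a menu of preset operations, while on the programmable side you actually download code for the GPU to execute.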

Our second source echoed that assessment of the Wii’s graphics chip, comparing its fixed-function design to that of the Gamecube, saying that it was “basically pretty similar” to Nvidia’s seven-year-old GeForce2. “A dev support guy from Nintendo said that the Wii chipset is ‘Gamecube 1.5 with some added memory,’” our second source told us. “I figure if they say that, it must be true.”

Our second source went on to explain that the “Gamecube 1.5” moniker, while accurate, doesn’t mean that gamers won’t see graphical improvements on the Wii. “There are three main differences which will result in graphics improvements. One, the increased memory clock speed, from 162 megahertz to 243 megahertz, means that it is easier to do enough pixels for 480p mode versus 480i. Two, the enhanced memory size of the Wii gives much more room for image-related operations such as anti-aliasing, motion blur, etc. The performance of these memory systems, as seen from the graphics chip, is also improved. So full-screen effects and increased texture usage seem likely as a result.”
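The clock figures in that quote are easy to sanity-check. Here’s a quick back-of-the-envelope program, assuming a standard 640x480 frame at 60 Hz; real bandwidth also depends on bus width, texture traffic, and so on, all of which this ignores.

#include <stdio.h>

/* Quick sanity check on the quoted memory clock speeds versus the pixel
 * load of 480p. Assumes a 640x480 frame at 60 Hz. */
int main(void) {
    const double gc_clock_mhz  = 162.0; /* GameCube memory clock (quoted) */
    const double wii_clock_mhz = 243.0; /* Wii memory clock (quoted)      */

    /* 480i only draws half the scanlines each field; 480p draws them all. */
    const double px_per_sec_480i = 640.0 * 240.0 * 60.0;
    const double px_per_sec_480p = 640.0 * 480.0 * 60.0;

    printf("memory clock gain: %.2fx\n", wii_clock_mhz / gc_clock_mhz);      /* 1.50x */
    printf("480p vs 480i load: %.2fx\n", px_per_sec_480p / px_per_sec_480i); /* 2.00x */
    return 0;
}

So the 1.5x memory clock bump covers much, but not all, of the doubled pixel rate that 480p demands over 480i, which fits the source’s careful wording that the faster clock makes 480p “easier” rather than free.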

The same source cited a third factor: an apparent increase in fixed-function “texture environment stages”, also known as TEV stages, from 8 in the Gamecube to 16 on the Wii. (The source stressed “apparent” because this feature wasn’t described in the Wii’s graphics overview documentation, which was simply repurposed from the Gamecube, but it was listed among the Wii’s programming calls.) “Assuming this isn’t a bug, it means that much more complex per-pixel graphics operations are possible,” our source told us. “However, each additional TEV stage used slows down the graphics chip more and more, so it is a trade-off. You can do more powerful pixel operations, but you’ll bottleneck the chip and not be able to do as many of them, nor as many vertex operations (since the pixel and vertex systems are tightly coupled on fixed-function graphics chips). Eight additional stages mean more complex operations are possible. It would be easier to do bump mapping, perhaps, or environment mapping, but you would have to get creative with how you do it. It wouldn’t be easy.”
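To picture what chaining TEV stages buys you, here’s a toy software model of a fixed-function combiner chain. It is loosely patterned on public descriptions of the GameCube/Wii TEV unit and deliberately simplified: the real stage equation has more selectable inputs plus a bias and a scale, and none of these type or function names come from Nintendo’s SDK.

#include <stdio.h>

/* Toy software model of a fixed-function combiner chain, loosely patterned
 * on public descriptions of the GameCube/Wii TEV unit. Deliberately
 * simplified; invented names, not Nintendo's API. */

#define MAX_STAGES 16 /* 8 on the Gamecube, apparently 16 on the Wii */

typedef struct { double r, g, b; } Color;

typedef struct {
    Color  tex;   /* texture sample fed into this stage                 */
    double blend; /* mix factor between the previous result and tex     */
} TevStage;

/* Each stage applies one preset operation (here, a linear blend) to the
 * previous stage's output and feeds the result forward. Complex per-pixel
 * effects are built by chaining stages, not by writing shader code. */
static Color run_tev_chain(Color in, const TevStage *stages, int n) {
    Color out = in;
    for (int i = 0; i < n && i < MAX_STAGES; i++) {
        out.r += stages[i].blend * (stages[i].tex.r - out.r);
        out.g += stages[i].blend * (stages[i].tex.g - out.g);
        out.b += stages[i].blend * (stages[i].tex.b - out.b);
    }
    return out;
}

int main(void) {
    Color lit = {1.00, 0.50, 0.25};    /* lit vertex color               */
    TevStage stages[] = {
        {{0.20, 0.20, 0.80}, 0.50},    /* stage 0: blend in base texture */
        {{0.90, 0.90, 0.90}, 0.25},    /* stage 1: fake env-map sheen    */
    };
    Color px = run_tev_chain(lit, stages, 2);
    printf("final pixel: %.2f %.2f %.2f\n", px.r, px.g, px.b);
    return 0;
}

On real hardware, each extra stage costs another pass through the pipeline, which is exactly the throughput trade-off the source describes.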



Our final verdict on the charges leveled at the Wii? While Bach’s statement that the Wii is graphically underpowered compared to the first Xbox wasn’t quite a bull’s-eye, it’s so darned close to the mark, technically speaking, that we’ve got to compliment him on his aim. The question, then, is how much developers will be able to squeeze out of the less flexible Wii hardware. But if the Wii keeps selling like ice on a hot summer day, it’s unlikely that Nintendo will lose too much sleep over the power disparity.

Source

Well, there you have it. I guess the “Gamecube 1.5” moniker isn’t far from the truth.