Posts Tagged ‘GPU’

Imagination Technologies : PowerVR GPUs

Apple’s new A5 processor features a dual-core PowerVR SGX543 – the same graphics tech set to appear in the forthcoming Sony NGP. The difference is that the new PlayStation portable will double the core count, bringing an unprecedented amount of graphical power to the mobile space.

Firstly, where we are at right now…

Secondly, an interview which gives us some perspective on where we are going…

Two great resources for those of us who are excited about the future of mobile gaming.

Posted: April 13th, 2011
Categories: Apple, GPU, PSP, Sony, ipad

iPad 2 : Some Definitive GPU Benchmarks…

This is the first definitive set of GPU benchmarks for the iPad 2, courtesy of AnandTech…

Developers with existing titles on the iPad could conceivably triple geometry complexity with no impact on performance on the iPad 2.

With a lit triangle, a simple test for fragment shader performance…

While the PowerVR SGX 535 in the A4 could barely break 4 million triangles per second in this test, the PowerVR SGX 543MP2 in the A5 manages just under 20 million.
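For context, the fragment shader a “lit triangle” test of this sort exercises is tiny. Here is a representative OpenGL ES 2.0 shader embedded as a C string (a sketch of the test’s general shape, not GLBenchmark’s actual source):

    /* A representative per-pixel diffuse-lit fragment shader of the
       sort a "lit triangle" throughput test exercises. Illustrative
       only - not GLBenchmark's actual shader source. */
    static const char *lit_fragment_shader =
        "precision mediump float;\n"
        "varying vec3 v_normal;      /* interpolated from vertices */\n"
        "uniform vec3 u_light_dir;   /* normalized light direction */\n"
        "uniform vec4 u_diffuse_color;\n"
        "void main() {\n"
        "    float n_dot_l = max(dot(normalize(v_normal), u_light_dir), 0.0);\n"
        "    gl_FragColor = u_diffuse_color * n_dot_l;\n"
        "}\n";

With so little per-fragment work, a test like this is dominated by raw shader ALU and triangle setup rates, which is what makes it a clean measure of the gap between the two GPUs.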

Texture fetch…

A 5x increase in texture fetch performance. This has to be due to more than an increase in the amount of texturing hardware. An improvement in throughput? An increase in memory bandwidth? It’s tough to say without knowing more at this point.

Those of us working with multiple FBOs for post-processing effects should be very pleased with both the increase in fragment shader performance and the apparent increase in texture fetch throughput.
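For anyone unfamiliar with that workload: each post-processing pass renders into an offscreen FBO whose colour attachment the next pass then samples as a texture, so texture fetch throughput gates the whole chain. A minimal OpenGL ES 2.0 setup sketch (identifier names are illustrative; the header path is <OpenGLES/ES2/gl.h> on iOS):

    /* Minimal OpenGL ES 2.0 offscreen target for a post-processing
       pass: render into the returned FBO, then bind *out_texture and
       fetch from it in the next pass. Names are illustrative. */
    #include <GLES2/gl2.h>

    GLuint create_post_fbo(GLsizei w, GLsizei h, GLuint *out_texture) {
        GLuint fbo, tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL); /* empty colour buffer */
        /* GLES 2.0 requires clamped, non-mipmapped sampling at NPOT sizes. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, tex, 0);
        *out_texture = tex;
        return fbo;
    }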

GLBenchmark 2.0 app & Infinity Blade tests…

While we weren’t able to reach the 9x figure claimed by Apple (I’m not sure that you’ll ever see 9x running real game code), a range of 3 – 7x in GLBenchmark 2.0 is more reasonable. In practice I’d expect something less than 5x but that’s nothing to complain about.

There are more in-depth details in the full article, and they promise follow-ups.

But for a simple head-to-head comparison running GLBenchmark 2.0 Egypt…

Apple iPad 2 (1024 x 768) : 44 FPS

Motorola Xoom (1280 x 800) : 11.8 FPS

Apple iPad (1024 x 768) : 8.1 FPS
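Note that the Xoom is pushing more pixels per frame than either iPad, so a fairer comparison normalizes for resolution. A quick sketch using the figures above:

    /* Normalize the GLBenchmark Egypt results above for resolution:
       frames per second x pixels per frame = pixels shaded per second. */
    #include <stdio.h>

    int main(void) {
        struct { const char *device; double w, h, fps; } results[] = {
            { "Apple iPad 2",  1024, 768, 44.0 },
            { "Motorola Xoom", 1280, 800, 11.8 },
            { "Apple iPad",    1024, 768,  8.1 },
        };
        for (int i = 0; i < 3; i++) {
            double mpix = results[i].w * results[i].h * results[i].fps / 1e6;
            printf("%-14s %5.1f Mpixels/s\n", results[i].device, mpix);
        }
        return 0;
    }

Even normalized (roughly 34.6 vs 12.1 Mpixels/s), the iPad 2 is shading pixels nearly 3x faster than the Xoom.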

Posted: March 13th, 2011
Categories: Analysis, Apple, Benchmarks, ipad

Apple A5 : Sheep in Wolf’s Clothing?

We think the A5 is likely not built around Cortex A9 cores, but instead probably uses two [of] the same custom low-power A8 cores used in the A4. If Apple had indeed used two Cortex A9-based cores, raw performance should be more than double that of a single core A8-based design.

This makes a lot of sense. I noted in February that Apple had received custom silicon for what we expected to be the Apple A5, but that it had not had enough time to transition that silicon into iOS (or ongoing iPad 2 manufacture) for an early 2011 launch. So my best guess was that the iPad 2 would ship with an interim SoC: something like a beefed-up A4, with a faster ARM Cortex A8 and a much better GPU, making up an iPad 2-specific Apple A4-and-a-half.

To be honest, until someone (iFixit and friends) rips the silicon in the iPad 2 apart and sticks it under a microscope, none of us will have much more than guesses to go on about what exactly the Apple A5 is. But it seems very likely that Apple has made expedient decisions to maximise performance while preserving battery life gains.

I have always maintained that Apple’s mobile silicon lineup is more than powerful enough in the CPU department, and what it really needed was a kick on the GPU side. Even the CPU in the original iPhone is still very capable. But the GPUs in all current iOS devices are constantly fighting an uphill battle with fill rate.
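To put a rough number on that fill-rate battle, here is a minimal sketch; the overdraw factor and frame rate are illustrative assumptions, not measured figures:

    /* Back-of-the-envelope fill-rate demand: pixels the GPU must
       shade per second. Overdraw and frame rate are assumptions. */
    #include <stdio.h>

    int main(void) {
        const double width = 1024.0, height = 768.0; /* iPad screen */
        const double overdraw = 3.0;  /* assumed average layers per pixel */
        const double fps = 60.0;      /* target frame rate */

        double pixels_per_second = width * height * overdraw * fps;
        printf("Required fill rate: %.1f Mpixels/s\n", pixels_per_second / 1e6);
        return 0;
    }

Even a modest ~3x overdraw at the iPad’s native resolution and 60 FPS demands over 140 Mpixels/s of shading throughput, before any texture fetches or blending.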

It remains to be seen if the iPhone 5 will get this Apple A5, or a further iteration.

Posted: March 8th, 2011
Categories: ARM, Apple, Technical Specs, ipad

Intel’s Sandy Bridge : The Lowdown…

As always, two great articles from AnandTech…

Spoiler: Sandy Bridge is a great new CPU, and it is going to get a lot of buzz from Apple for its hardware-based transcoding engine; great for ripping movies to mobile devices (yawn).

Unfortunately I don’t see Apple sticking the top-of-the-line 4/8 core versions in any of its laptops anytime soon; they’re expensive! Expect one of the chips a little lower down the range.

Intel also still has a generation or two to go before it gets a handle on what makes great IGPs, and even in this generation their graphics drivers need work. I haven’t even mentioned OpenCL yet; there’s a reason for that!

That’s not to say that Sandy Bridge’s IGP is not a much better IGP than they’ve produced before. It is, but that’s not difficult to achieve, and their current offering is still only on a par with last-generation IGPs from AMD and NVIDIA. So we’ll still need discrete GPUs for some considerable time yet if we want anything approaching contemporary performance.

With Intel now using 32nm process technology on their IGP and 22nm coming in late 2011, we could actually begin seeing a doubling of IGP performance every ~18 months without increasing power requirements, and at some point we stop needing much more than that. Put it another way: Intel’s HD Graphics 3000 with 114M transistors is now providing about the same level of performance as the PS3 and Xbox 360 consoles, and you pretty much get that “free” with any non-Atom CPU going forward. Maybe the next consoles won’t even need to use anything beyond AMD/Intel’s current integrated solutions?
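For a sense of how quickly that cadence compounds, here is a minimal sketch; the 1.0x baseline and six-year horizon are illustrative assumptions, not Intel figures:

    /* Projected IGP performance under an assumed doubling every
       18 months, from an illustrative 1.0x baseline. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double doubling_period_years = 1.5; /* ~18 months */
        for (int year = 0; year <= 6; year += 2) {
            double multiplier = pow(2.0, year / doubling_period_years);
            printf("Year %d: %4.1fx baseline IGP performance\n", year, multiplier);
        }
        return 0;
    }

Six years at that rate is a 16x improvement in the graphics you get “free” with the CPU, which is what makes the console comparison above plausible.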

Posted: January 5th, 2011
Categories: intel

NVIDIA’s Future Roadmap…

“We are not building any more chipsets, we are building SoCs now. We are building Tegra SoCs, and so we are going to take integration to a new level. [...] The chipset business [has] not grown largely this year because we have not really been expanding the sales of it.”

I have been trying to wrap my head around these comments attributed to Jen-Hsun Huang, chief executive officer of Nvidia. Ultimately SoCs, and a merging of OpenGL and OpenGL ES (at least in terms of API compatibility with stepped feature sets), are where we are heading. But I was quite surprised at how fast NVIDIA had come to this conclusion, when you consider they are in the business of shipping GPUs today, and not just R&D or licensing of the technology.
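To see what that convergence already looks like at the API level, here is a minimal sketch of a draw call written against the common subset of desktop OpenGL 2.x and OpenGL ES 2.0; the header paths vary by platform and are illustrative:

    /* A shader-based draw call using only calls that exist identically
       in desktop OpenGL 2.x and OpenGL ES 2.0. Header paths are
       platform specific and illustrative. */
    #ifdef USE_GLES2
    #include <GLES2/gl2.h>
    #else
    #include <GL/gl.h>
    #endif

    void draw_triangle(GLuint program, GLuint vbo, GLint position_attrib) {
        glUseProgram(program);                    /* same call on both APIs */
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableVertexAttribArray(position_attrib);
        glVertexAttribPointer(position_attrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glDrawArrays(GL_TRIANGLES, 0, 3);         /* identical in GL and GLES */
    }

The divergence sits in the feature set around this core (fixed function on the desktop, precision qualifiers in ES shaders), which is exactly what a “stepped feature set” unification would have to reconcile.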

This article seems to illuminate NVIDIA’s strategy to some degree…

NVIDIA has spent the better part of a decade establishing itself as a major GPU player in everything from notebooks to workstations, but the imminent introduction of new products and technologies from competitors like Intel could detrimentally impact the company’s bottom line, particularly as these competing products transition to smaller process nodes and more advanced designs. Up until now, every CPU in existence had to be paired with a GPU that was either integrated into the motherboard or sold separately as a discrete solution; NVIDIA competes in both of these markets with its various integrated chipsets and discrete cards.

If Intel successfully establishes itself as a major player in the discrete GPU market, both NVIDIA and AMD will be faced with an unwelcome third opponent with financial resources that dwarf the two of them combined. As the dominant company in both desktop and workstation graphics, NVIDIA literally has more to lose from such a confrontation, and it’s the only one of the three that does not possess an x86 license or an established CPU brand. This leaves the corporation at a distinct disadvantage compared to AMD; the latter can combine a CPU and GPU into a single package and / or design itself a graphics core based on the x86 architecture. With no simple way to address these issues, NVIDIA is exploring a separate market altogether, and that’s where Tegra comes in.

The last twenty-five years are littered with examples of companies who claimed Intel (and, by extension, the x86 architecture) couldn’t possibly challenge the performance or scalability of their various processors or products. Faced with a future where integrated CPU / GPU hybrids chip away at its budget products and Larrabee challenges the midrange (at least), NVIDIA is pursuing the barely tapped market for smartbooks, UMPCs, MIDs, and next-generation smart phones. The company’s lack of an x86 license could prove to be a disadvantage, but the market space Tegra is targeting is the only one where a non-x86 architecture actually has a chance of succeeding.

Wars aren’t won by sitting at home and waiting for the enemy to come to you, especially when your foe has ten times your revenue and far-reaching connections. Graphics and GPU design will remain a critical part of the company’s future—you don’t pump two years into creating the concept of “visual computing” only to quit—but NVIDIA’s decision to capitalize on the same market opportunities Intel is working to create in Atom’s target market is, at the very least, strategically sound.

Posted: December 21st, 2010
Categories: ARM, GPU, Nvidia

AMD’s Radeon HD 6970 & Radeon HD 6950

AMD’s own internal database of games tells them an interesting story: the average slot utilization is 3.4 – on average a 5th streaming processor is going unused in games. VLIW5, which made so much sense for DX9 vertex shaders, is now becoming too wide, while scalar and narrow workloads are increasing in number. The stage is set for a narrower Streaming Processor Unit; enter VLIW4.

As you may recall from a number of our discussions on AMD’s core architecture, AMD’s architecture is heavily invested in Instruction Level Parallelism, that is having instructions in a single thread that have no dependencies on each other that can be executed in parallel. With VLIW5 the best case scenario is that 5 instructions can be scheduled together on every SPU every clock, a scenario that rarely happens. We’ve already touched on how in games AMD is seeing an average of 3.4, which is actually pretty good but still is under 80% efficient. Ultimately extracting ILP from a workload is hard, leading to a wide delta between the best and worst case scenarios.

The reasoning behind the switch from VLIW5 to VLIW4 makes for very interesting reading.
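Those utilization figures are easy to sanity-check. A minimal sketch, using the 3.4-slot average quoted above:

    /* Average slot utilization for VLIW5 vs VLIW4, using the 3.4-slot
       figure from AMD's internal game database quoted above. */
    #include <stdio.h>

    int main(void) {
        const double avg_slots_filled = 3.4;
        printf("VLIW5 efficiency: %.0f%%\n", 100.0 * avg_slots_filled / 5.0);
        printf("VLIW4 efficiency: %.0f%%\n", 100.0 * avg_slots_filled / 4.0);
        return 0;
    }

The same 3.4-instruction average that leaves a VLIW5 unit 68% utilized fills a VLIW4 unit to 85%, which is the whole argument for going narrower.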

Posted: December 19th, 2010
Categories: Technical Specs

PowerVR Buys Ray Tracing Company Caustic

Imagination Technologies on Tuesday announced the acquisition of Caustic Graphics for $27 million. Caustic creates ray tracing technology, a technique for rendering three-dimensional graphics with complex and more natural lighting models….

“Ray tracing is a key additional technology that traditionally has been regarded as the exclusive domain of specialized markets and non real-time applications,” said Hossein Yassaie, chief executive of Imagination. “We intend to change that.”

“Our vision is to enable cinema quality computer graphics at new cost and power consumption design points,” said Caustic Chief Executive Chip Stearns. “We are excited at the prospect of becoming part of the Imagination team as we bring ray tracing to a much broader base, and utilize their extensive resources and partnerships to bring our technology to every consumer screen.”

This is one to watch. I’ve been following their share price since June with great interest.

Posted: December 14th, 2010
Categories: Apple, News

Apple to tap Intel’s graphics for future MacBooks?

MacBook models with screen sizes of 13 inches and below are expected to switch to Sandy Bridge-only graphics, while higher-end MacBook Pros are expected to use graphics from Advanced Micro Devices, according to sources. Whether Nvidia will still be present in higher-end models is unclear.

Sandy Bridge is a watershed processor for Intel because, for the first time in a mainstream product, the graphics chip is grafted directly onto the main processor, boosting performance, while essentially providing the graphics function for free. And the step up in performance may be enough for Apple to rely on Intel’s graphics in some lower-end MacBooks.

Seems very unlikely to me. Intel does not have a good history of delivering on GPU performance promises.

Anyone remember how unpopular the GMA 950 IGPs from Intel were? I am not sure Apple want to go through that pain again. Their customers certainly don’t. Nor are they particularly happy with the current GPU restrictions imposed by Intel on much of Apple’s product line.

AMD is a likely shoo-in for more components in future. And not necessarily just GPUs.

Apple’s plan for the future is most definitely to be as silicon agnostic as possible. It should be any manufacturer’s aim at this point in our industry’s history. For that reason I think Apple is likely to keep more manufacturers on tap, rather than fewer. Dropping NVIDIA means they are less likely to be there for Apple’s future needs.

Right now Intel are still being coy about how far they plan to go with OpenCL support. Without that, any deal arrived at out of choice is simply not going to happen with Apple.
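That support gap is easy to probe from the standard OpenCL C API, which is part of why it is so visible. A minimal sketch that asks each platform whether it exposes any GPU devices (error handling trimmed for brevity):

    /* Query each OpenCL platform for GPU devices - the kind of
       capability check an OpenCL-reliant stack depends on. */
    #include <stdio.h>
    #include <CL/cl.h>   /* <OpenCL/opencl.h> on Mac OS X */

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint i = 0; i < num_platforms; i++) {
            char name[256];
            cl_uint num_gpus = 0;
            clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME,
                              sizeof(name), name, NULL);
            /* Returns CL_DEVICE_NOT_FOUND if the platform has no GPU. */
            cl_int err = clGetDeviceIDs(platforms[i], CL_DEVICE_TYPE_GPU,
                                        0, NULL, &num_gpus);
            printf("%s: %u GPU device(s)\n", name,
                   err == CL_SUCCESS ? num_gpus : 0);
        }
        return 0;
    }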

Posted: December 9th, 2010
Categories: Apple, Nvidia, Technical Specs, intel

Apple Acknowledges Mac Pro ATI X1900 XT Problem

Apple has determined that certain ATI X1900 XT cards sold for use in Mac Pro and Mac Pro (8x) computers between approximately August 2006 and January 2008 may experience distorted video. Affected graphics cards have “V6Z” in the last 4 digits of the card’s serial number.

It would be nice if they would also acknowledge the problem with the ATI X1600 in MacBook Pros from the same period.

Posted: September 29th, 2010
Categories: Apple, Mac

Jon Peddie on the Nintendo 3DS, DMP and its GPU…

The company has been investigating various semiconductor and IP suppliers since 2006, having looked at their partner ATI (Wii), ARM (DS), Imagination Technologies, Nvidia, and others. The decision to use DMP’s PICA200 design was made over a year ago, and testing and development have been going on for some time; it’s not as easy as it may seem to license a core, integrate it into an SoC, and get the costs (die size), power consumption (it has to run forever on small batteries), and performance (clocks and memory management) balanced. So as you learn more about this device, if you wonder why it took them so long, keep all that in mind.

Founded in 2002, DMP, a graphics IP core supplier in Japan, has adopted a business strategy of focusing on the digital consumer market.

DMP first told me about the PICA architecture in early 2005; it was their first IP core based on the Ultray architecture. The president and CEO of DMP, Tatsuo Yamamoto, told me then that Ultray allows real-time photorealistic rendering with physically correct lighting and shadowing, such as soft shadow casting and position-dependent environment mapping.

Ultray is unique in that it uses hardware parametric engines for certain graphics features rather than shaders. With this approach, clouds, smoke, gas and other fuzzy objects can be shaded and rendered at an interactive rate.

This is what will make or break the software we see.

At Siggraph 2005 (LA) DMP revealed in more detail some of their techniques for hair, skin, and gaseous shapes. Yamamoto said then that the Ultray could boast lower power consumption due to its hardware pipelines, and a smaller number of polygons to achieve high-quality graphics, based on pixel-level shading (Phong, BRDF, etc.) vs. vertex-level shading and polygon subdivision.

Posted: June 23rd, 2010
Categories: Analysis, Nintendo