Tuesday, December 11, 2007
As I've thought more about Larrabee, I have some additional observations:
I'd like to clarify my point about the x86 instruction set. It's not innately bad, but it doesn't advance the architecture. The x86 ISA has too few registers, irregular instruction lengths, and highly irregular instruction decoding - none of which helps architectural efficiency. It's the enormous amount of work and R&D poured into x86 software that compensates for those shortcomings. Given a clean slate, few would choose the x86 ISA for maximum performance or efficiency. My point is that x86 is largely irrelevant for a GPU, though it will help Intel in the HPC market.
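To make the "irregular length and decoding" complaint concrete, here are a few standard x86 encodings. The byte values come straight from the x86 manuals; the little snippet itself is just my illustration:

    /* x86 instructions run anywhere from 1 to 15 bytes, and the decoder
       has to work out where each one ends before it can find the next. */
    unsigned char push_ebp[]    = { 0x55 };                         /* 1 byte:  push ebp         */
    unsigned char addps_x0_x1[] = { 0x0F, 0x58, 0xC1 };             /* 3 bytes: addps xmm0, xmm1 */
    unsigned char mov_eax_1[]   = { 0xB8, 0x01, 0x00, 0x00, 0x00 }; /* 5 bytes: mov eax, 1       */

Compare that with a fixed 4-byte RISC encoding, where finding instruction boundaries is trivial.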
With regard to the wide SIMD (SSE) structure - we'll have to see how effective Intel is at keeping it packed with useful operations. It looks like the width was chosen to hit certain theoretical performance metrics, not to fit realistic workloads. The same ambition infected the Cell processor, and we're still waiting for software that comes anywhere near those heavily promoted peak numbers.
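Here's the arithmetic behind those headline numbers, as a little C sketch. The specific figures (core count, lane count, clock) are hypothetical placeholders of mine, not Intel's disclosed specs; the point is how quickly the marketing number shrinks once SIMD utilization is less than perfect:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical figures for illustration only - not Intel's specs. */
        double cores          = 32;   /* simple in-order x86 cores        */
        double simd_lanes     = 16;   /* single-precision lanes per core  */
        double flops_per_lane = 2;    /* a multiply-add counted as 2 ops  */
        double clock_ghz      = 2.0;

        double peak_gflops = cores * simd_lanes * flops_per_lane * clock_ghz;
        printf("theoretical peak: %.0f GFLOPS\n", peak_gflops);

        /* The peak assumes every lane does useful work every cycle.
           Real graphics and HPC kernels don't, so scale by utilization. */
        for (double util = 0.75; util >= 0.25; util -= 0.25)
            printf("at %2.0f%% utilization: %.0f GFLOPS\n",
                   util * 100.0, peak_gflops * util);
        return 0;
    }

Cell's history suggests the achieved number for real code sits well down that list, not at the top.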
I remain convinced that Intel doesn't have the formula to produce a world-class GPU. I see no sign they have the specialized talent to make it succeed. At best, it will be a reasonably sized die with mediocre performance, but it will have Intel's brand name behind it. That will sell some units, but I hope Intel is ready for the realities of GPU ASPs in the mainstream of the market. At least it should be better than the Extreme (-ly bad) Graphics they ship today.
I also realized that most PC buyers have no idea how bad Intel's integrated graphics really is. The gamer sites never test Intel graphics because they know how bad it is and it's not worth the effort. But mainstream consumers don't know how bad it is because NO ONE TELLS THEM! Both ATI and NVIDIA are guilty here, because we both still have to work with the Intel chipset group and neither company has been willing to take Intel on directly. Shame on us. It's also not a topic that consumer publications make a big point of, since many in the industry consider it common knowledge - it's not news to them. So Intel gets a free ride. I was waiting for AMD to push its platform strategy and call out Intel graphics, but I haven't seen it yet. Maybe when its new DX10 chipset ships in Q1.
With regard to Larrabee's supposed process advantage, I find it difficult to imagine Intel will use its leading-edge process for Larrabee. I expect the chip and its successors will be six months to a year behind the leading edge, and if that's the case, TSMC will be very competitive. In addition, our design process is great at turning around designs quickly, while Intel's is geared more toward die optimization with custom circuit design. Intel will likely make Larrabee coherent, so that multi-chip scaling lets it build one die and still create multiple graphics-card solutions. We tend toward multiple designs, with each die optimized for a specific price point. AMD may also be moving toward the same idea as Intel. We'll have to follow that development closely.
So, if anyone else would like to comment, please add your voice. I do moderate the comments, but I promise to publish any reasonable response.
Friday, December 07, 2007
My last post did generate a response, so check it out below and my response to the comment.
I should add that I was talking from my perspective. NVIDIA does take the Larrabee program very seriously. I see it more as a research project gone amok.
What I also find amusing is that Intel, for many years, tried to run away from the x86 instruction set with the i960, i860, i432, EPIC (Itanium), and XScale, and now x86 is the answer to all applications. Fred Weber, when he was CTO of AMD, was the first person I know of to actively promote the "x86 everywhere" strategy. While AMD still embraces this notion, I haven't seen them approach graphics directly with the x86 instruction set.
I think that AMD's Fusion is a wiser move overall.
Wednesday, December 05, 2007
It's a bit late (actually, you may have noticed I tend to blog late at night anyway), but I've been wanting to take on a topic that interests me both from an NVIDIA point of view and from my Microprocessor Report analyst background: the chances that Intel's Larrabee will be a successful GPU.
I would like to offer the opinion that the deck is stacked against it, despite Intel's hype machine pushing it. For one, the design looks more like the IBM Cell processor or Sun's Niagara (UltraSPARC T2) than a graphics chip. Cell proved ineffectual as a GPU (which explains why the PS3 got a separate graphics chip), and Niagara is focused on Web serving (lots of threads to service).
Let me also ask this question: when has Intel EVER produced a high performance graphic chip? (Answer: never)
Oh, and when did we all decide that x86 was the perfect instruction set for graphics? Hmmmm, it's not. There is no example where x86 is the preferred instruction set for either graphics or high-performance computing. The x86 instruction set is a burden on the chip, not an advantage, for graphics. In fact, GPU instruction sets are hidden behind the OpenGL and DirectX APIs, so the only people who care are the driver programmers - we are free to design whatever instruction set is most efficient for the task, while Intel has handcuffed itself to a fixed instruction set. Not the smartest move for graphics. Intel's plan may work better for HPC, though. Still, there are many alternatives to x86 that are actually better (shocking, isn't it)!
Larrabee is based on a simplified x86 core with an extended, SSE-like SIMD processing unit. To be efficient, that extended-SSE unit needs to be kept packed with useful work. How is Intel going to do that? The magic is all left to the software compiler, which is going to have a tough time finding that much parallelism.
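To see why that's hard, here's a toy scalar loop of the kind a Larrabee compiler would have to spread across a 16-wide SIMD unit. This is my sketch, not Intel's code; the branch is the problem:

    /* Data-dependent branch: a 16-wide vectorized version has to compute
       both sides for all 16 lanes and blend the results under a mask, so
       every lane sitting on the "wrong" side of the branch is wasted work. */
    void shade(float *out, const float *in, int n) {
        for (int i = 0; i < n; i++) {
            if (in[i] > 0.5f)
                out[i] = in[i] * in[i] * 3.0f;   /* expensive path */
            else
                out[i] = 0.0f;                   /* cheap path     */
        }
    }

If only four of the sixteen lanes take the expensive path, three quarters of that issue slot is thrown away - and this is the friendly case, where the loop vectorizes at all; real shader and physics code is messier.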
Larrabee will have impressive theoretical specs that will wow the press, but it's going to be very hard to use the chip anywhere near its potential, which sounds exactly like Cell. And Cell has not lived up to its hype. So my corollary: Larrabee = Cell; Cell does not = hype; therefore Larrabee will not = hype.
Now we wait until sometime in mid-2008 to see some silicon. But you heard it here first: Larrabee will not come close to its potential performance (not by a long shot) and will not displace NVIDIA as the GPU leader.
Monday, December 03, 2007
Hi Ho Everybody, I'm back. I haven't had a lot of time with the Eee PC, but it's still reasonably nice. I'll like it a lot better when ASUS increases the screen resolution. It would also be nice to move up to a Core 2 processor with real power management, but these are the trade-offs you make when it's only $400.
I've been in the middle of thinking about AMD, Intel, and NVIDIA as we all do this dance around the ring - it's like a three-way fight where we take turns helping and hurting each other. I'd call us the Good, the Bad, and the Ugly (I'll let you all decide who is who, but I know my choices).
On the topic of ASUS, I'm beginning to wonder about the quality of their motherboards. Both my stepson and I have had problems with our motherboards not booting properly, and these are two completely different boards. How hard can that be to get right?
One project I recently completed was shooting video at the memorial for the mother of my wife's brother-in-law. I used Pinnacle Studio 10.5 and found it pretty easy to put together a good basic video with fades and titles. I want to do more work with it over the holidays.