Haswell to include an L4 cache?


Fjodor2001

Diamond Member
Feb 6, 2010
3,953
416
126
SemiAccurate has a pretty in-depth article about Haswell's graphics:

http://semiaccurate.com/2012/04/02/haswells-gpu-prowess-is-due-to-crystalwell/

"We are told the GT3 variants of Haswell will have 64MB of on-package memory connected through an ultra-wide bus."

Will that really be enough? Discrete GFX cards have like 1-3 GB of RAM!

Seems like the worst-case scenario will be really bad with only 64 MB. Then you'll sometimes have to swap out the contents of that 64 MB memory and repopulate it from normal system RAM, i.e. you fall back to the same memory bandwidth as today with Sandy/Ivy Bridge. So the worst case will not be any better than with those CPUs, right?

So you might get very high performance for a while when gaming, and then from time to time the performance drops radically when the 64 MB cache memory has to be swapped out? So the rendering will stutter?

"In the end, the massive bandwidth, coupled with the 5x increase in shader performance, will mean Haswell is a real graphics monster."

Can that really be true? 5x performance increase compared to Ivy Bridge HD4000 IGP?

The VR-Zone article estimates a 2-3x increase instead (see: http://vr-zone.com/articles/mystery...up-the-graphics-ante-further-again/15272.html).

Are either realistic? :hmm:
 

blckgrffn

Diamond Member
May 1, 2003
9,214
3,237
136
www.teamjuchems.com
"We are told the GT3 variants of Haswell will have 64MB of on-package memory connected through an ultra-wide bus."

Will that really be enough? Discrete GFX cards have like 1-3 GB of RAM!

Seems like the worst-case scenario will be really bad with only 64 MB. Then you'll sometimes have to swap out the contents of that 64 MB memory and repopulate it from normal system RAM, i.e. you fall back to the same memory bandwidth as today with Sandy/Ivy Bridge. So the worst case will not be any better than with those CPUs, right?

So you might get very high performance for a while when gaming, and then from time to time the performance drops radically when the 64 MB cache memory has to be swapped out? So the rendering will stutter?

"In the end, the massive bandwidth, coupled with the 5x increase in shader performance, will mean Haswell is a real graphics monster."

Can that really be true? 5x performance increase compared to Ivy Bridge HD4000 IGP?

The VR-Zone article estimates a 2-3x increase instead (see: http://vr-zone.com/articles/mystery...up-the-graphics-ante-further-again/15272.html).

Are either realistic? :hmm:

If it's used like the cache on the Xbox 360 (20MB), it could really help.
 

beginner99

Diamond Member
Jun 2, 2009
5,224
1,598
136
"We are told the GT3 variants of Haswell will have 64MB of on-package memory connected through an ultra-wide bus."

Will that really be enough? Discrete GFX cards have like 1-3 GB of RAM!

Exactly RAM, not cache. Cache is a lot faster.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,953
416
126
Exactly RAM, not cache. Cache is a lot faster.

True, but the RAM on discrete GFX cards is a lot faster than regular PC RAM. That's why the bandwidth is so much higher on discrete graphics cards, and why it is a bottleneck on the Sandy/Ivy Bridge IGP.

So the question is still valid. Since the 64 MB cache cannot hold as much data as the 1-3 GB of fast RAM on discrete graphics cards, the worst-case scenario, when the IGP has to fetch data from regular PC RAM instead of the cache, will not be any better on Haswell than on Ivy/Sandy Bridge. Right?

And with a discrete graphics card that worst-case scenario will never happen, since all GFX data will normally fit in the 1-3 GB of fast on-board GFX RAM while gaming.
 

Khato

Golden Member
Jul 15, 2001
1,225
281
136
If it's used like the cache on the Xbox 360 (20MB), it could really help.

The Xbox 360 eDRAM is 10MB, no? Have to remember that it was originally built on 90nm, and at that node it was a pretty fair-sized die.

Regardless, the smaller size just results in it being unable to store all game textures... but textures aren't the only source of bandwidth consumption. Unfortunately it's annoying to find current figures for the various sources of bandwidth consumption; texturing used to account for around 75%. Even at high resolutions, Z, color, and render target buffers should fit within a 64MB eDRAM. If those buffers still account for a fair amount of bandwidth, then removing them and having only textures in main memory could result in a marked difference.
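As a rough back-of-the-envelope check (the buffer formats below are my own assumptions for illustration, not anything Intel has confirmed), the common per-pixel buffers at 1080p come out well under 64MB:

```python
# Rough framebuffer footprint at 1080p, assuming 32-bit color, 32-bit
# depth/stencil, and two extra 32-bit render targets. The formats are
# illustrative assumptions, not confirmed Haswell/Crystalwell details.
width, height = 1920, 1080
pixels = width * height

bytes_per_pixel = {
    "color":         4,  # e.g. RGBA8
    "depth_stencil": 4,  # e.g. D24S8
    "extra_rt_1":    4,
    "extra_rt_2":    4,
}

total_mib = pixels * sum(bytes_per_pixel.values()) / 2**20
print(f"~{total_mib:.1f} MiB of buffers at 1080p")  # ~31.6 MiB, well under 64MB
```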
 

blckgrffn

Diamond Member
May 1, 2003
9,214
3,237
136
www.teamjuchems.com
The Xbox 360 eDRAM is 10MB, no? Have to remember that it was originally built on 90nm, and at that node it was a pretty fair-sized die.

Regardless, the smaller size just results in it being unable to store all game textures... but textures aren't the only source of bandwidth consumption. Unfortunately it's annoying to find current figures for the various sources of bandwidth consumption; texturing used to account for around 75%. Even at high resolutions, Z, color, and render target buffers should fit within a 64MB eDRAM. If those buffers still account for a fair amount of bandwidth, then removing them and having only textures in main memory could result in a marked difference.

Ah, perhaps, I am probably wrong...

http://www.gamespot.com/forums/topic/26116823

Ah yes, I am

So, if 10MB is good for ~640P, what resolution will 64MB be good for? 1080P? I am guessing it is not a linear scale?
 

Khato

Golden Member
Jul 15, 2001
1,225
281
136
So, if 10MB is good for ~640P, what resolution will 64MB be good for? 1080P? I am guessing it is not a linear scale?

Should be a linear scale with resolution. Though depending upon what all Intel uses it for in comparison to the Xbox 360, the scale could well be different. Regardless, the real question is how much of the total bandwidth the buffers they can fit in there account for. Intel's inclusion of an eDRAM implies it's a significant share.
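To put rough numbers on that linear scaling (the bytes-per-pixel figure and the 16:9 "~640p" resolution are assumptions, just for illustration):

```python
# Buffer footprint scales linearly with pixel count at fixed bytes per pixel,
# so the supported resolution scales linearly with the eDRAM size too.
def buffer_mib(width, height, bytes_per_pixel=8):  # assume 4B color + 4B depth
    return width * height * bytes_per_pixel / 2**20

for name, (w, h) in {"~640p": (1138, 640), "720p": (1280, 720),
                     "1080p": (1920, 1080)}.items():
    print(f"{name}: {buffer_mib(w, h):.1f} MiB")
# ~640p: 5.6 MiB, 720p: 7.0 MiB, 1080p: 15.8 MiB
# 64MB is 6.4x the 360's 10MB, while 1080p is only ~2.8x the pixels of ~640p.
```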
 

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
Processors are becoming more and more cache with some logic chunked on the side for good measure. On-die stacked RAM should be considerably quicker than the RAM attached to a GPU, but then the bandwidth to main RAM is considerably lower. I doubt the cache will compensate enough, but then they aren't yet aiming for discrete GPU performance, just to eat more and more into the volume market.
 

Mr. Pedantic

Diamond Member
Feb 14, 2010
5,039
0
76
It's about the workloads. The typical CPU is interested in low-latency accesses to a small subset of the memory, and is thus well served by a good cache hierarchy. The SNB cache system has a total hit rate well in excess of 95%, which means you get some 20 times more realized bandwidth than what your memory provides.

The typical GPU workload consists of rapidly streaming through large data sets. This is essentially uncacheable, as accessing an item of memory makes it the least likely one to be accessed again in the near future. So what you want is just raw bandwidth.
But GPUs have L1 and L2 caches now too...
 

pelov

Diamond Member
Dec 6, 2011
3,510
6
0
How much would something like this cost? 64MB on-package memory, L4 cache... I'm afraid to ask.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
Can that really be true? 5x performance increase compared to Ivy Bridge HD4000 IGP?
The VR-Zone article estimates a 2-3x increase instead (see: http://vr-zone.com/articles/mystery...up-the-graphics-ante-further-again/15272.html).
Are either realistic? :hmm:

Errr... in some older leak, SemiAccurate said it was 5x Sandy Bridge, and that fits very well with the VR-Zone performance numbers.

I was under the impression that the caches on GPUs today were more for HPC tasks than for graphics tasks. I could be wrong however

You are; caches are important, mainly for the ROPs.
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,953
416
126
How much would something like this cost? 64MB on-package memory, L4 cache... I'm afraid to ask.

Good question. Also, the Haswell with the 64 MB cache will be a multi-chip module (from what has been said), which is more expensive.

So for cost efficiency, if you're building a desktop system, wouldn't it be better to buy a cheaper 4-core Haswell with the lowest-performing IGP (GT1?, which doesn't have the 64 MB cache) and a discrete GFX card for about $100 instead?
 

Fjodor2001

Diamond Member
Feb 6, 2010
3,953
416
126
Errr... in some older leak, SemiAccurate said it was 5x Sandy Bridge, and that fits very well with the VR-Zone performance numbers.

Which Sandy Bridge IGP did they compare it to when concluding Haswell had 5x the performance? HD2000 or HD3000? There's a huge difference between those two.
 

denev2004

Member
Dec 3, 2011
105
1
0
How much would something like this cost? 64MB on-package memory, L4 cache... I'm afraid to ask.
It wouldn't cost too much if they don't use traditional SRAM technology. Note that POWER7 is built on 45nm and still gets 32MB of eDRAM on die.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,429
1,780
136
But GPUs have L1 and L2 caches now too...

Edrick is right, they are mostly useful for GPGPU tasks. They do have *some* use for graphics too, as they act as write/read-combining caches -- GPU loads have no temporal locality (that is, when you look at a piece of memory, you are very unlikely to look at it again in the near future), but they do have bucketloads of spatial locality (that is, when you look at a piece of memory, you are very likely to look at nearby pieces of memory soon). So when you do a texture lookup, you don't just get the texel you want, you get everything in the vicinity and hope that you are going to need it to render nearby pixels. This allows for some gain in efficiency from caching, but the gains are more like 1.5 to 2 times, compared to the 20 times and more that CPUs get.

Again, this is not a function of the structure of the devices; it's a function of the loads they run. If you build a software renderer that works like a GPU, the fat caches on the CPU will give it no advantage. The devices have just evolved to fit their purpose well.
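A quick illustration of where those multipliers come from (the hit rates are assumed round numbers, not measured figures): only the misses go out to memory, so realized bandwidth is roughly 1/(1 - hit rate) times what the memory alone provides.

```python
# Effective bandwidth amplification from a cache with hit rate h:
# only the (1 - h) fraction of accesses goes to memory, so gain ~ 1 / (1 - h).
# The hit rates below are illustrative assumptions, not measurements.
def amplification(hit_rate):
    return 1.0 / (1.0 - hit_rate)

print(f"CPU-like, 95% hit rate: ~{amplification(0.95):.0f}x")  # ~20x
print(f"GPU-like, 40% hit rate: ~{amplification(0.40):.1f}x")  # ~1.7x
```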

Regardless, the smaller size just results in it being unable to store all game textures... but textures aren't the only source of bandwidth consumption.
The XB360 eDRAM is never used to store any textures. It is strictly a render target. (The ROPs are actually on the daughter die, and they are tightly integrated with the memory pool.)
Unfortunately it's annoying to find current figures for the various sources of bandwidth consumption; texturing used to account for around 75%.
Deferred rendering and deferred texturing have switched this around -- they make the texture lookups from the big pools relatively rare, but massively increase the writing to (and reading from) the render targets. Texturing (from the big pool) would then typically be less than 30% of the total bw, and most of the "texture lookups" would actually happen from the render targets of the previous phases.

Even at high resolutions, Z, color, and render target buffers should fit within a 64MB eDRAM. If those buffers still account for a fair amount of bandwidth, then removing them and having only textures in main memory could result in a marked difference.

So, if 10MB is good for ~640P, what resolution will 64MB be good for? 1080P? I am guessing it is not a linear scale?

This is only true for traditional rendering schemes. Deferred rendering techniques write a lot more than just a few color values into the render targets. Think more like a few stages writing 16 bytes per stage (per pixel, per frame). 64MB would probably struggle with BF3 at 1080p.
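Rough arithmetic on that (the stage counts are my assumption, just to show the order of magnitude):

```python
# Deferred-rendering G-buffer footprint at 1080p, assuming 16 bytes written
# per pixel per stage. The stage counts are assumptions for illustration.
pixels = 1920 * 1080
bytes_per_stage = 16
for stages in (3, 4, 5):
    mib = pixels * bytes_per_stage * stages / 2**20
    print(f"{stages} stages: ~{mib:.0f} MiB")
# 3 stages: ~95 MiB, 4 stages: ~127 MiB, 5 stages: ~158 MiB -- all past 64MB.
```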

This is all somewhat related, in that one of the reasons for going with deferred rendering is that it gains relatively a lot from bigger caches on the GPU.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,106
136
<snip>
This is only true for traditional rendering schemes. Deferred rendering techniques write a lot more than just a few color values into the render targets. Think more like a few stages writing 16 bytes per stage (per pixel, per frame). 64MB would probably struggle with BF3 at 1080p.

This is all somewhat related, in that one of the reasons for going with deferred rendering is that it gains relatively a lot from bigger caches on the GPU.

Thanks for the very informative post :thumbsup:
 