How can too much cache memory be a bad thing?

Yanagi

Golden Member
Jun 8, 2004
1,678
0
0
I have an analogy. It might put things in overly simplistic terms, but someone can gladly correct me on this one then.

If you think of the cache as a library, each book is a byte, the aisles are column addresses, and so on.

Now, if you had to find a specific book for a specific task, in which of two libraries would you find out fastest whether the book was actually there, the big one or the small one (assuming you search manually)?

That's about the only thing I can come up with, really, and if I oversimplified it so much that it has become incorrect, someone will let us know.
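The library analogy can be made concrete with a tiny direct-mapped cache model. This is a hypothetical Python sketch, not how any real hardware is built: the point is that an address maps to exactly one slot, so finding out whether the "book" is there takes a single tag comparison rather than a search of the whole library.

```python
# Toy direct-mapped cache model (illustrative only; sizes are made up).
# Each address maps to exactly one slot, so a hit/miss check is a
# single tag comparison -- no searching the whole "library".

NUM_SETS = 8          # hypothetical tiny cache: 8 slots
LINE_SIZE = 16        # bytes per cache line

cache = [None] * NUM_SETS  # each slot holds the tag of the line stored there

def lookup(address):
    """Return True on a hit, False on a miss (and fill the slot)."""
    line = address // LINE_SIZE
    index = line % NUM_SETS       # which "aisle" the book must live in
    tag = line // NUM_SETS        # which "book" would be on that shelf
    if cache[index] == tag:
        return True               # hit: found with one comparison
    cache[index] = tag            # miss: fetch from RAM, fill the slot
    return False

print(lookup(0x40))   # first access to this line: miss -> False
print(lookup(0x40))   # same line again: hit -> True
```

Note that the lookup cost does not grow with the number of addresses you could ask about, which is why checking the cache first is cheap.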
 

dalearyous

Senior member
Jan 8, 2006
836
0
0
i FINALLY found an answer that i can understand

But, to make use of cache, the CPU has to add a step where it has to 'look up' the address stored in the cache first to see if an address' data is _already_ in the cache before fetching it from RAM. This 'lookup' takes more time than direct access to RAM and can have an adverse effect on speed if the cache is too large for the processor or is mismanaged. In the old days of desktop computing, the mid '80s and early '90s, cache was purchased and installed separately in little slots next to the CPU on the mainboard, and it was possible to install too much cache.
 

faxon

Platinum Member
May 23, 2008
2,109
1
81
rofl, that article you linked is HORRIBLE. i couldnt make heads or tails of it half the time because it literally DID NOT make sense. they completely abused the term cache to refer to the system's RAM and swap file (page file, virtual memory, whatever you want to call it, it's all the same), which is going to severely confuse less-informed readers who are trying to upgrade their Dell because it isnt running as fast as they want. they are going to think they need more cache, call Dell and say as much, and end up buying a significantly more expensive CPU which probably isnt going to fix their problems nearly as much as adding another 2GB of RAM would. that is what the article is actually about: your system needing more physical RAM than it has. i left a pretty long comment there about this, so if anyone else wants to chip in, go ahead.
 

Idontcare

Elite Member
Oct 10, 1999
21,118
59
91
Originally posted by: dalearyous
i FINALLY found an answer that i can understand

But, to make use of cache, the CPU has to add a step where it has to 'look up' the address stored in the cache first to see if an address' data is _already_ in the cache before fetching it from RAM. This 'lookup' takes more time than direct access to RAM and can have an adverse effect on speed if the cache is too large for the processor or is mismanaged. In the old days of desktop computing, the mid '80s and early '90s, cache was purchased and installed separately in little slots next to the CPU on the mainboard, and it was possible to install too much cache.

I guarantee you that looking up the address of the data in cache is ridiculously faster than looking up the address of that data in RAM.

Think about it. Accessing RAM is not the same as looking up the address in RAM, just as accessing cache is not the same as looking up the address in cache.

You (or your source) are comparing apples to oranges.

The round trip to access the cache, find the data's address, and send the data to the CPU's L1 cache is always faster than doing the same thing with DRAM.

Just as a round trip through DRAM is always faster than the same round trip to get the data off your hard drive.

Think about it.
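IDC's round-trip argument can be sketched with the standard average-access-time formula. The latency figures below are made-up round numbers for illustration, not measurements of any particular CPU; the point is how quickly even a modest hit rate pays for the lookup step.

```python
# Back-of-envelope average access time for a two-level hierarchy.
# Latencies are assumed round figures, not real measurements.

CACHE_LATENCY_NS = 1.0    # assumed on-die cache round trip
DRAM_LATENCY_NS = 60.0    # assumed DRAM round trip (a failed cache
                          # lookup adds almost nothing on top of this)

def avg_access_ns(hit_rate):
    """Average access time = hit_rate * t_cache + miss_rate * t_dram."""
    return hit_rate * CACHE_LATENCY_NS + (1 - hit_rate) * DRAM_LATENCY_NS

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    print(f"hit rate {hit_rate:.0%}: {avg_access_ns(hit_rate):.1f} ns")
```

Even at a 50% hit rate the average access is roughly half the DRAM latency, which is why the extra lookup step is a clear win whenever the cache is meaningfully faster than memory.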
 

faxon

Platinum Member
May 23, 2008
2,109
1
81
IDC, i think you are missing the point about how, when the CPU cache wasnt on the CPU die, it was actually significantly slower than the operational speed of the processor, since it ran at some multiple of the motherboard speed. of course, this was only true of the L2 cache, as the L1 cache has always been on the CPU die as far as i know. if the cache was large enough, cycling through the addressing on the cache could in theory be slower than cycling through a certain quantity of RAM where the data you need might be located. this is obviously NOT true if the cache is on the CPU die itself, since it then operates at the same frequency as the CPU (in most cases) and doesnt have the latency of going out to DRAM.

also, if there was enough cache available to store all your data, making it unnecessary to use RAM at all, we would never run into this problem in the first place. but that will typically only occur if you have way more cache than your CPU can utilize relative to the amount of RAM the system has, since there will always be data that wont fit in CPU cache at current cache/memory ratios.

if anyone would like to learn more about exactly how cache and memory performance work, it is covered in detail in chapters 3 and 6 of Scott Mueller's Upgrading and Repairing PCs, 18th edition, ISBN-13: 978-0-7897-3697-0 or ISBN-10: 0-7897-3697-7 (which im currently holding). it is also a great resource to have, since it covers some of the most highly technical aspects of the business that most tech classes wont cover, but which can be extremely beneficial in troubleshooting issues you may uncover when working with PCs.
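The off-die-L2 point can be sketched numerically. Every number below is a hypothetical round figure loosely in the spirit of a mid-90s system (not a measurement of any specific chip): the same cache access, paced by a slow motherboard bus instead of the core clock, can end up in the same ballpark as DRAM latency or worse.

```python
# Rough comparison of on-die vs off-die (motherboard) L2 latency.
# All figures are assumed round numbers for illustration only.

CPU_CLOCK_MHZ = 200        # assumed CPU core clock
BUS_CLOCK_MHZ = 66         # assumed motherboard/front-side bus clock
ACCESS_CYCLES = 10         # assumed cycles for one cache access
DRAM_LATENCY_NS = 120      # assumed DRAM access time of the era

# Same access, different pacing clock:
on_die_ns = ACCESS_CYCLES / CPU_CLOCK_MHZ * 1000
off_die_ns = ACCESS_CYCLES / BUS_CLOCK_MHZ * 1000

print(f"on-die L2:  {on_die_ns:.0f} ns")   # 50 ns
print(f"off-die L2: {off_die_ns:.0f} ns")  # 152 ns
print(f"DRAM:       {DRAM_LATENCY_NS} ns")
```

In this toy model the off-die lookup actually exceeds the assumed DRAM latency, which is the scenario where adding more slow external cache could hurt rather than help.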
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Ummm...The linked article is talking about operating system disk caching, not cpu cache.


 

VirtualLarry

No Lifer
Aug 25, 2001
56,450
10,119
126
"The system cache's working set limit can and typically does exceed how much memory is installed in the system," explained a member of the Microsoft NTDebugging team.

That sounds like a severe bug. The whole point of RAM-based caching is that it's faster than disk. But if it grows so much that some of it gets paged out to disk, then it's not an advantage any more. You would think Microsoft would be more careful than this.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,450
10,119
126
I would move this thread to Operating Systems, where some of the heavyweights there can chime in on this article.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
That article is retarded. You can't have too much filesystem cache. How the kernel deals with memory when pressure gets high is a completely separate issue.

And the article is full of incorrect details.

Write I/O will be buffered and the working set of the system cache will expand. Ultimately, system threads will be dumped to the hard disk, and this is where the problem will occur, because of the differences in speed management between the disk's speed and that of the I/O write. The exception typically occurs when the cache manager, because of a faulty application or service, will be unable to handle throttling I/O. But at the same time, excessive cached read I/O will also present problems.

If you've got a lot of write I/O then you're going to have I/O contention because disks are so slow; there's no way around that. If you've got 4G of memory and some process is trying to write out 16G of data, you want some of it buffered in memory, otherwise the disk will be too busy writing to service anything else and the machine will be just as non-responsive.

And threads don't get dumped to the disk. There may be some pages in a certain process/thread's working set that are pushed to disk because they're modified, but the process itself, shared libraries, etc. won't be, because they're already backed by the files they were originally read in from.

If an application or driver opens many files and actively reads from them continuously through the cache manager, then the memory manager will move more physical pages to the cache manager. If this demand continues to grow, the cache manager can grow to consume physical memory and other processes (with less memory demand) will get paged out to disk. The system cache's working set limit can and typically does exceed how much memory is installed in the system," explained the member of the Microsoft NTDebugging team.

Again if you've got some errant process reading gigs and gigs of data of course it's going to cause I/O contention. Even if the OS wasn't trying to cache the reads that process would still monopolize the drive by just constantly reading data. And again, whole processes don't get paged out.

But, to make use of cache, the CPU has to add a step where it has to 'look up' the address stored in the cache first to see if an address' data is _already_ in the cache before fetching it from RAM. This 'lookup' takes more time than direct access to RAM and can have an adverse effect on speed if the cache is too large for the processor or is mismanaged.

You've got the process correct but a lookup in the TLB is definitely not slower than reading from memory. If that were the case OSes wouldn't use the TLB anymore. In fact one of the things I've gleaned from reading lkml is that TLB flushes are to be avoided like the plague for performance reasons on x86 (not sure about AMD64) because lookups going directly to the pagetables in memory take so much longer.
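Nothinman's TLB point can be sketched as a small cache sitting in front of the page tables. This is a toy Python model, nothing like a real MMU: the page table contents and sizes are made up, but it shows why a flush is costly: every translation after it must repeat the slow page-table walk.

```python
# Toy model: a TLB is a small cache of virtual-page -> physical-frame
# mappings. A TLB hit skips the (much slower) walk through in-memory
# page tables; a flush empties it, forcing walks until it refills.

PAGE_SIZE = 4096

page_table = {0: 7, 1: 3, 2: 9}   # hypothetical vpn -> physical frame
tlb = {}                          # the small, fast lookaside cache
walks = 0                         # count of slow page-table walks

def translate(vaddr):
    """Translate a virtual address, caching the mapping in the TLB."""
    global walks
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in tlb:            # TLB miss: do the slow walk
        walks += 1
        tlb[vpn] = page_table[vpn]
    return tlb[vpn] * PAGE_SIZE + offset

translate(0x0010)   # miss: one page-table walk
translate(0x0020)   # hit: same page, no walk
tlb.clear()         # a TLB flush...
translate(0x0030)   # ...forces another walk for the very same page
print(walks)        # prints 2
```

Two walks for three translations, purely because of the flush in the middle; that is the cost lkml discussions are trying to avoid.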

That sounds like a severe bug. The whole point of RAM-based caching is it's faster than disk. But if it grows so much that some of it gets paged out to disk, then it's not an advantage any more. You would think Microsoft would be more careful than this.

Actually, since the pagefile is an indexed entity with its extents close together or contiguous, it would probably end up being faster most of the time to get data from there than to look it up in the MFT and then go find it in the filesystem.
 