Performance-oriented Windows tweaking


Velk

Senior member
Jul 29, 2004
734
0
0
The advice on pagefile settings and location is solid, it's just that you aren't testing what it improves.

If everything is paged out except what you are running, and what you are running fits in available memory, then there will be no difference between any of them, which is more or less what you saw. The difference comes when you try to pull stuff back off disk while using it.

An example of this would be running a game that consumes all available memory, then tabbing to a fully paged background process and seeing how long it takes to become usable.

Some parts are even more subtle - such as the automatic resizing of the pagefile, which can lead to the program accessing the disk to be loaded, Windows accessing the pagefile to swap out programs to make space for the program loading, and Windows accessing the disk to increase the pagefile size, all at the same time. I am sure many people here have experienced the staggering disk grind that occurs when this happens.

Perhaps a better test would be to open up your various applications to max out physical RAM, then start your game and see how long it actually takes to start and produce the first benchmark result.
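A minimal sketch of that kind of test, in Python. The buffer size and stride here are arbitrary assumptions; a real test would first fill RAM with other workloads so the buffer actually gets paged out before the timed touch:

```python
import time

def time_page_touch(buf, stride=4096):
    """Touch one byte per page of `buf` and return elapsed seconds.

    If the pages have been paged out, each touch is a hard fault
    served from disk; if they are resident, this runs at RAM speed.
    """
    start = time.perf_counter()
    checksum = 0
    for i in range(0, len(buf), stride):
        checksum ^= buf[i]          # force the page into RAM
    return time.perf_counter() - start

buf = bytearray(64 * 1024 * 1024)   # 64 MB stand-in for a paged-out app
warm = time_page_touch(buf)         # pages are resident: fast
print(f"warm touch of 64 MB: {warm:.4f}s")
```

Running the same touch after heavy memory pressure, and comparing the two times, is exactly the "how long until it's usable again" measurement described above.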
 

Fresh Daemon

Senior member
Mar 16, 2005
493
0
0
The advice on pagefile settings and location is solid, it's just that you aren't testing what it improves.

It isn't solid at all. If you think about the theory behind it, you will see this.

DDR-SDRAM has a typical access time of 2.5 nanoseconds. A hard drive has a typical access time of 9 milliseconds. That's 0.0000000025 seconds for RAM and 0.009 seconds for the hard drive. The hard drive is 3.6 million times slower than the RAM!

So how is a very small increase in hard drive performance going to have any noticeable impact on this incredible performance gulf? Moreover, if this huge performance gulf results in a quite small change in real-world performance, how are you going to notice a tiny change in it?
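The arithmetic in the post checks out; a one-liner to verify the quoted ratio:

```python
# Figures taken from the post above
RAM_ACCESS_S = 2.5e-9    # DDR-SDRAM access time: 2.5 ns
DISK_ACCESS_S = 0.009    # typical hard drive access time: 9 ms

ratio = DISK_ACCESS_S / RAM_ACCESS_S
print(f"disk is {ratio:,.0f}x slower than RAM")   # 3,600,000x
```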

the automatic resizing of the pagefile, which can lead to the program accessing the disk to be loaded, Windows accessing the pagefile to swap out programs to make space for the program loading, and Windows accessing the disk to increase the pagefile size, all at the same time. I am sure many people here have experienced the staggering disk grind that occurs when this happens.

Yes, back in Windows 3.1 days when 1MB of RAM was common and virtual memory was a primitive science!

If you reach this point where overuse of the pagefile is causing disk grinding and stuttering performance, let me say this very clearly: No amount of pagefile tweaking is going to alleviate this situation. Performance will be terrible. Maybe it can be very slightly less terrible, but very slightly less than terrible is still pretty terrible!

The only thing that helps is more RAM. Either run more lightweight programs, alter your computing habits, or buy some more.

Tweaking the pagefile to "help" this situation is like getting shot in the chest instead of the head. You may find some improvement, but it will be overshadowed by the catastrophic results of the situation as a whole anyway.

That's assuming there's some improvement to be had. I've yet to see a benchmark where someone has proven that pagefile tweaking does actually produce a measurable performance increase.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
An example of this would be running a game that consumes all available memory, then tabbing to a fully paged background process and seeing how long it takes to become usable.

Unless that app is something like Photoshop with a large image open, all of the paging required to get it back into memory will happen from whichever disk Photoshop is installed on and the disk the OS is on. The only pages that get put in the pagefile are process private data and modified data that has no other backing on disk. Windows is smart enough not to put the Photoshop binary and all of its support libraries in the pagefile.

Some parts are even more subtle - such as the automatic resizing of the pagefile, which can lead to the program accessing the disk to be loaded, Windows accessing the pagefile to swap out programs to make space for the program loading, and Windows accessing the disk to increase the pagefile size, all at the same time. I am sure many people here have experienced the staggering disk grind that occurs when this happens.

Windows only grows the pagefile in extreme circumstances; if you made the pagefile small enough that it happens a lot, that's your own fault.
 

Rilex

Senior member
Sep 18, 2005
447
0
0
Unless that app is something like Photoshop with a large image open, all of the paging required to get it back into memory will happen from whichever disk Photoshop is installed on and the disk the OS is on.

Photoshop doesn't use the page file, it uses its own backing store (a "scratch disk") for images themselves. According to Chris Cox, the way the VMM manages the page file isn't appropriate for PS.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Photoshop doesn't use the page file, it uses its own backing store (a "scratch disk") for images themselves. According to Chris Cox, the way the VMM manages the page file isn't appropriate for PS.

PS was just one example, albeit a poor one. I forgot about the scratch disk crap, but assuming they use their 'scratch files' in a way that NT can tell is the backing store for the data, it'll be paged in from there instead of the pagefile, so tweaking the pagefile will do even less.

But for other, normal, apps the original comment still applies.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Nothinman
Photoshop doesn't use the page file, it uses its own backing store (a "scratch disk") for images themselves. According to Chris Cox, the way the VMM manages the page file isn't appropriate for PS.

PS was just one example, albeit a poor one. I forgot about the scratch disk crap, but assuming they use their 'scratch files' in a way that NT can tell is the backing store for the data, it'll be paged in from there instead of the pagefile, so tweaking the pagefile will do even less.

But for other, normal, apps the original comment still applies.

Photoshop's 'scratch files' are probably a legacy effect of it originally being developed for Mac OS, which didn't have crap for memory management.
 

Rilex

Senior member
Sep 18, 2005
447
0
0
Photoshop's 'scratch files' are probably a legacy effect of it originally being developed for Mac OS, which didn't have crap for memory management.

Nah, they were even relevant on OS X and NT/2000/XP. I forget the exact details, but it boiled down to FIFO vs LIFO etc.
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
Originally posted by: Rilex
Photoshop doesn't use the page file, it uses its own backing store (a "scratch disk") for images themselves.

It will allocate as much virtual memory as you specify under Edit->Preferences->Memory (or wherever that setting resides). Granted, the slider is limited to available physical memory, but once it has allocated that memory, I bet it won't de-allocate it any time soon if another application's memory usage suddenly surges. (It might try, but as someone already mentioned, it's all in how you design your benchmark.)

That said, the "no-pagefile" groupies could have one point going for them. If you minimize an application, Windows will reduce the app's working set. I.e. given any activity from other processes, that app will have most of its data paged out. Even though you have plenty of available memory, the OS will try to increase its system cache, thus further increasing the chance that the app in question will experience page faults when re-activated. The trade-offs (no memory dump in case of BSODs, reduced memory available for system cache, etc.) make life without a pagefile an interesting one (not good).

Finally, Performance Monitor is our friend; it will show how many pages are fetched from the disks -- if anyone was curious about that.
 

PhyuckHughe

Banned
Sep 23, 2005
4
0
0
Originally posted by: Fresh Daemon
It isn't solid at all. If you think about the theory behind it, you will see this.

DDR-SDRAM has a typical access time of 2.5 nanoseconds. A hard drive has a typical access time of 9 milliseconds. That's 0.0000000025 seconds for RAM and 0.009 seconds for the hard drive. The hard drive is 3.6 million times slower than the RAM!

So how is a very small increase in hard drive performance going to have any noticeable impact on this incredible performance gulf? Moreover, if this huge performance gulf results in a quite small change in real-world performance, how are you going to notice a tiny change in it?
Wow, that is so freaking retarded.

First of all, you ignored how he mentioned your testing failed. You didn't thoroughly test it in the right way. It's easy to see this.

And since you say that the hard drive is millions of times slower, wouldn't that be considered a bottleneck? Well, duh, doofus, making the part faster where the bottleneck is, is usually the best thing to do for performance. If you have to wait for the hard drive, because it's so much slower, then any increase in HD performance would just lessen the time you would have to wait for the hard drive.

Why are you still trying to act as if you know what you're talking about? The only thing I can think of for why you would have such a huge mental fallacy is if you are saying that this wouldn't make the functions of the program work faster, but that isn't even the point that's trying to be made. This is exactly why your tests fail. We're talking REAL WORLD performance. Switching between programs, or sub-parts to programs, is also part of that.
Yes, back in Windows 3.1 days when 1MB of RAM was common and virtual memory was a primitive science!

If you reach this point where overuse of the pagefile is causing disk grinding and stuttering performance, let me say this very clearly: No amount of pagefile tweaking is going to alleviate this situation. Performance will be terrible. Maybe it can be very slightly less terrible, but very slightly less than terrible is still pretty terrible!
Jesus, even more blatant stupidity. Do you even understand how hard drives or paging files work? It sure doesn't look like it.

Like mentioned already by several people (even including "BV"), having the paging file on the system disk (or partition, even) is the worst option. Considering how it's the disk that is most used, it's clearly self-evident that having the pagefile there would be the worst place. And also since it's the one that is most used, the pagefile would also be more likely to become fragmented.

Do you know anything about Linux? Well from what I've seen, the default option is always to put the swapfile on its own partition. Separating it like this, even though it may still be on the same disk, already is helping performance. Windows doesn't do this by default because it's not the simplest solution (from the end user's perspective).
The only thing that helps is more RAM. Either run more lightweight programs, alter your computing habits, or buy some more.
Haha, again "just upgrade to get better performance, instead of getting the most out of what you have." Don't you see how you're sounding? The only solution you are giving is to spend more and upgrade. That's absolutely the worst kind of advice, considering what the topic is about. Upgrading isn't "performance tweaking".

And you say that there's basically nothing to do to help this, but if you have a 3 drive RAID 0 array with the pagefile, it's obvious that the hit of using so-called "virtual memory" would be greatly reduced.
Tweaking the pagefile to "help" this situation is like getting shot in the chest instead of the head. You may find some improvement, but it will be overshadowed by the catastrophic results of the situation as a whole anyway.
That just makes no sense. Since you're such an advocate of using a paging file, it naturally would make sense to adjust its settings for it to get better performance. If you're acting like it's so much damn slower, then wouldn't the best thing be disabling the paging file? Then that would eliminate what you say is several "million times slower".
That's assuming there's some improvement to be had. I've yet to see a benchmark where someone has proven that pagefile tweaking does actually produce a measurable performance increase.
Again, this falls back on your bad testing. The tests should be done in an environment where the tested thing is being used, not some arbitrary time of your choosing. Testing Quake 3 or some other nonsense surely isn't going to test the paging file and how it's related to real world performance.
 

gsellis

Diamond Member
Dec 4, 2003
6,061
0
0
Originally posted by: PhyuckHughe
Originally posted by: Fresh Daemon
It isn't solid at all. If you think about the theory behind it, you will see this.

DDR-SDRAM has a typical access time of 2.5 nanoseconds. A hard drive has a typical access time of 9 milliseconds. That's 0.0000000025 seconds for RAM and 0.009 seconds for the hard drive. The hard drive is 3.6 million times slower than the RAM!

So how is a very small increase in hard drive performance going to have any noticeable impact on this incredible performance gulf? Moreover, if this huge performance gulf results in a quite small change in real-world performance, how are you going to notice a tiny change in it?
Wow, that is so freaking retarded.

First of all, you ignored how he mentioned your testing failed. You didn't thoroughly test it in the right way. It's easy to see this.

And since you say that the hard drive is millions of times slower, wouldn't that be considered a bottleneck? Well, duh, doofus, making the part faster where the bottleneck is, is usually the best thing to do for performance. If you have to wait for the hard drive, because it's so much slower, then any increase in HD performance would just lessen the time you would have to wait for the hard drive.

Why are you still trying to act as if you know what you're talking about? The only thing I can think of for why you would have such a huge mental fallacy is if you are saying that this wouldn't make the functions of the program work faster, but that isn't even the point that's trying to be made. This is exactly why your tests fail. We're talking REAL WORLD performance. Switching between programs, or sub-parts to programs, is also part of that.
Yes, back in Windows 3.1 days when 1MB of RAM was common and virtual memory was a primitive science!

If you reach this point where overuse of the pagefile is causing disk grinding and stuttering performance, let me say this very clearly: No amount of pagefile tweaking is going to alleviate this situation. Performance will be terrible. Maybe it can be very slightly less terrible, but very slightly less than terrible is still pretty terrible!
Jesus, even more blatant stupidity. Do you even understand how hard drives or paging files work? It sure doesn't look like it.

Like mentioned already by several people (even including "BV"), having the paging file on the system disk (or partition, even) is the worst option. Considering how it's the disk that is most used, it's clearly self-evident that having the pagefile there would be the worst place. And also since it's the one that is most used, the pagefile would also be more likely to become fragmented.

Do you know anything about Linux? Well from what I've seen, the default option is always to put the swapfile on its own partition. Separating it like this, even though it may still be on the same disk, already is helping performance. Windows doesn't do this by default because it's not the simplest solution (from the end user's perspective).
The only thing that helps is more RAM. Either run more lightweight programs, alter your computing habits, or buy some more.
Haha, again "just upgrade to get better performance, instead of getting the most out of what you have." Don't you see how you're sounding? The only solution you are giving is to spend more and upgrade. That's absolutely the worst kind of advice, considering what the topic is about. Upgrading isn't "performance tweaking".

And you say that there's basically nothing to do to help this, but if you have a 3 drive RAID 0 array with the pagefile, it's obvious that the hit of using so-called "virtual memory" would be greatly reduced.
Tweaking the pagefile to "help" this situation is like getting shot in the chest instead of the head. You may find some improvement, but it will be overshadowed by the catastrophic results of the situation as a whole anyway.
That just makes no sense. Since you're such an advocate of using a paging file, it naturally would make sense to adjust its settings for it to get better performance. If you're acting like it's so much damn slower, then wouldn't the best thing be disabling the paging file? Then that would eliminate what you say is several "million times slower".
That's assuming there's some improvement to be had. I've yet to see a benchmark where someone has proven that pagefile tweaking does actually produce a measurable performance increase.
Again, this falls back on your bad testing. The tests should be done in an environment where the tested thing is being used, not some arbitrary time of your choosing. Testing Quake 3 or some other nonsense surely isn't going to test the paging file and how it's related to real world performance.

And before you were banned, you were who?

No, MS handles the page file better than others. He suggests leaving it alone.

BTW, why should we take your advice? Do you know what the system drive is?
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
Originally posted by: PhyuckHughe
having the paging file on the system disk (or partition, even) is the worst option. Considering how it's the disk that is most used, it's clearly self-evident that having the pagefile there would be the worst place. And also since it's the one that is most used, the pagefile would also be more likely to become fragmented.

If the pagefile is set to a fixed size -- how exactly do you manage to fragment it? (beyond what fragmentation existed at the time of creation -- usually none if created on a freshly installed system)

The majority of systems out there have only a single drive. Where I work we now install two drives, but the second is just a mirror of the first... Servers usually have more drives, but they should be tuned to avoid as much paging as possible (i.e. an abundance of physical memory). So in 90% of the cases, what you speak of (separating the system drive from the pagefile drive) is academic at best.

As the user switches to an application with a reduced working set, Windows will have to page in code from the memory-mapped executable (and dlls), plus page data back from the pagefile (worst case scenario). Obviously putting the pagefile on a different physical drive will help, but I have never bothered to benchmark this -- have you?

For servers, the choices are limited. I just ordered two 1U rackmounted servers that can hold four hotswap drives each. I'm going to install four SCSI drives in a RAID-5 configuration. I.e. there will be no separate drive for the system files, nor will there be a separate drive for the pagefile. It is just not practical -- even if someone promised me a 2X performance improvement, there's just no physical room left.

Time is money, spending $200 on memory is way cheaper than spending an hour tuning Windows' pagefile.
 

PhyuckHughe

Banned
Sep 23, 2005
4
0
0
Originally posted by: BikeDude
If the pagefile is set to a fixed size -- how exactly do you manage to fragment it? (beyond what fragmentation existed at the time of creation -- usually none if created on a freshly installed system)
That I'm not too sure of for all cases, but it happens. Maybe not often, but it does. And I wouldn't rest too easy in the simple fact that it's a newly installed system; even a formatted drive gets lots of fragments as soon as you put multiple files on it or install an OS.

The majority of systems out there have only a single drive. Where I work we now install two drives, but the second is just a mirror of the first... Servers usually have more drives, but they should be tuned to avoid as much paging as possible (i.e. an abundance of physical memory). So in 90% of the cases, what you speak of (seperating system drive from the pagefile drive) is academic at best.
That's pure conjecture. And after reading this paragraph, you don't even give a good reason for that conclusion. Who cares about your computers or what the average is? The point is that there are some options that are better than others. If you have the extra option of using a second drive, then it's the best thing to use. If you don't, then you still have a couple options to choose from. I don't get how there's anything to question here.

As the user switches to an application with a reduced working set, Windows will have to page in code from the memory-mapped executable (and dlls), plus page data back from the pagefile (worst case scenario). Obviously putting the pagefile on a different physical drive will help, but I have never bothered to benchmark this -- have you?
Why even question this? There's absolutely no point in saying that. It's obvious you would get better performance with a second drive; I don't need to waste my time performing a benchmark right now, and it's well documented.
For servers, the choices are limited. I just ordered two 1U rackmounted servers that can hold four hotswap drives each. I'm going to install four SCSI drives in a RAID-5 configuration. I.e. there will be no seperate drive for the system files, nor will there be a seperate drive for the pagefile. It is just not practical -- even if someone promised me a 2X performance improvement, there's just no physical room left.
Uhh, don't you see how you're choosing how to use them? It's not like the PC Gods have fated that you must use those 4 drives, and only 4 drives, in that manner.

And again, who cares, and why does it matter? Really, your anecdotes here aren't contributing anything.
Time is money, spending $200 on memory is way cheaper than spending an hour tuning Windows' pagefile.
Wow, more conjecture. Really, this is just a stupid thing to say. And if you don't care, then that's fine. Go ahead and waste performance and/or money. Although, setting things up properly will last its lifetime, and will provide you with the experience to save time and performance later.
 

PhyuckHughe

Banned
Sep 23, 2005
4
0
0
Originally posted by: Nothinman
An example of this would be running a game that consumes all available memory, then tabbing to a fully paged background process and seeing how long it takes to become usable.
Unless that app is something like Photoshop with a large image open, all of the paging required to get it back into memory will happen from whichever disk Photoshop is installed on and the disk the OS is on.
Why do you say this? Any "paging" that occurs will only happen where the paging file is set. There's no other way of it happening. It doesn't matter where the program or OS is installed.
The only pages that get put in the pagefile are process private data and modified data that has no other backing on disk. Windows is smart enough not to put the photoshop binary and all of it's support libraries in the pagefile.
Even if you're right, so what? It doesn't matter for the topic. The point is to test how the system would perform under heavy paging, it doesn't matter how or what is happening.

Originally posted by: n0cmonkey
Originally posted by: Tsosczb
You can't even read with comprehension, there would be no point.

In fact, I guess this post was useless.
I read just fine. You said you turned off Virtual Memory, provided pictures of your settings which did not illustrate your point, claimed Microsoft calls it VM so it must be true, and act like a total bitch. Whatever kid, come back when you grow a pube.
You don't read with comprehension because you fail to see the context, you fucking dumb awesome people.

The pictures were to point out the context, not to prove that the line (taken out of context) was true.

Please only talk again when you learn to not be a total fucking moron.
 

PhyuckHughe

Banned
Sep 23, 2005
4
0
0
Originally posted by: gsellis
And before you were banned, you were who?

No, MS handles the page file better than others. He suggests leaving it alone.

BTW, why should we take your advice? Do you know what the system drive is?
Advice? Why does it have to be advice? First of all, if you think about it, it should just make sense. And it's a matter of being right or wrong, not advice. If you're looking for advice, maybe you should go read a book.

The rest of the stuff you said didn't even make sense, so I'll let it be.




Gone...

Mod






 

KoolDrew

Lifer
Jun 30, 2004
10,226
7
81
Any "paging" that occurs will only happen where the paging file is set.

Do you honestly think the pagefile is the only file involved with paging? NT requires everything in memory to have a backing store on disk so that it can free up memory. Most things are paged back into their original respective files, but any data that has been altered needs to be paged to the pagefile. So dlls and exes are also involved with paging, thus by disabling the pagefile you are still paging. If you were to really disable paging you would lose both per-process protected address spaces and protection of kernel mode pages from user mode access.
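The backing-store point above can be seen directly: a file-backed memory mapping pages back to its own file, not to the pagefile. A small sketch using Python's `mmap` (the temp file stands in for an executable or data file that supplies its own backing):

```python
import mmap
import os
import tempfile

# A file-backed mapping supplies its own backing store: the OS can
# drop clean pages and re-read them from the file, and dirty pages
# are written back to the file itself - the pagefile is not involved.
fd, path = tempfile.mkstemp()
os.write(fd, b"\0" * mmap.PAGESIZE)

m = mmap.mmap(fd, mmap.PAGESIZE)    # shared, file-backed mapping
m[:5] = b"hello"                    # dirty one page of the mapping
m.flush()                           # dirty page goes back to the file
m.close()
os.close(fd)

with open(path, "rb") as f:
    contents = f.read(5)
os.remove(path)
print(contents)                     # the file, not the pagefile, holds it
```

Anonymous (private) memory has no such file behind it, which is exactly why it is the part that ends up in the pagefile.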

Do you know anything about Linux? Well from what I've seen, the default option is always to put the swapfile on its own partition. Separating it like this, even though it may still be on the same disk, already is helping performance. Windows doesn't do this by default because it's not the simplest solution (from the end user's perspective).

Actually it would degrade performance. This is because it would increase your average seeking distance. It is best to have the pagefile on the most-used partition and the least-used drive. If you have only one drive, the pagefile should be on the same partition as your OS along with your applications. Then the pagefile's initial size should be set large enough that it does not become fragmented.

Also, fragmentation of the pagefile doesn't really hurt performance in any tangible way. It would take a very extreme degree of fragmentation to hurt performance, because Windows never reads or writes more than 64KB per buffer to the pagefile, and the buffers are almost never in sequential 64KB chunks. So after reading or writing one such buffer it will have to move the heads, regardless of whether the pagefile is fragmented or not. These pagefile IOs will also be interspersed with IOs to many other files. In between all such IOs the heads will have to move anyway. This is also why fragmentation of individual files matters far less than many want you to believe.
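The seek-dominance argument is easy to put numbers on. With hypothetical drive figures (assumed for illustration, not measured): a 9 ms average access time and 50 MB/s sustained transfer:

```python
# Assumed drive figures - illustrative only, not measurements
SEEK_S = 0.009                 # average seek + rotational latency
TRANSFER_BPS = 50 * 10**6      # sustained transfer rate, bytes/s
IO_BYTES = 64 * 1024           # max pagefile I/O per buffer (per the post)

transfer_s = IO_BYTES / TRANSFER_BPS           # time to move the data
seek_share = SEEK_S / (SEEK_S + transfer_s)    # fraction spent seeking

print(f"moving 64KB takes {transfer_s * 1000:.2f} ms")     # ~1.31 ms
print(f"seeking is {seek_share:.0%} of each pagefile IO")  # ~87%
```

With seeks eating the large majority of every pagefile I/O anyway, defragmenting the pagefile can only shave the small remainder, which is the point being made.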

And you say that there's basically nothing to do to help this, but if you have a 3 drive RAID 0 array with the pagefile, it's obvious that the hit of using so-called "virtual memory" would be greatly reduced.

Actually, RAID-0 would be bad for the pagefile. This is because each disk has to seek to its portion of the data, thus increasing the average seek time.

Also, Virtual Memory and the pagefile (which you were probably referring to) are two totally different things. VM is the virtualization of memory addresses. On a 32-bit system each process sees its own 4GB set of memory addresses. 2GB is reserved for the NT kernel, while the other 2GB is used by the process. The pagefile is just a backing store for some data in memory.

If you're acting like it's so much damn slower, then wouldn't the best thing be disabling the paging file? Then that would eliminate what you say is several "million times slower".

No, it wouldn't because you would still be paging to disk. All you would be doing is forcing all "private" virtual memory to stay in RAM and only allowing code and mapped files to be paged. Even if some of the "private" stuff has not been touched for hours and will not again, it will have to stay in RAM. This will cause more paging of code, for a given workload and RAM size.

The bottom line is, you should never disable the pagefile.

That I'm not too sure of for all cases, but it happens. Maybe not often, but it does. And I wouldn't rest too easy in the simple fact that it's a newly installed system; even a formatted drive gets lots of fragments as soon as you put multiple files on it or install an OS.

The pagefile cannot become fragmented if it is a fixed size. This does not make it a good idea, though. You should have the pagefile set with a high enough initial size to prevent fragmentation and a max even higher, so you still have that "safety net."
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Why do you say this? Any "paging" that occurs will only happen where the paging file is set. There's no other way of it happening. It doesn't matter where the program or OS is installed.

If you believe that, you really don't understand how VM works.

Even if you're right, so what? It doesn't matter for the topic. The point is to test how the system would perform under heavy paging, it doesn't matter how or what is happening.

Yes, it does matter how and why; otherwise you might as well just benchmark the disk and memory and call those your low and high points. Hell, heavy paging doesn't even mean there's any disk access involved at all; on this machine I have 0 data in swap but I've averaged 135 pagefaults/s.
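That soft-vs-hard fault distinction is visible from a few lines of Python on a Unix box (the stdlib `resource` module is Unix-only; Windows' Performance Monitor counters are the equivalent). Minor faults are satisfied from RAM with no disk access at all; major faults are the ones that hit disk:

```python
import resource

def fault_counts():
    """Return (soft_faults, hard_faults) for this process so far."""
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_minflt, ru.ru_majflt

soft0, hard0 = fault_counts()
scratch = bytearray(8 * 1024 * 1024)   # fresh pages: soft faults only
soft1, hard1 = fault_counts()

print(f"soft faults taken: {soft1 - soft0}")   # typically > 0: pages wired in
print(f"hard faults taken: {hard1 - hard0}")   # typically 0: no disk touched
```

A machine can rack up thousands of page faults per second, as quoted above, without the pagefile ever being read.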

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: PhyuckHughe
Originally posted by: n0cmonkey
Originally posted by: Tsosczb
You can't even read with comprehension, there would be no point.

In fact, I guess this post was useless.
I read just fine. You said you turned off Virtual Memory, provided pictures of your settings which did not illustrate your point, claimed Microsoft calls it VM so it must be true, and act like a total bitch. Whatever kid, come back when you grow a pube.
You don't read with comprehesion because you fail to see the context, you (Word the French detest) dumb (Word that only blacks and men that secretly love blacks (behind them, thrusting violently) use).

The pictures were to point out the context, not to prove that the line (taken out of context) was true.

Please only talk again when you learn to not be a total (Word the French don't like) moron.

Tsk, tsk. Do you need to show your roots so well? Lighten up. Show me proof you turned off Virtual Memory and not the pagefile. The screenshot showed that you had turned off the pagefile in the Virtual Memory section. So, you were wrong. It's ok, we all make mistakes. :milk;

Please, seriously, provide a good test to prove your points. You can provide the results, or we can do it. It's not a big deal to me. A lot of what you say makes sense, in theory, but doesn't seem to work out in practice. :light:

It would also help if you calmed down and didn't act like the trashy white drunk guy sleeping on the bench outside of a trailer park in the middle of Georgia.
 

Rilex

Senior member
Sep 18, 2005
447
0
0
Do you know anything about Linux? Well from what I've seen, the default option is always to put the swapfile on its own partition.

That is because Linux's VMM is not the same as Windows' VMM. Linux will tend to use nearly all available physical memory before touching the swap partition. Windows does not take this approach (and yes, they both have their trade-offs).

---------
Also, Virtual Memory and the pagefile (which you were probably referring to) are two totally different things. VM is the virtualization of memory addresses.

Let's not confuse Virtual Memory with Virtual Address Space. What you refer to below is VAS.

On a 32-bit system each process sees its own 4GB set of memory addresses. 2GB is reserved for the NT kernel, while the other 2GB is used by the process.

The VMM handles the memory allocation for each process by taking that VAS and mapping it appropriately to physical memory. It also is responsible for paging contents to disk when physical memory is overcommitted as well as bringing pages back from the page file into physical memory.

The term "Virtual Memory" typically means all memory virtualized. So the page file is just a portion of "VM" and what the VMM implements.
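That VAS-to-physical mapping can be sketched with a toy page table. This is a purely illustrative Python model (all class and method names are invented for the example), not how the NT VMM is actually implemented:

```python
PAGE_SIZE = 4096  # 4 KiB, the common x86 page size

class ToyVMM:
    """Toy model of virtual-to-physical translation with paging.

    Each process sees a flat virtual address space; the 'VMM'
    maps virtual page numbers to physical frames, and a page
    that has been evicted to the 'pagefile' triggers a hard
    fault on access and gets brought back in.
    """
    def __init__(self):
        self.page_table = {}   # virtual page number -> physical frame
        self.pagefile = {}     # virtual page number -> frame saved to 'disk'
        self.next_frame = 0
        self.hard_faults = 0

    def touch(self, vaddr):
        """Translate a virtual address, faulting the page in as needed."""
        vpn = vaddr // PAGE_SIZE
        if vpn not in self.page_table:
            if vpn in self.pagefile:          # hard fault: read back from 'disk'
                self.hard_faults += 1
                self.page_table[vpn] = self.pagefile.pop(vpn)
            else:                             # first touch: allocate a fresh frame
                self.page_table[vpn] = self.next_frame
                self.next_frame += 1
        frame = self.page_table[vpn]
        return frame * PAGE_SIZE + vaddr % PAGE_SIZE

    def evict(self, vaddr):
        """Page a virtual page out to the pagefile."""
        vpn = vaddr // PAGE_SIZE
        if vpn in self.page_table:
            self.pagefile[vpn] = self.page_table.pop(vpn)

vmm = ToyVMM()
vmm.touch(0x1000)      # first access: mapped to a fresh frame
vmm.evict(0x1000)      # paged out to 'disk'
vmm.touch(0x1000)      # hard fault: brought back from the pagefile
print("hard faults:", vmm.hard_faults)  # prints: hard faults: 1
```

The pagefile dict here is the analogue of pagefile.sys: just one backing store the mapping layer can spill to, not "virtual memory" itself.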

And Microsoft only made the change to be able to turn off the page file in XP because of Windows Embedded where it can make complete sense to do so. Too bad they left this change in the standard client versions.

-------

Hell, heavy paging doesn't even mean there's any disk access involved at all; on this machine I have 0 data in swap but I've averaged 135 page faults/s.

Yep, it all depends on whether it was a soft fault (serviced within physical memory) or a hard fault (serviced from the paging file and/or backing store).
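On Unix-like systems that soft/hard split is directly visible via getrusage; a small Python sketch (the resource module is POSIX-only, so this won't run on Windows, where Task Manager's page fault counter lumps both kinds together):

```python
import resource

def fault_counts():
    """Return (soft, hard) page fault counts for this process.

    ru_minflt counts soft (minor) faults serviced from pages
    already in physical memory; ru_majflt counts hard (major)
    faults that required actual disk I/O.
    """
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_minflt, usage.ru_majflt

if __name__ == "__main__":
    # Touching fresh memory generates soft faults, no disk involved.
    _scratch = bytearray(8 * 1024 * 1024)  # 8 MiB of new pages
    soft, hard = fault_counts()
    print(f"soft faults: {soft}, hard faults: {hard}")
```

Running it on an unloaded box typically shows thousands of soft faults and few or no hard ones, which is exactly the "paging without disk access" point above.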

And for those who care, the term "Virtual Memory" is still used and abused in Longhorn Beta 2.
 

gsellis

Diamond Member
Dec 4, 2003
6,061
0
0
Originally posted by: n0cmonkey
It would also help if you calmed down and didn't act like the trashy white drunk guy sleeping on the bench outside of a trailer park in the middle of Georgia.
Hey! I live in Georgia...
 

gsellis

Diamond Member
Dec 4, 2003
6,061
0
0
Originally posted by: PhyuckHughe
Originally posted by: gsellis
And before you were banned, you were who?

No, MS handles the page file better than others. He suggests leaving it alone.

BTW, why should we take your advice? Do you know what the system drive is?
Advice? Why does it have to be advice? First of all, if you think about it, it should just make sense. And it's a matter of being right or wrong, not advice. If you're looking for advice, maybe you should go read a book.

The rest of the stuff you said didn't even make sense, so I'll let it be.
That is because you have no clue what you are talking about. You don't even know which is the system drive and which is the boot drive. And you think you can give advice or disagree with tuning?

Bye-bye. You will be banned for forum violations.
 

okb

Member
Mar 9, 2005
126
0
0
Did anyone else notice the No Troll Feeding sign on the way in? As amusing as this is, you have to admit it's pointless. Ignorant morons will never admit fault, and never take advice they consider inferior. That's why they're ignorant morons.
 

Fresh Daemon

Senior member
Mar 16, 2005
493
0
0
Since PhyuckHughe's time here will be very short I see no need to respond. In the meantime I have repaired the screenshot links. I don't seem to be able to find the shots from System II right now but they will follow later.
 