How much virtual memory am I using?


code65536

Golden Member
Mar 7, 2006
1,006
0
76
In a decade of not using a Pagefile, all the issues I ever encountered were due to a lack of RAM, which caused applications to crash if I tried to go over my RAM limit. I don't recall having encountered any special snowflake that refused to run without it enabled.

Exactly. I suspect that the people who say, "there are programs that require a page file" are either just repeating what they've read elsewhere (like how many people still say that the difference between the advertised size of a disk and the Windows-reported size is due to "formatting" instead of 10^3 vs. 2^10) or because they've encountered an error and just assumed that it was the lack of a page file and didn't verify that it wasn't the result of something else (like a genuinely high memory load).
 

zir_blazer

Golden Member
Jun 6, 2013
1,184
459
136
Exactly. I suspect that the people who say, "there are programs that require a page file" are either just repeating what they've read elsewhere (like how many people still say that the difference between the advertised size of a disk and the Windows-reported size is due to "formatting" instead of 10^3 vs. 2^10) or because they've encountered an error and just assumed that it was the lack of a page file and didn't verify that it wasn't the result of something else (like a genuinely high memory load).
Pretty much all the cases where enabling the Pagefile solved my issues were when I crossed my physical RAM boundary. For these types of experiments, I had the Task Manager open while I intentionally opened things like crazy until collapse. Going a bit over the max usually causes the current application to close and throw an error; more intensive collapses due to a memory-hog application could actually cause a system crash.

The part about programs requiring the Pagefile is not even theoretically possible. The whole idea of Virtual Memory is that no application knows where the memory it is using is actually located; that is supposed to be transparently managed by the OS. This means that a program can NOT behave differently if there is no Pagefile, simply because it shouldn't be aware of that little detail.


Some reading from here:


1. Basic Concepts
1.1. Virtual Memory

In most modern operating systems, including Windows, application programs and many system processes always reference memory using virtual memory addresses, which are automatically translated to real (RAM) addresses by the hardware. Only core parts of the operating system kernel bypass this address translation and use real memory addresses directly. The virtual memory management component of the operating system maintains the tables used by the hardware to provide the mapping of virtual addresses into real addresses. A small amount of RAM is used by those parts of the operating system that cannot be paged out (for example, the virtual memory manager) and is thus not available for assignment to virtual memory.

2. Additional Detail

2.1. Virtual Memory

There are several advantages to Virtual Memory implementations with per process virtual address spaces, some of which are:

Application writing is simplified. The application developer doesn’t have to worry about how much RAM is available or what addresses are available to it. The application writer (and compiler) knows that every application will get its own set of virtual addresses (called an Address Space) that it can use any way it wishes (except for the portion of the address space reserved by the operating system) without regard to what other applications are doing.
The system is more robust and secure because applications cannot overwrite or read memory belonging to another application – there is no addressability to another application's address space.
Management of RAM allocation to processes is centralized in a core part of the OS. The algorithms used by the Virtual Memory management part of the OS have a long history, are well understood, quite sophisticated and highly optimized. The knowledge and skills required to do this well are very specialized and not usually present in application developers whose focus is on the application/business requirements.
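
As a concrete illustration of the per-process address space described in the list above (a minimal sketch of my own, not from the quoted article): the Win32 VirtualQuery API lets a process walk its own virtual address space, and every process sees its own private copy of such a layout, with none of these addresses referring to another process's memory.

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Walk this process's virtual address space and print each region's
       state: free, reserved, or committed. */
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;

    while (VirtualQuery(addr, &mbi, sizeof(mbi)) != 0) {
        const char *state = (mbi.State == MEM_COMMIT)  ? "committed"
                          : (mbi.State == MEM_RESERVE) ? "reserved"
                                                       : "free";
        printf("%p  %12zu KB  %s\n",
               mbi.BaseAddress, mbi.RegionSize / 1024, state);

        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
        if (addr < (unsigned char *)mbi.BaseAddress)  /* wrapped past the end */
            break;
    }
    return 0;
}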

I disagree with some concepts like the min-max Pagefile size and some other things, but the theoretical concepts are as described.
 
Last edited:

thedosbox

Senior member
Oct 16, 2009
961
0
0
Before disabling the page file, read up on why you shouldn't:

http://arstechnica.com/civis/viewtopic.php?p=21013929#p21013929

In other words, leave it alone.

In a decade of not using a Pagefile, all the issues I ever encountered were due to a lack of RAM, which caused applications to crash if I tried to go over my RAM limit. I don't recall having encountered any special snowflake that refused to run without it enabled.

Well, you apparently don't game. DOW II required a page file of at least 1.5GB (though that check can now be bypassed). I remember older versions of Adobe Premiere would also crash and burn without a pagefile.
 
Last edited:

BonzaiDuck

Lifer
Jun 30, 2004
15,785
1,500
126
Exactly. I suspect that the people who say, "there are programs that require a page file" are either just repeating what they've read elsewhere . . .

zir_blazer said:
The part about programs requiring the Pagefile is not even theorically possible at all.

I distinctly remember saying something like that, perhaps because I was in a hurry to argue the usefulness of, or need for, VM.

Anyway, the relevant explanation has been posted and quoted, so . . .
 

code65536

Golden Member
Mar 7, 2006
1,006
0
76
Before disabling the page file, read up on why you shouldn't:

http://arstechnica.com/civis/viewtopic.php?p=21013929#p21013929

In other words, leave it alone.
I don't disagree with that post, nor does that post disagree with what I've said. I've acknowledged earlier that mapped file data can be evicted (though unlike regular paging, this sort of eviction does not require disk writes).

First, killing the page file does not eliminate paging. That was never in dispute.

Second, killing the page file saves disk space when you're booting from a small drive, 64GB or less (but so will reducing the page file to a minimal size to enable BSOD crashdumps).

Third, the scenario described in that post--where Windows resorts to evicting pages backed by mapped files--means that there is memory pressure. I think it goes without saying that the page file should never be disabled unless your physical RAM is sufficiently far above your maximum commit. The primary purpose of the page file is, after all, to supplement your physical RAM, and if your physical RAM relative to your usage is small enough that it requires this supplementing, you absolutely should not disable the page file. My points about the page file all assume scenarios in which this is not an issue.

The main objection in that Ars post (that Windows will evict file-mapped pages--and there's lots of it, like several GB worth after I load and play a large game) is applicable even when you have a token-sized page file (to enable BSOD dumps), which is exactly what I have on my 16GB machines because memory pressure leading to evictions is not something I worry about.
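
As a concrete illustration of that last point, here is a minimal sketch of my own (the mapped path is just an arbitrary example): pages of a file mapping are backed by the mapped file itself, not by pagefile.sys, so under memory pressure Windows can simply discard the clean pages and fault them back in from the file later, with no write to any paging file.

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Map an existing file read-only; its pages are file-backed. */
    HANDLE file = CreateFileW(L"C:\\Windows\\notepad.exe", GENERIC_READ,
                              FILE_SHARE_READ, NULL, OPEN_EXISTING,
                              FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingW(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) return 1;

    /* Touching the view faults pages in from the file on demand; evicting
       them later requires no disk write, because the file already holds
       their contents. */
    const unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (!view) return 1;

    printf("First two bytes: %02X %02X ('MZ' for a PE file)\n",
           view[0], view[1]);

    UnmapViewOfFile(view);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}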

Well, you apparently don't game. DOW II required a page file of at least 1.5GB (though that check can now be bypassed).
As I said in the other thread (different subforum here at Anandtech) about the myth of page files, "Yes, it's possible for a program to check the existence of pagefile.sys and then do something nonsensically stupid if that file doesn't exist."

It sounds like DoW2 does exactly that, since you can "bypass the check" with a simple command switch and the program works fine.

This is likely a case where the program should've tested how much commit your system can handle (i.e., total virtual memory available), but some incompetent programmer decided to check only for the disk-backed portion of that. It's not hard to imagine how this might have happened: in the default configurations of that era, the page file was initially sized the same as the physical RAM, so looking at the size of the page file and doubling it will, on most systems, give you your total system commit. That's a lot faster and easier than slogging through the MSDN documentation to figure out how to do it right (which would've also taken into account things like reserved memory and non-pageable memory to tell the process how much memory is actually available to it).
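
For reference, here is a minimal sketch (my own, not anything DoW2 actually does) of how a program could query the real numbers with the Win32 GlobalMemoryStatusEx API. Despite the member names, the "page file" fields report the system commit limit and the remaining commit (RAM plus page files), which is the figure that actually matters:

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms = { sizeof(ms) };  /* dwLength must be set first */
    if (!GlobalMemoryStatusEx(&ms)) return 1;

    /* ullTotalPageFile / ullAvailPageFile are the commit limit and the
       remaining commit (RAM + page files), not the page file size alone. */
    printf("Commit limit:     %llu MB\n", ms.ullTotalPageFile / (1024 * 1024));
    printf("Available commit: %llu MB\n", ms.ullAvailPageFile / (1024 * 1024));
    printf("Physical RAM:     %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
    return 0;
}

A program that compared its needs against ullAvailPageFile would behave identically whether the commit is backed by RAM, a page file, or both.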

Again, there is absolutely no technical basis for requiring a page file to exist or to be of a certain size, programmer incompetence notwithstanding.
 
Last edited:

zir_blazer

Golden Member
Jun 6, 2013
1,184
459
136
Before disabling the page file, read up on why you shouldn't:

http://arstechnica.com/civis/viewtopic.php?p=21013929#p21013929

In other words, leave it alone.
Do people still believe in the boogeyman? So many overclockers around, scared of a simple OS tweak. Makes me sad. Everyone should disable it and test how it works without it for some time; then you can draw your own conclusions.
The guy you linked also mentions that the system reacts faster if you leave things idle overnight. That's exactly what made me start to disable the Pagefile: it let me force Windows XP to NOT send things to the HD when it had tons of RAM available, which it otherwise did anyway. It was quite annoying to have the HD working like crazy before I had all my stuff responsive again. Windows is dumb; I know my own needs and priorities better than anyone else.
Besides, in the several articles Mark Russinovich wrote, not once did he state that disabling the Pagefile is cataclysmic - he said that for some use cases it could be faster, and I agree with him.

Some feel having no paging file results in better performance, but in general, having a paging file means Windows can write pages on the modified list (which represent pages that aren’t being accessed actively but have not been saved to disk) out to the paging file, thus making that memory available for more useful purposes (processes or file cache). So while there may be some workloads that perform better with no paging file, in general having one will mean more usable memory being available to the system (never mind that Windows won’t be able to write kernel crash dumps without a paging file sized large enough to hold them).


There is also a guy who seems to have been involved in Vista development who said the same:

3. Configuring a system with lots of RAM to run without pagefile may have either negative or positive perf impact depending on what the system is doing. The general recommendation in this case is to create a reasonably sized pagefile (for example, 4 GB) and increase it if the Paging file\% Usage counter gets close to 100%.

Note that this counter is completely different from what task manager calls "pagefile usage" (which is actually the system commit charge). Paging file\% Usage of 100% would mean that some unused pagefile-backed pages are sitting on the modified page list, unnecessarily taking up RAM. If pagefile was larger, those pages could have been written to disk, resulting in more RAM available for other purposes.
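
For anyone who wants to read that exact counter without opening Performance Monitor, here is a minimal sketch of my own using the Win32 PDH API (not from the quoted post):

Code:
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

#pragma comment(lib, "pdh.lib")

int main(void)
{
    /* \Paging File(_Total)\% Usage: how full the page file(s) are.
       This is NOT the same thing as the system commit charge. */
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) return 1;
    if (PdhAddEnglishCounterW(query, L"\\Paging File(_Total)\\% Usage",
                              0, &counter) != ERROR_SUCCESS) return 1;
    if (PdhCollectQueryData(query) != ERROR_SUCCESS) return 1;
    if (PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE,
                                    NULL, &value) != ERROR_SUCCESS) return 1;

    printf("Paging File %% Usage: %.1f%%\n", value.doubleValue);
    PdhCloseQuery(query);
    return 0;
}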

When you sit on 32 GB of RAM, you can waste a few GBs as a luxury so as to not run a Pagefile. It also has some very real pros: you don't bother the HD, which could possibly sleep for longer periods, and the same goes for an SSD, where you're reducing wear on storage. Indeed, it seems that I always had more RAM than Joe Average, and while everyone else wants to free some RAM, I want to fill it up.



Well, you apparently don't game. DOW II required a page file of at least 1.5GB (though that check can now be bypassed). I remember older versions of Adobe Premiere would also crash and burn without a pagefile.
I do game, most of the day actually. In a decade running without a Pagefile you do lots of other things, too. I never encountered issues that weren't directly caused by trying to fit more things than the RAM I had; I hit those issues quite often during 2004-2005, when I had just 640 MB of RAM, but after upgrading to 2 GB during 2006 I never had issues again.
If there is any sort of developer that FORCES an artificial check for the existence and size of the Pagefile regardless of actual physical RAM, the dev team should be lined up in front of a wall and executed by firing squad. It's a stupid and needless design decision made by people who don't consider that tomorrow's computers (and even today's computers, for high-end users) will have much more RAM than can usefully be filled, and it forces you to use a legacy feature from an old era. What's next, a game that refuses to run if you overclock your Core i7 beyond 4 GHz?
 
Last edited:

thedosbox

Senior member
Oct 16, 2009
961
0
0
Do people still believe in the boogeyman? So many overclockers around, scared of a simple OS tweak.

I provided evidence that the lack of a pagefile can cause problems, so your boogeyman is real, even if it doesn't affect you personally.

I always had more RAM than Joe Average, and while everyone else wants to free some RAM, I want to fill it up.

In other words, you and the OP paid over the odds for something you don't really need :awe:

It also has some very real pros: you don't bother the HD, which could possibly sleep for longer periods, and the same goes for an SSD, where you're reducing wear on storage.

When consumer-level SSDs have been shown to tolerate hundreds of terabytes of writes, that's not really a concern, especially as accesses to the pagefile are read-biased.

And if you're still using a hard drive, perhaps you ought to have considered saving the money on that unnecessarily large amount of RAM and putting it towards an SSD?

use a legacy feature from an old era.

I'm clearly not going to persuade you to reconsider your tweaking, but for anyone else interested, here's another version of the explanation - this time with pictures:

http://azius.com/blog/pagefile-yes1/

It sounds like DoW2 does exactly that, since you can "bypass the check" with a simple command switch and the program works fine.

Yep, it was clearly bad decision making on the part of the programmer. However, anyone wanting to play the game when it was released had to wait until news of the switch (or the patch to enable it) became available.

My point is that for most people, leaving the pagefile alone has no meaningful cost.
 
Last edited:

CakeMonster

Golden Member
Nov 22, 2012
1,428
535
136
We have the storage space that enthusiasts need, we have the storage performance enthusiasts need, you can get more RAM than you'll ever need for an acceptable price, and SSDs can easily endure all the writes you can throw at them.

I guess this causes people to desperately start hunting for every pseudo-scientific and meaningless tweak they can do, and then to defend it out of boredom. People cannot stop tinkering; it seems to be an obsession. They cannot leave any setting alone, especially not if they can't buy something to fix the imaginary problem in the first place. They need to DO something. I find the storage forum getting stranger and stranger these days...
 

Dufus

Senior member
Sep 20, 2010
675
119
101
Things have progressed since the days of using a 256MB graphics aperture. Here is a post I made on paging and VRAM usage: actual testing, not "read it somewhere".

http://forum.notebookreview.com/gaming-software-graphics-cards/752160-how-much-vram-enough.html

An example where the use of 4GB of DRAM is lost by using 2GB of VRAM with pagefile.sys disabled, because there is only as much paging space as there is DRAM.

And the use of pagefile.sys in this case has absolutely nothing to do with swapping out to disk.

I myself have run with pagefile.sys disabled for many years, but I would not categorically tell people they should or should not disable it. It all depends on the individual system and the person's requirements. The VRAM example was for people's awareness, nothing more.
 
Last edited:

code65536

Golden Member
Mar 7, 2006
1,006
0
76
My point is that for most people, leaving the pagefile alone has no meaningful cost.

Yes, I agree. In fact, my systems all have a token page file because I like for there to be crashdumps if the system ever BSoDs.

It's just that the old, "some programs need a page file to run" is a commonly-perpetuated myth (not unlike, "you see less space in Windows because of 'formatting'") that I like to dispel because its perpetuation is a peeve of mine. To set the record straight: No, you absolutely do not need a page file, and aside from crashdumps and the obvious lack of a backstop if you do run out of physical, there is absolutely no point in having a page file.

That doesn't mean that people should go around disabling their page files, because those are two pretty important functions!

An example where the use of 4GB of DRAM is lost by using 2GB of VRAM with pagefile.sys disabled, because there is only as much paging space as there is DRAM.

The same information is shown in Task Manager; you didn't need that little popup dialog with its ambiguous wording (I would've used "virtual" instead of "page" in that dialog for better clarity). The system has committed 15.8 out of 15.9 of total virtual memory, and that's what matters, because memory is dished out from the virtual memory, and when you get OOM errors, it's the commit vs. total virtual that matters. The commit is always greater than or equal to physical usage because, as noted earlier, not all commits are backed by physical RAM--this is especially true if you've mapped a gigantic file to memory.
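
For anyone who wants those figures programmatically rather than from Task Manager, here is a minimal sketch of my own using GetPerformanceInfo from psapi; CommitTotal versus CommitLimit is exactly the commit-versus-total-virtual comparison described above:

Code:
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

#pragma comment(lib, "psapi.lib")

int main(void)
{
    /* System-wide figures, reported in pages; PageSize converts to bytes. */
    PERFORMANCE_INFORMATION pi;
    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi))) return 1;

    double mb = (double)pi.PageSize / (1024.0 * 1024.0);
    printf("Commit charge:      %8.0f MB\n", pi.CommitTotal * mb);
    printf("Commit limit:       %8.0f MB\n", pi.CommitLimit * mb);  /* RAM + page files */
    printf("Physical total:     %8.0f MB\n", pi.PhysicalTotal * mb);
    printf("Physical available: %8.0f MB\n", pi.PhysicalAvailable * mb);
    return 0;
}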

So that 4GB of physical RAM that's "free" isn't actually 100% free. It's technically "reserved but currently unused". And yes, a large page file will let you effectively shift that reservation to the pagefile and let you make use of that reserved-but-currently-unused RAM (which is fine until whatever reserved that RAM wants to start using it).

In this case, there is no difference between disabling the page file outright and keeping a small token page file (well, okay, there is a small difference equal to the size of your small token page file). And this is a case where there is genuine memory pressure (after all, you have a 16GB commit), and again, people shouldn't be kicking out their page files if they have a large commit.
 
Last edited:

zir_blazer

Golden Member
Jun 6, 2013
1,184
459
136
I provided evidence that the lack of a pagefile can cause problems, so your boogeyman is real, even if it doesn't affect you personally.
Try providing evidence of an application that genuinely needs the Pagefile, not an artificial issue caused by a brain-damaged programmer. Luckily, I have seen worse mistakes.


In other words, you and the OP paid over the odds for something you don't really need :awe:
Bold statement for someone who doesn't even know what I use my computer for. I'll give you a hint: it involves virtualization.


And if you're still using a hard drive, perhaps you ought to have considered saving the money on that unnecessarily large amount of RAM and putting it towards an SSD?
I certainly considered it; it was either 32 GB of RAM or a 120 GB SSD. I purchased the RAM knowing that DDR3 prices were skyrocketing and that the DDR3 standard was already mature, without much room left to grow (neither faster nor denser), so it was a priority to max out my slots ASAP. The SSD market, by contrast, is quite fresh and competitive; I knew I could sit and wait for ever-improving SSDs at ever cheaper prices per GB, and purchase a much better SSD later if I needed one. Guess what: one year later, I can say I made a good choice.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I'm clearly not going to persuade you to reconsider your tweaking, but for anyone else interested, here's another version of the explanation - this time with pictures:

http://azius.com/blog/pagefile-yes1/
And see, this is where it's simply not about facts. Seriously.
"Now a process requests 6 GB of virtual address space"
Any process doing that at once deserves to be told to GTFO of my RAM, because it is clearly broken. Usually, it would work, sadly, so I would not only be fine with it failing, but also with a clamp on single allocations that is less than my available memory. IMO, that is the correct behavior; not the OS going, "gee, I think I can commit all that if I need to, so here you go, you pig, you!"
 

KernelMode

Junior Member
Jun 4, 2001
7
0
0
Yes, asking for a single allocation of 6 GB committed would be extreme. (But it can happen. It's not for nothing that 8.1 expanded the per-process virtual space to 128 TiB.)

But the situation described exists anytime your collective workload tries to commit more than what's remaining of your commit limit (and the pagefile can't be expanded, perhaps because you don't have one). You get the "out of virtual memory" popup and at the end of it all your system still seems to have at least some unused commit.

The numbers were just set up to show a case where "some" would seem to be "really plenty".

Another way to get there is that a program gradually allocates committed memory... eventually hits the wall... it dies, the committed space is freed, and now your system seems to have plenty of commit available. Again, you don't see the allocations that caused the problem.

But none of that complexity is needed to justify the claim that not having a pagefile can cause out-of-virtual-memory errors, where you wouldn't have gotten them with the pagefile.
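
This is easy to reproduce with a minimal sketch of my own (a crude cousin of the testlimit tool, not its actual source): commit memory in blocks without ever touching it, and the failure arrives when the commit limit is exhausted, even while plenty of RAM still looks "free":

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Commit 100 MB blocks until the commit limit is hit. The memory is
       never touched, so physical RAM usage barely moves -- it's the commit
       charge, not RAM use, that hits the wall. With no pagefile, the limit
       is roughly physical RAM, so this fails much sooner. */
    const SIZE_T block = 100u * 1024 * 1024;
    SIZE_T total = 0;

    while (VirtualAlloc(NULL, block, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE))
        total += block;

    printf("Commit failed after ~%zu MB (GetLastError = %lu)\n",
           total / (1024 * 1024), GetLastError());
    return 0;  /* everything is released when the process exits */
}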
 

thedosbox

Senior member
Oct 16, 2009
961
0
0
Try providing evidence of an application that genuinely needs the Pagefile, not an artificial issue caused by a brain-damaged programmer.

Missing the point. Until the workaround was discovered (or the patch to enable it was pushed out), anyone without a page file would have been unable to run the game. Disk space is cheaper than time (at least for me).

Bold statement for someone who doesn't even know what I use my computer for. I'll give you a hint: it involves virtualization.

So running VMs was your justification for purchasing 32GB of RAM. Yet those VMs leave enough "wasted" memory that you need to fill it up? OK, you're clearly a special snowflake whose usage doesn't reflect most users.

And see, this is where it's simply not about facts. Seriously.
"Now a process requests 6 GB of virtual address space"
Any process doing that at once deserves to be told to GTFO of my RAM, because it is clearly broken. Usually, it would work, sadly, so I would not only be fine with it failing, but also with a clamp on single allocations that is less than my available memory. IMO, that is the correct behavior; not the OS going, "gee, I think I can commit all that if I need to, so here you go, you pig, you!"

Do you understand the phrase "for illustrative purposes"?

But fine, here's something else to chew on: http://www.streaminglearningcenter.com/articles/ram-requirements-for-adobe-cs55.html.

Granted, AfterFX is not something most users would run, but it does illustrate that 6GB is within the realm of possibility.

I'm impressed that it took this thread to delurk KernelMode after 13 years
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Do you understand the phrase "for illustrative purposes"?
Yes...
That has nothing to do with, well, anything here, as far as I can see. Where in that article did the program allocate more than the system's RAM, very quickly, causing page thrashing, which, even on an SSD, results in a forced shutdown after several minutes (compared to one or two processes crashing, and just moving on with things)?
Granted, AfterFX is not something most users would run, but it does illustrate that 6GB is within the realm of possibility.
Which was not the point, nor under dispute, but merely an example value. It could have been 3GB, or 2GB. Or, I could give a VM 6GB of RAM and thus quickly and easily make 6GB work as a non-hypothetical example of a quick allocation of that much memory (and, with 10GB available, it would work, too, and probably just page out old stuff).
 
Last edited:

zir_blazer

Golden Member
Jun 6, 2013
1,184
459
136
Missing the point. Until the workaround was discovered (or the patch to enable it was pushed out), anyone without a page file would have been unable to run the game. Disk space is cheaper than time (at least for me).
If you were that desperate to run that game, it takes less than two minutes, and not even a restart, to make Windows begin using the Pagefile again. It's pretty much a simple on-off option. Sure, I know that some people consider it "safer" and more "compatible" to have the Pagefile always enabled for that 1-in-1000 scenario, but if you know how to disable it, you also know how to re-enable it. So in such a scenario, you don't even need to wait for a workaround.
In my case, I would also drop the dev an e-mail telling him what I think about his ridiculous system checks and requesting a check-bypass option. It seems that several people did so with Dawn of War II, because if you google it you will find a ton of non-Pagefile users complaining about that stupid error message and refusing to enable the Pagefile again just to play it.

There is nothing worse than senseless legacy checks and the like that stop working properly as everything else evolves. Ever tried to run some old Win9x software whose 16-bit installer complains about "not enough Hard Disk space" even though you have tons of free Hard Disk space? Did people stay on Win 98SE or ME because of that? No, I had to learn to work around those errors on WXP. When the Athlon 64 X2 was first released and tons of games suffered spontaneous acceleration because of an async timer between Cores, requiring tons of OS Hotfixes and the AMD Optimizer, did that discourage people from purchasing an A64 X2 and make them use only Single Cores for games? Neither. They learned to work around their issues. Would I still use a Pagefile, a feature intended to work around RAM scarcity during the 90s and early 2000s, on a machine with more RAM than my commit usage will ever hit, and that as a con makes my machine feel slower after being idle for some time, just so that in the very rare scenario an application made by a dinosaur with legacy tendencies doesn't fail to start? Hell no.


So running VMs was your justification for purchasing 32GB of RAM. Yet those VMs leave enough "wasted" memory that you need to fill it up? OK, you're clearly a special snowflake whose usage doesn't reflect most users.
I'm a power user; I know my own needs, and I don't copy-paste things that I read in generic tweak guides onto my system. I prioritize RAM usage over HD usage, as I stated before. So far, so good.


Granted, AfterFX is not something most users would run, but it does illustrate that 6GB is within the realm of possibility.
Neither is Dawn of War II, for that matter.

6 GB of RAM is actually a conservative amount. Multiboxing MMORPG games may require far more than that. I'm ready for that, too.
 
Last edited:

code65536

Golden Member
Mar 7, 2006
1,006
0
76
thedosbox, your arguments are basically, "Bad things will happen if your memory commit exceeds your RAM and there is no page file."

Well, duh. That's the whole point of the page file.

Nobody is advocating that someone with 8GB of RAM facing 8GB of usage go ditch the page file.

But if someone has 16GB of RAM and typically sees only 8GB of usage, there's nothing wrong with adjusting the page file to suit that user's actual usage patterns instead of relying on the best guess of naive software that has no foreknowledge of the user's intended usage. Remember that the page file allocation heuristics err very heavily on the side of caution. If I have a better idea of what my usage is, why the hell shouldn't I take control? Especially when a 16GB page file on a 128GB SSD represents a needless waste of 1/8 of my storage (I've since upgraded that SSD, but you get my point).

Should everyone adjust or disable their page file? No. But if you're someone like me who runs Process Explorer 24/7 (because I always like to have a bird's-eye view of everything going on) and thus has years of data on exactly what my memory usage patterns look like, I'm going to trust my own judgment over that of a heuristic that was necessarily designed to accommodate as wide a range of cases as possible.

(And for the record, I advocate setting the page file to suit one's own usage patterns, which necessarily precludes the kinds of cases you warn about. But I do not advocate setting the page file to gain some illusory performance or as some sort of dubious SSD optimization as berryracer/G73S/ian82 likes to do.)
 
Last edited:

thedosbox

Senior member
Oct 16, 2009
961
0
0
Yes...
That has nothing to do with, well, anything here, as far as I can see. Where in that article did the program allocate more than the system's RAM

You made a comment doubting the possibility of a process using 6GB, and that such a program would be "broken". That is an example of a real program that requires that amount.

thedosbox, your arguments are basically, "Bad things will happen if your memory commit exceeds your RAM and there is no page file."

No, I'm arguing that time spent troubleshooting a program due to the lack of a pagefile could be better spent doing something else. Like arguing with dorks on AT
 

code65536

Golden Member
Mar 7, 2006
1,006
0
76
No, I'm arguing that time spent troubleshooting a program due to the lack of a pagefile could be better spent doing something else. Like arguing with dorks on AT

Nah, I know what my memory usage is like when I override the automatic page file. Over the years, I've spent a grand total of zero minutes troubleshooting page file problems. Which means that it is insufficient to cover the time I've spent on the AT forum, unfortunately.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
You made a comment doubting the possibility of a process using 6GB, and that such a program would be "broken". That is an example of a real program that requires that amount.
No, I did not. A process requesting 6GB address space (in some short time frame--6GB at once will almost always fail, except maybe right after bootup, or right after closing such a large process) says nothing about how much total address space it uses. I.e., it might be using 2GB already and wanting 8GB. Chances are better than not that, unless it's a VM you set to have that much or more RAM, either (A) you have a workstation set up with many times that, like 32, 48, or 64GB, just to handle such cases, or (B) something is very wrong. While it has sometimes been my own fault, I can think of no times where it hasn't been (B), for me, going as far back as 192MB of RAM on NT 4.
 

KernelMode

Junior Member
Jun 4, 2001
7
0
0
No, I did not. A process requesting 6GB address space (in some short time frame--6GB at once will almost always fail, except maybe right after bootup, or right after closing such a large process)
Why do you think that? VirtualAlloc (which is what we're talking about) requires nothing except enough contiguous free v.a.s. in the process + enough remaining commit limit. Funny thing though: it doesn't actually *use* either RAM or pagefile space, until the app actually touches that space. And then it's just one page at a time.

Let's say I VirtualAlloc 100 MB (to use a more sane figure) - this allocates exactly zero RAM. RAM doesn't get involved until page faults happen.

Nor is contiguous RAM or pagefile space ever required for process v.a.s. requests. Part of the whole point of virtual memory is that contiguous v.a.s. can be realized in wildly discontiguous RAM.
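
To make the 100 MB example concrete, here is a minimal sketch of my own: the VirtualAlloc call charges commit up front, but physical pages arrive only as each page is first touched:

Code:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Commit 100 MB in one call. This charges 100 MB against the system
       commit limit immediately, but assigns no physical pages yet. */
    const SIZE_T size = 100u * 1024 * 1024;
    unsigned char *p = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                                    PAGE_READWRITE);
    if (!p) {
        printf("VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }
    printf("100 MB committed; no RAM is used until the pages are touched.\n");

    /* The first touch of each page triggers a page fault that brings in a
       zero-filled physical page -- one page at a time, on demand. */
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    for (SIZE_T i = 0; i < size; i += si.dwPageSize)
        p[i] = 1;
    printf("All pages touched; the region is now backed by RAM.\n");

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}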

Each new process gets 128 TiB of v.a.s. under 8.1 x64, so the first requirement is pretty easy to meet, no?

Try it with MarkR's testlimit tool.

If you were requesting any significant amount of contiguous RAM, then yes, that's tough to do except right after boot-up - and can only be done from kernel mode anyway.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Because Windows will not truly overcommit memory. Without that extra commit available, the allocation will fail (hence a page file that the OS can increase in size on demand), and after a PC runs for a while, you'll not get back down to very low used RAM without a reboot. IOW, I think we're talking past each other.

My point is this: if a process requests a crapton of address space, it must expect that it might use that much address space, so Windows must either assure that it can, or not. If a process that is not explicitly expected to use a ton of actual memory is requesting a ton of address space, something is generally wrong with what's going on. Allocating doesn't mean it will use it, but that it could--in particular, if it is acting correctly, the developers did not consider that amount of actual RAM to be unreasonable (so, if it is correct program behavior, I need to have that much actual RAM available, plus more for headroom). I have very few tools at my disposal to change the behavior of the OS, should that "could" turn into "is doing right now." One of those tools is to limit the committable memory. Doing so ensures what I consider fairly good behavior in such a case. If you consider correct behavior to be to allow it to allocate what it wants, then that's OK, too, but you then might have to deal with the resulting behavior, should it try to use most or all of that space. You could have 500MB of available RAM, but a 20GB PF, and have that succeed, then, say, 800MB later, be stuck with an unusable PC. Or, with it clamped, using no or a fixed small PF, just have it fail, even though you might have enough physical RAM for the program's needs (but, if so, it shouldn't be asking for so much at once).

Since unexpected but correct large commit guarantees are pretty rare outside specialized applications (mainly VMs and DBMSes), yet 64-bit process memory leaks and other runaways are not nearly as rare (by orders of magnitude, if you use the latest versions of FOSS software a lot), I see it as an obvious choice to make. The objective parts are the same either way you go, though.
 
Last edited:

kevinsbane

Senior member
Jun 16, 2010
694
0
71
Considering SSDs are generally large enough, and hardy enough to withstand TiBs of writes anyway...

What's the benefit nowadays of disabling the pagefile?
 