Blanket statements like these are never a good idea.
+1
Read and take in all of this before you disable the PF:
http://blogs.technet.com/b/markrussinovich/archive/2009/03/26/3211216.aspx
As with many such features, the dogma on both sides is wrong. Today, a good CPU costs more than maxing out most motherboards (32GB of RAM). Right up until Christmas of 2007, that was not the case: from 2006 to 2008, RAM prices dropped at least 90% (maybe more, but I specifically recall paying about $100/GB in 2006).
While many people have gotten along just fine without a page file, it is still recommended to have something available for legacy applications, and just for overall system stability, in case anything wants to use or expects the presence of such a file. In the end, the small amount of space taken up is inconsequential unless you are on a really tiny SSD.
And that is largely preference. I prefer to keep as little broken software on my computers as possible, so I'd call that a feature: it warns me to uninstall the offender and never use it (I have yet to encounter any such software, though). No software that expects a page file* is well written; worse still is any software that tries to access it or manage its contents.
"You['re] slowing down your system by making a swap file."
I don't see how this can possibly be true. Yes, I know what a page file is for, and as you yourself stated, it is for when the system runs out of RAM. How, then, does your system slow down if you don't actually run out of RAM? Furthermore, in the event that your system DOES run out of RAM, how would not having a pagefile help (BSOD?), and how would having one on a traditional HDD make things faster (it won't)? I already have my page file set to a static size that covers my needs.
1. When was the last time a crash dump was useful? *crickets* It's really not something you use. It's something Microsoft and vendors might occasionally use, when tracking down crashes that occur frequently for users in the wild, but not on their internal computers.
2. Windows pushes out idle pages, leading to poor performance after an application has sat unused for a while. In Linux, for instance, this behavior is tunable (vm.swappiness), so you can have the best of both worlds. In Windows, there is no such option: leave the browser alone for hours and, even with plenty of free RAM, it will be sluggish for minutes when you come back, due to random drive accesses for every action you take.
3. BSOD? See #1. You won't BSOD from running out of RAM; you just won't get a crash dump. I still have yet to see a non-overclocked Windows 7 system BSOD from anything but overheating or bad hardware, both of which tend to be easy to track down by means other than looking at a crash dump. I wouldn't even know how to go about utilizing a crash dump.
4. Compared to that much more RAM, a static-sized page file's only advantage is the crash dump. (Granted, if you've already maxed out your RAM but still need to commit more, it's the only option left.) But a system with 8GB RAM and an 8GB PF is going to kill the application at around the same committed memory as one with 16GB RAM and no PF.
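The arithmetic behind point 4 can be sketched with a toy model (my simplification, not from the thread: it assumes commit limit = physical RAM + page file and ignores what the OS itself reserves):

```python
# Toy model of the Windows commit limit: an allocation is admitted only
# if total committed memory stays at or under RAM + page file.
# Sizes in GB. (Simplified; real systems reserve memory for the OS and
# can grow an auto-managed page file.)

def can_commit(requested_gb: float, ram_gb: float, pagefile_gb: float,
               already_committed_gb: float = 0.0) -> bool:
    commit_limit = ram_gb + pagefile_gb
    return already_committed_gb + requested_gb <= commit_limit

# 8 GB RAM + 8 GB page file and 16 GB RAM + no page file
# refuse an over-limit commit at the same point:
print(can_commit(17, ram_gb=8, pagefile_gb=8))    # False
print(can_commit(17, ram_gb=16, pagefile_gb=0))   # False
print(can_commit(15, ram_gb=8, pagefile_gb=8))    # True
```

Both configurations hit the wall at a 16GB commit charge, which is the point being made: past a certain amount of RAM, the page file buys you headroom you could have bought as RAM instead.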
You are also making large assumptions about my usage scenarios.
I process and edit large RAW files from my cameras in Lightroom and Photoshop on a regular basis, along with many other tasks. While I have made sure to have a good amount of RAM available (16GB), there may be cases where I run out, especially when working on a RAW file with many layers. In extreme cases, one RAW with 7-8 layers can easily consume 6GB of RAM.
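Those numbers pass a back-of-the-envelope check. The figures below are my assumptions, not from the post (a ~36 MP sensor, 16-bit channels, 4 channels in memory); real editors add undo history, caches, and previews on top:

```python
# Rough estimate of RAM needed for the pixel data of a layered image.
# Assumed (hypothetical) parameters: 36 MP frame, 4 channels (RGBA),
# 2 bytes per channel (16-bit). Editor overhead is not modeled.

def layered_image_gb(megapixels: float, layers: int,
                     channels: int = 4, bytes_per_channel: int = 2) -> float:
    bytes_per_layer = megapixels * 1e6 * channels * bytes_per_channel
    return bytes_per_layer * layers / 1e9

# 8 layers of a 36 MP image: ~2.3 GB of raw pixel data alone, so
# several GB of total usage with editor overhead is plausible.
print(round(layered_image_gb(36, 8), 1))  # 2.3
```

Under those assumptions, pixel data alone is a third or more of the quoted 6GB, so the scenario is entirely believable.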
And if it crashes from running out of RAM, you might very well have lost money, am I right? That's a clear case for a big, fast SSD and managed auto-expanding swap space, on any reasonable system. If the PF gets big in that scenario, the best solution is to throw hardware at it--in this case, a bigger SSD, after you've maxed out your RAM.
*
I don't mean RAM-hogging software warning you for CYA purposes, which, IIRC, some Adobe software has done. I mean software that can screw up if specific paging behaviors do not occur, or which attempts to manage data in a page file.