Correct me if I'm wrong, but disabling the SuperFetch service does not mean the operating system stops prefetching from SSD drives altogether.
As far as I know, SuperFetch tries to predict and preload specific files/applications before you use them, based on your past usage habits.
Prefetching is something more generic: for instance, if an application reads the first 512 bytes of a file, the OS may prefetch 128 KB or so, predicting a sequential access pattern and trying to reduce the total number of I/O requests and the overall I/O response time. This is especially effective when accessing large files, like a movie file. A rough sketch of this kind of hinting is shown below.
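To make that concrete, here is a minimal sketch of how an application can ask the Linux kernel to read ahead more aggressively. The file path and chunk size are just placeholders for illustration; the point is only that small application reads can still be served from DRAM because the kernel prefetched a much larger region behind the scenes.

```python
import os

# Hypothetical path to a large, sequentially-read file (e.g. a movie).
path = "/var/media/movie.mkv"

fd = os.open(path, os.O_RDONLY)
try:
    # Hint that we intend to read the whole file sequentially, so the
    # kernel can prefetch (read ahead) more than its default window.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)

    # Issue small reads; the kernel's read-ahead fills the page cache
    # in larger chunks, so most of these reads hit DRAM, not the SSD.
    while True:
        chunk = os.read(fd, 512)
        if not chunk:
            break
finally:
    os.close(fd)
```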
As far as DRAM caching is concerned, I measured a throughput of 9 GB/s in a Linux VM when accessing a DRAM-cached 1 GB file. This means that in Linux, DRAM caching is at least an order of magnitude faster than an SSD drive, and that is not even native execution!
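This is not my original benchmark, but a rough sketch of how you could reproduce that kind of number yourself on Linux: read a ~1 GB file once to warm the page cache, then time a second pass, which is served almost entirely from DRAM. The file path and chunk size are arbitrary placeholders.

```python
import os
import time

# Hypothetical ~1 GB test file; create it however you like beforehand.
path = "/tmp/testfile_1gb.bin"
size = os.path.getsize(path)
chunk = 1 << 20  # 1 MiB per read call

def read_whole_file():
    # buffering=0 avoids Python's own buffering; reads go straight to the OS.
    with open(path, "rb", buffering=0) as f:
        while f.read(chunk):
            pass

# First pass: populates the page cache from the SSD.
read_whole_file()

# Second pass: should come almost entirely from the DRAM page cache.
start = time.perf_counter()
read_whole_file()
elapsed = time.perf_counter() - start

print(f"cached read throughput: {size / elapsed / 1e9:.2f} GB/s")
```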
Now, as for Windows DRAM caching performance (NTFS), I did the following. I accessed the same file in the Linux VM (VirtualBox), but forced raw device access inside Linux, while the VM image was configured in VirtualBox to use host I/O caching. This resulted in 1.7 GB/s throughput. In other words: a guest Linux application accessing a virtual disk directly, going through disk emulation, and finally ending up as host NTFS file requests. Even this VERY heavy I/O path, which ends in DRAM caching on the host, surpasses the speed of SSDs.
My conclusion is that DRAM caching is not going anywhere in the near future; SSDs are not that fast yet. This is one way that 32 GB of RAM can help improve I/O performance. DRAM caching is usually about twice as fast as a RAM disk, because a RAM disk performs an extra memory copy to emulate a real block device. The caveat is that this applies to read-only workloads: if you are modifying files, at some point they will be flushed to the backing storage. This is where RAM disks excel; they postpone writes until system shutdown and do not involve any real disk I/O whatsoever.