I don't see the point of using an SSD for boot. I don't care how long it takes for the server to boot since it is on 24/7. VMs, on the other hand, benefit greatly.
I keep my file server on 24/7 so I don't really care about boot time. However, I do know that it makes access to shared resources a lot more responsive. I was blown away when I moved from an HDD to an SSD. I'm not talking about responsiveness while using the file server itself. I'm talking about the responsiveness of other computers/devices connected to the file server. To me it was a big deal and I tell everyone about my experience. It was like having the drives locally, if that makes any sense.
Mind you, yours is a homebuilt while this is a full-blown off-lease 2U server.
I think a lot of the difference is the parts, not who puts it together. In my Xeon fantasy I would still build my own. I like doing these things myself. Now, at work it's a lot easier to just plug and play, not to mention the support.
I have zero interest in building machines these days. No time.
Err, I just picked up six 8 GB DDR3 RDIMMs for US$84...
Brand new from Amazon Marketplace.

Well, I suppose you can get lucky on deals if you happen to find a seller selling RAM for that specific model of server, but when building a brand new machine I tend to avoid doing the eBay thing of buying used stuff.
As for SSDs, I use only SSD for boot now. I figure the odds of random failure are much lower with an SSD, so it saves me from having to mess with hardware RAID. I use spinners for mass storage since they're way cheaper and don't wear out based on write volume the way flash does, and then I do software RAID on those.
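For anyone curious what the software RAID part looks like in practice, here's a rough sketch using Linux `mdadm` to mirror two spinners. The device names (`/dev/sdb`, `/dev/sdc`) and mount point are placeholders; adjust for your own box, and obviously don't run this against disks with data on them.

```shell
# Hypothetical devices -- substitute your actual spinners.
# Build a two-disk RAID1 mirror for mass storage:
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it:
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/storage
sudo mount /dev/md0 /mnt/storage

# Record the array so it reassembles automatically on boot
# (path is /etc/mdadm.conf on some distros):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

RAID1 costs you half the raw capacity, but for a file server it keeps the setup simple and either disk alone holds a complete copy of the data.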
There's some convoluted way you can do software RAID for the boot drive, but it seems like voodoo magic to me. How do you initialize the OS and the RAID software on a RAID array that requires the RAID software to be initialized? It's a chicken-and-egg game. It's doable, I just don't quite get how it's done; it's easier to just put in an SSD. When the server is set up the way I need, I then take an Acronis image.
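For what it's worth, the trick that resolves the chicken-and-egg is that with RAID1 each member disk holds a complete, ordinary copy of the data, so the firmware and bootloader can read a kernel straight off either disk without knowing about RAID at all; the initramfs then assembles the array before mounting root. A rough sketch of how that's usually wired up with `mdadm` and GRUB on Linux (partition names are placeholders, not a recipe to paste in):

```shell
# Hypothetical layout: /dev/sda1 and /dev/sdb1 mirrored as the root array.
# Use metadata format 1.0 so the RAID superblock sits at the END of each
# member -- the start of the partition then looks like a plain filesystem
# to anything that doesn't understand RAID:
sudo mdadm --create /dev/md0 --level=1 --metadata=1.0 \
    --raid-devices=2 /dev/sda1 /dev/sdb1

# Install the bootloader on BOTH disks, so the machine still boots
# off the surviving disk if one mirror member dies:
sudo grub-install /dev/sda
sudo grub-install /dev/sdb
```

So there's no magic: the bootloader never "initializes" the RAID, it just reads one mirror half as if it were a normal disk. That said, a plain SSD boot drive plus an Acronis image is certainly the simpler path.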