mnewsham
And a Sharpie! :awe:
Please don't tell me that's network cabling on top...
Skip the "thin client" part anyway. A thin client does all its processing on the host/server and doesn't really do any itself.Would using a thin client network boot via PXE be a practical alternative to giving each CPU a HDD?
Skip the "thin client" part anyway. A thin client does all its processing on the host/server and doesn't really do any itself.
:sneaky: sure...ah...it's not network cable...mostly :sneaky:
At least it wasn't as bad as this: (not mine)
Thanks for clarifying. OK then, just a network boot. Would that be practical?
Nope. Not for the applications we are talking about; bandwidth contention is problematic because you need your network fabric to remain an available resource for scaling purposes.
Also think of the advantages of SSDs over HDDs. Going from a local HDD to a remote HDD accessed over the network would be a step in the wrong direction.
The improvement to be had would be in going from HDD to SSD. The programs create scratch files that are tens, sometimes hundreds, of gigabytes in size.
Imagine trying to run Photoshop on your personal computer but having the scratch file located on a network drive. That would be "counterproductive" to say the least. Now have 6 or more computers (and 4 parallel instances of Photoshop on each computer) trying to access the same remote disk array simultaneously...are they each going to get 200MB/s of throughput?
And even if they did, what bandwidth would be left for actual interprocessor communications?
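To put rough numbers on that, here's a minimal back-of-the-envelope sketch (the gigabit link, node count, instance count, and HDD throughput are all assumed figures for illustration, not measurements):

```python
# Back-of-the-envelope estimate of per-instance scratch bandwidth when every
# node hits a single remote disk array over one shared link.
# All figures below are illustrative assumptions, not benchmarks.

LINK_GBPS = 1.0          # assumed shared gigabit uplink to the storage array
NODES = 6                # nodes in the cluster (per the example above)
INSTANCES_PER_NODE = 4   # parallel app instances per node
LOCAL_HDD_MBPS = 100.0   # rough sequential throughput of a cheap local HDD

link_mbps = LINK_GBPS * 1000 / 8          # ~125 MB/s of payload, ignoring protocol overhead
streams = NODES * INSTANCES_PER_NODE      # 24 concurrent scratch-file streams
per_stream = link_mbps / streams          # fair-share bandwidth per instance

print(f"Shared link budget   : {link_mbps:.0f} MB/s")
print(f"Concurrent streams   : {streams}")
print(f"Per-instance (remote): {per_stream:.1f} MB/s")
print(f"Per-instance (local) : {LOCAL_HDD_MBPS / INSTANCES_PER_NODE:.0f} MB/s "
      f"(one cheap HDD shared by {INSTANCES_PER_NODE} instances on the same node)")
```

Even with those generous assumptions each instance gets only a handful of MB/s from the remote array, and that is before any inter-node communication has to compete for the same link.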
Turns out for my apps even the cheapest, lowliest $40 dedicated hard drive is a superior price/performance solution compared to any kind of remote storage alternative.
When I built the P4 cluster at TI, the local IT group fought with me over local storage. They wanted diskless nodes (they were responsible for maintaining the system), so I had them work with Dell to run my apps on a pre-configured cluster with and without local disk storage. The performance penalty, even when as optimized as the local IT and Dell server guys could get it, was staggering.
Nearly a 10x decrease in performance for the same server price (they doubled the network backbone, etc., spending as much on the sub-system as the HDDs themselves, trying to close the gap). It was a QED; I got my local drives on the cluster.
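If you want to reproduce that kind of comparison on your own hardware, a crude scratch-file timing test is enough to show the gap. This is only a rough sketch; the two paths and the 1GB file size are placeholders, so point them at a local disk and a network mount on your own setup:

```python
import os
import time

def time_scratch_io(path, size_mb=1024, chunk_mb=16):
    """Write and read back a throw-away scratch file, returning (write, read) MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    scratch = os.path.join(path, "scratch_test.bin")

    start = time.time()
    with open(scratch, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())          # make sure the data actually hit the disk/share
    write_mbps = size_mb / (time.time() - start)

    # Note: the read pass will be flattered by the OS page cache unless you drop
    # caches (or use a file much larger than RAM) between the two passes.
    start = time.time()
    with open(scratch, "rb") as f:
        while f.read(chunk_mb * 1024 * 1024):
            pass
    read_mbps = size_mb / (time.time() - start)

    os.remove(scratch)
    return write_mbps, read_mbps

# Placeholder paths -- substitute a local disk and a network share on your system.
for label, path in [("local disk", "/mnt/local"), ("network share", "/mnt/nfs")]:
    w, r = time_scratch_io(path)
    print(f"{label:14s} write {w:6.1f} MB/s   read {r:6.1f} MB/s")
```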
I'm sure there are some apps out there whose storage footprint fits within local RAM, so you can get away with using a ramdrive (pushing the drive image at startup) without suffering a performance penalty. But it is always a cost/performance tradeoff. Sure, I could build my nodes with enough RAM that a hard drive is irrelevant, but I'd spend so much on RAM that I'd be looking at reducing my overall node count from, say, 12 nodes to 6 nodes.
That tradeoff works in favor of price/performance for some apps, though not mine, at least not at current price points. Maybe in another 4-6 years the landscape will have changed.
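To sketch that tradeoff with rough numbers (every price below is an assumed placeholder, not a quote):

```python
# Node count under a fixed budget: cheap local scratch HDD per node versus
# enough extra RAM per node to hold the scratch data in a ramdrive.
# All prices are illustrative assumptions.

BUDGET = 4800      # total hardware budget, dollars (assumed)
BASE_NODE = 360    # CPU + mobo + baseline RAM + PSU + case per node (assumed)
HDD = 40           # cheap dedicated scratch drive per node
EXTRA_RAM = 400    # extra RAM per node to fit the scratch files in a ramdrive (assumed)

nodes_with_hdd = BUDGET // (BASE_NODE + HDD)
nodes_with_ramdrive = BUDGET // (BASE_NODE + EXTRA_RAM)

print(f"Nodes with a $40 scratch HDD : {nodes_with_hdd}")      # 12 nodes
print(f"Nodes with ramdrive-sized RAM: {nodes_with_ramdrive}")  # 6 nodes
```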
I just saw this picture of a Dell PowerEdge C5125 on JFAMD's blog, and I have to admit that I am impressed at how dense they are able to make that system.
I think that is 148 cores, if I am not mistaken. I wonder if that would work well for your application, IDC? (Although it may be well out of your price range as well.)
Thank you for this detailed explanation.
Would large amounts of RAM negate the need to use a HDD as scratch? (Which, as you said, introduces horrible delays over the network.)
About how many GB of RAM do you think would be needed for a render farm?
What OS was used to manage the cluster?
At least it wasn't as bad as this: (not mine)

Holy flying spaghetti monster!
The photo shoot of the blade layout is rather sexy.
From what I could find on the C5125, it is limited to nothing more powerful than a single X4 per blade (no Thubans or Magny-Cours), so you'd be looking at 48 cores for a 12-blade config.
I couldn't find anything on pricing, but I didn't look past the third page of search results in Google. It may be too early.
Unless you are really space-limited, it is a very hard sell to make the case for rack-mount anything (let alone high-density rack-mount) versus just cobbling together a cluster based on COTS (commodity off-the-shelf) parts.
With AMD's Thuban and X4 prices, and mobo prices being a notch lower than Intel's at most points on the performance curve, you can put together a rather ferocious computing farm for a couple thousand bucks.
If you've been building rigs since the dawn of the x86-based PC, it can feel absurd nowadays just how much compute power your measly $400 will buy you.
In fact, since one of my apps is a Windows-only app, by far the two biggest contributors to the expense of my clusters are the Windows OS (nearing $170-$180 per node if you want 64-bit + remote desktop capabilities) and the backup power supply (usually around $100/node after shipping is factored in).
But it's really no surprise. Just look at the profit margins of Microsoft and APC versus those of AMD, any mobo maker, any DRAM maker, any HDD maker, and any PSU maker.
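Just to illustrate how lopsided that gets per node, here's a trivial breakdown using the same rough figures (the ~$400 hardware number is the ballpark mentioned above; everything is approximate):

```python
# Per-node cost share of the 'non-compute' items (Windows license, UPS) versus
# the compute hardware itself. All figures are rough approximations.

costs = {
    "CPU + mobo + RAM + HDD + PSU": 400,  # ballpark from the ~$400 figure above
    "Windows 7 x64 license": 175,         # ~$170-$180 per node, as noted above
    "Backup power supply (UPS)": 100,     # ~$100 per node after shipping
}

total = sum(costs.values())
for item, cost in costs.items():
    print(f"{item:30s} ${cost:4d}  ({cost / total:5.1%} of node cost)")
print(f"{'Total per node':30s} ${total:4d}")
```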
I just impulse-bought a new computer for about $2,600, roughly 15 minutes ago.
No, no, no. Not $2600 - the hot new thing is the Core i7 2600K.
(Joke)
That's actually his cousin, the Ethernet cable monster.
I wonder how long it would take for them to fix that? :biggrin:
Even funnier: that is the processor I bought.
Although the monitor was the most expensive part.
EDIT: This is what I bought:
HAF-X Case
Seasonic X760 PSU
Intel i7-2600K CPU
ASROCK P67 Extreme4 MB
Crucial C300 128GB SSD
Western Digital Black 2TB HD
12X LG BluRay burner
HIS HD6970 Video Card
Corsair H60 CPU Heatsink
HP ZR24w 24" Monitor
SAITEK Cyborg V5 Keyboard
16GB (4x4GB) G.Skill 1600 DDR3 RAM (1.5V, CAS 9)
Windows 7 Home Premium (64bit)
Trendnet TEW-623PI Wireless N PCI adapter
Trendnet TEW-652BRP Wireless N Router
I am sure I screwed up in there somewhere, but hopefully it is a good total system.
Congrats on the new gear :thumbsup:
I feel the impulse too. That hardware looks solid to me! I don't see anything I would regret owning.
Mmmmmm C300....