Both projects are sharing the Master Science Database, currently Galileo, an E3500 with 6x 400 MHz CPUs using small, slow, nearly full non-RAIDed disks.
Both projects are apparently also sharing 3 servers for splitting: Galileo, Milkyway (a U60 with 2x 400 MHz CPUs) and Philmor (a U10 with a single 350 MHz CPU).
An HP SureStore tape autoloader is used to read up to 9 tapes at a time and save them as tape-images to disk. The splitting, at least for BOINC, apparently happens from these tape-images...
The tape-images should still be saved on the 3 TB NetApp filer, which also contains all "classic" wu waiting to be sent out.
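(So the data path, as far as I can tell, is: telescope tapes -> autoloader -> tape-images on the NetApp filer -> splitters on Galileo/Milkyway/Philmor -> wu ready to send.)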
"Classic" is running off 3 E450-servers, not sure if all has the max 4 cpu's or not.
One is the user-db server, one is the data server responsible for upload/download, and one is the online science db where all "classic" results are inserted and slowly validated before being moved to the Master Science Database. This last server will most likely still be tied up for some weeks/months after "classic" has ended...
"Classic" also uses 2 different web-servers, not sure on these...
BOINC is using one server, AFAIK unlisted, for upload/download; AFAIK it's still connected to a Snap Appliance 18000 SATA RAID disk system that all wu/results reside on. Not sure about the disk capacity here...
Kryten is the scheduling server.
Koloth runs the transitioner, validator and file_deleter.
Klaatu serves the web pages and runs a transitioner.
Kosh runs a validator and the replica database server.
All of these servers are Sun 220Rs with 2x 440 MHz CPUs and 2 GB RAM, except Koloth, which has 1 GB.
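For what it's worth, a BOINC project normally pins its back-end daemons to hosts in the project's config.xml, so a split like the one above could look roughly like this minimal sketch. This is only an illustration of the mechanism, not SETI's actual file; the validator command and its arguments are placeholders:

    <daemons>
      <!-- one transitioner instance per host; multiple instances are typically sharded by workunit id -->
      <daemon><cmd>transitioner</cmd><host>koloth</host></daemon>
      <daemon><cmd>transitioner</cmd><host>klaatu</host></daemon>
      <!-- validator binary and flags are project-specific; placeholder name here -->
      <daemon><cmd>validator -app setiathome</cmd><host>koloth</host></daemon>
      <daemon><cmd>validator -app setiathome</cmd><host>kosh</host></daemon>
      <daemon><cmd>file_deleter</cmd><host>koloth</host></daemon>
    </daemons>

The scheduler itself runs as a CGI program under the web server rather than as a daemon, which would be why Kryten is listed separately.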
Last but not least, a Sun V40z acting as the BOINC database server. This has 2x 1.8 GHz Opteron 844 CPUs (max 4) and 8 GB memory (max 32 GB), and is connected to a Sun StorEdge 3510 fibre-channel RAID array.
Not exactly sure about production rates, but according to the latest "classic" stats there are 1.1-1.2 M results/day. SETI@home/BOINC doesn't post this info, but on 16 February they were reportedly sending out 250k "results"/day.
But they're currently keeping a queue of 500k "results" ready to send out, and judging by how long a result takes from split & transitioned to issued by the scheduling server, it looks like around 320k "results"/day now...
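(For what that estimate is worth, it's just Little's law: throughput ≈ queue size / time in queue, so with a steady ~500k queue and an observed lag of roughly 1.5 days from transitioning to issue, you get 500,000 / ~1.56 ≈ 320,000 "results"/day. The ~1.5-day lag is the figure implied by the 500k and 320k numbers.)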
How many results are actually crunched and returned, on the other hand, is unknown...