OK. Medical CT scans are saved as images in a format called DICOM.
The raw data is usually proprietary and can only be processed by software on the scanner console, or by a workstation with the scanner's reconstruction engine software on it. This type of data wouldn't be archived or transmitted anywhere, as it's not useful on its own. It might stay on the scanner's hard drive for a while, just in case the doc wants different image generation parameters tried.
DICOM is an open standard, but it's a bit obscure as it's only used in medical work. The image data is almost always saved as uncompressed pixel data (16-bit greyscale), but may be compressed with JPEG 2000 or, in older files, lossless JPEG. There is some open source software for viewing it - by far the best OSS is OsiriX for Mac OS X and iOS, and there really isn't anything remotely comparable for Windows or Linux. That said, a doc would be using a commercial package (likely costing $2-5k per user per year), and these are typically very full-featured.
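One concrete bit of the standard that's easy to show: a DICOM file starts with a 128-byte preamble followed by the 4-byte magic "DICM", so you can recognise one without any special library. A minimal sketch (the function name is my own, not from any library):

```python
# Sketch: identify a DICOM file by its magic bytes.
# A DICOM file begins with a 128-byte preamble, then the
# 4 ASCII bytes "DICM"; the actual data elements follow.

def looks_like_dicom(data: bytes) -> bool:
    """Check the first 132 bytes of a file for the DICM magic."""
    return len(data) >= 132 and data[128:132] == b"DICM"

# Usage:
#   with open("scan.dcm", "rb") as f:
#       print(looks_like_dicom(f.read(132)))
```

For actually parsing the header and pixel data you'd reach for a proper library rather than rolling your own.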
CT scans are almost always acquired at 512x512 pixels per image, giving an image file size of 528 KB (512 KB of pixel data + a ~16 KB header).
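That 528 KB figure falls straight out of the pixel geometry - a quick sanity check:

```python
# Per-image size: 512 x 512 pixels of 16-bit (2-byte) greyscale,
# plus roughly 16 KB of DICOM header (header size varies in practice).
pixel_bytes = 512 * 512 * 2        # 524,288 bytes = 512 KB
header_bytes = 16 * 1024           # ~16 KB, an approximation
total_kb = (pixel_bytes + header_bytes) // 1024
print(total_kb)  # 528
```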
How many images are in a scan varies with the type of scan and the scan tech's/doc's preference. Modern scanners have a voxel resolution of better than 0.5 x 0.5 x 0.5 mm, and it's increasingly common practice to produce an image for each 0.5 mm (or less) in the "z" direction.
This type of data set is almost like the scanner raw data: with the right software (and generic software, this time) you can combine images to simulate thicker "slices", or do other kinds of processing - change the plane of the slice, do volume rendering, etc. In general, you need some sort of GPU to do this on the fly, but it doesn't need to be that beefy.
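At its simplest, reslicing is just array indexing: stack the axial images into a 3D array and cut it along a different axis. A toy sketch with NumPy (the shapes and index values are illustrative assumptions, not from any real scan):

```python
import numpy as np

# Toy volume: 300 axial slices of 512x512 16-bit greyscale,
# indexed as (z, y, x). Real data would come from DICOM pixel data.
volume = np.zeros((300, 512, 512), dtype=np.int16)

axial    = volume[150, :, :]   # one original slice, shape (512, 512)
coronal  = volume[:, 256, :]   # cut along y, shape (300, 512)
sagittal = volume[:, :, 256]   # cut along x, shape (300, 512)
```

If the voxels aren't cubic, the resliced images also need rescaling in the z direction to keep correct proportions; oblique planes and volume rendering need interpolation on top of this, which is where the GPU earns its keep.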
It's important to distinguish between "basic" software, which just loads one image at a time, shows it on screen, and lets you "scroll" between images with the mouse; and "advanced" software, which can reslice or 3D render (or do other types of processing, e.g. average multiple slices together to improve the signal-to-noise ratio, at the cost of resolution, etc.).
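The slice-averaging trick mentioned there is just a mean over groups of adjacent thin slices - trading z-resolution for SNR. A sketch (my own helper, assuming the slice count divides evenly by the grouping factor):

```python
import numpy as np

def average_slices(volume: np.ndarray, factor: int) -> np.ndarray:
    """Average groups of `factor` adjacent slices along z,
    e.g. ten 0.5 mm thin slices -> one simulated 5 mm thick slice.
    Assumes the slice count is a multiple of `factor`."""
    z, y, x = volume.shape
    return volume.reshape(z // factor, factor, y, x).mean(axis=1)

thin = np.ones((300, 64, 64), dtype=np.float32)  # toy thin-slice stack
thick = average_slices(thin, 10)                 # shape (30, 64, 64)
```

Uncorrelated noise falls roughly as the square root of the number of slices averaged, which is why thick slices look so much cleaner.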
A few years ago, I knocked up a program in C# over a weekend to do this sort of advanced processing, and it ran fine on a 5750 - the volume rendering was a bit slow (maybe 5 fps with quadrilinear filtering), but not terrible.
Screenshot here. In the screenshot, the actual image as saved is shown at top right; the other images are generated by "stacking" the slice images and "re-slicing" in software.
For something like a knee CT, where the doc is interested in fine bone detail, it would be normal to have 300 to 400 0.5 mm slice images, giving a total data size of about 200 MB. For convenience, in case the doc doesn't have advanced software, it's common for the scanner to be set up to automatically "reslice" the images in pre-defined planes and save those images as well.
For something like a brain CT, where fine detail is not needed but a high signal-to-noise ratio is, you might only have 30 images saved, because they are "thick" 5 mm slices. However, the doc might also be interested in a skull fracture, so they might need the "thin" 0.5 mm images as well.
The latest technology is "dynamic" or 4D CT, where entire CT volume scans are repeated at 1-2 fps. So imagine a CT brain artery study (where fine detail is needed): 640 images at 512x512 for 30 frames. You're in serious trouble trying to do anything with that - it's about 10 GB of data.
In terms of hardware, if you're not wanting to do 3D reslicing on whole-body 0.5 mm CT scans, then you don't really need much hardware.
I've set up a medical teleradiology system (used by radiologists for real medical diagnosis). The hardware is an Ivy Bridge Core i3, 4 GB RAM, Win 7, and 2x Dell U2713H monitors with a Spyder calibrator. The Ivy Bridge GPU is plenty fast enough for real-time reslicing and 3D rendering.