Ok, I read a post in this forum by CountZero on the difference between GPUs and CPUs, and it got me thinking. According to Moore's law and the like, circuits keep getting smaller, so in a few years we'll be able to snap huge amounts of graphics processing power into our AGP slots. Here's my idea, though: how feasible would it be to drop in just an interface card instead of a full graphics card, and run something like a fiber optic cable to a separate box containing the actual GPU and memory? You'd no longer be constrained by space and could put, say, a gig of video memory and multiple GPUs on one board.

Granted, this wouldn't be cost-efficient for the home user, but what about some sort of client-server setup for professional graphics? You could even link multiple copies of these graphics boxes into a scalable graphics architecture, the way a Beowulf cluster does for raw CPU power. I'm pretty sure a single optic cable's bandwidth wouldn't be up to the task, but I suspect you could split the signal across several cables (quick numbers below). Has anyone heard of something like this? Thoughts? Comments?
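For a rough sense of the bandwidth gap, here's a back-of-envelope sketch. The figures are my own assumptions: AGP 8x at roughly 2.1 GB/s peak on the host side, and 10 Gbit/s per optical link, which is about what high-end fiber gear does per wavelength.

```python
# Back-of-envelope: how many fiber links to match an AGP 8x slot?
# All numbers below are assumptions, not measured figures.
agp_8x_bytes_per_sec = 2.1e9      # ~2.1 GB/s peak for AGP 8x (assumed)
fiber_bits_per_sec = 10e9         # 10 Gbit/s per optical link (assumed)

fiber_bytes_per_sec = fiber_bits_per_sec / 8   # -> 1.25 GB/s per link

links_needed = agp_8x_bytes_per_sec / fiber_bytes_per_sec
print(f"Links needed to match AGP 8x: {links_needed:.1f}")   # ~1.7, so 2 links
```

So if my numbers are in the right ballpark, a couple of links split across cables would already match the slot bandwidth on paper, which is roughly what I was guessing when I said splitting the signal should be possible.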