Quote Originally Posted by Sam deathwalker
You have 36 lanes, each being 500M. The SSD drive takes up one lane. Let's say you use two video cards at 16 lanes each: how do you expect a single SSD drive taking up one lane to feed video cards that are capable of taking in 32 lanes of data? Yeah, the 24G of RAM can deliver 51 lanes' worth (each lane is 500M in PCIe 2.0), but the SSD drive can only deliver one lane... If you could get all of WoW into the 24G of RAM, maybe, but you can't now, since 4.0.1 is more than 20G or so, and each client is going to use about 1/2G, so with 25 clients you have 12G of RAM just for them.
Que? Why are you assuming that we're going to be maxing out the PCIe bus? First off, the SSD rofls has selected is PCIe x4, meaning four lanes, or 2 GB/s of bandwidth. I don't see WoW using up that much bandwidth, even with 25 clients running.
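Rough numbers to put that in perspective (the 500 MB/s per lane is the PCIe 2.0 spec figure; the per-client streaming rate is a made-up guess purely for illustration):

    # back-of-envelope PCIe 2.0 bandwidth check (illustrative figures, not measurements)
    pcie2_lane_mb_s = 500                 # PCIe 2.0: ~500 MB/s per lane, per direction
    ssd_bw_mb_s = 4 * pcie2_lane_mb_s     # x4 SSD -> 2000 MB/s, i.e. 2 GB/s
    clients = 25
    per_client_mb_s = 20                  # hypothetical asset-streaming rate per WoW client
    print(ssd_bw_mb_s, clients * per_client_mb_s)   # 2000 MB/s available vs ~500 MB/s demanded

Even with a deliberately generous per-client guess, the x4 SSD has bandwidth to spare.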

Alternatively, if lanes are that important to you, get a handful of lesser SSDs and RAID 0 them.
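Back-of-envelope for that option, assuming each cheaper SATA SSD sustains something like 250 MB/s sequential reads (a hypothetical figure) and that RAID 0 striping scales roughly linearly:

    # rough RAID 0 throughput sketch (assumed per-drive figure)
    drives = 4
    per_drive_mb_s = 250                  # hypothetical sustained read for a budget SATA SSD
    print(drives * per_drive_mb_s)        # ~1000 MB/s striped, before controller overhead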

Assuming the use of three multi-GPU cards, we get an (optimum) bandwidth setup of x16/x8/x8, so the first card gets 8 GB/s and the other two get 4 GB/s each. Again, I don't see that getting pushed to the limit. Even after the SSD card takes its four lanes, running every GPU at x8 for simplicity uses only 24 more, which still leaves eight of the 36 lanes open.
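The lane budget, spelled out using the 36-lane figure from the quote (illustrative arithmetic only):

    # PCIe lane budget on a 36-lane platform (figures from the quoted post)
    total_lanes = 36
    ssd_lanes = 4                         # the x4 PCIe SSD
    gpu_lanes = [8, 8, 8]                 # every card held to x8 for simplicity
    print(total_lanes - ssd_lanes - sum(gpu_lanes))   # 8 lanes left over
    # at full x16/x8/x8 the GPUs plus the x4 SSD use exactly 36 lanes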

Furthermore, regarding SLI/CF configurations: we WON'T be using that, herp derp. We'll simply be using the extra GPUs to render extra instances of WoW.

OK, for additional headroom, let's just up the ante and the cost by about 2.8k and switch to 48 GB of RAM, dual six-core processors, and an EVGA SR-2 motherboard.
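For what it's worth, the memory budget on that upgrade, using the ~0.5 GB per client figure from the quote and a made-up allowance for the OS:

    # RAM headroom estimate (per-client figure from the quoted post; OS overhead is a guess)
    ram_gb = 48
    clients = 25
    per_client_gb = 0.5                   # from the quoted post
    os_overhead_gb = 4                    # hypothetical allowance for OS and background tasks
    print(ram_gb - clients * per_client_gb - os_overhead_gb)   # ~31.5 GB of headroom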
I think that's a bit much, but...