Well, here are the numbers from testing, averaged over 3 tests each of flying to the Wetlands and then hearthstoning back to SW. The data was taken from a 10 second snapshot after they all transported:
Single install: Max current disk queue length of roughly 43
Single disk using junction points: Max current disk queue length of roughly 43
Five installs, after defrags: Max current disk queue length of roughly 273
Average disk queue lengths showed the same sort of result, since sitting in the Wetlands gives me a disk queue length of 0:
Single install: Average disk queue length of roughly 6 at the end of 10 seconds
Single disk using junction points: Average disk queue length of roughly 6 at the end of 10 seconds
Five installs, after defrags: Average disk queue length of roughly 83 at the end of 10 seconds
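For anyone who wants to pull the same counters on their own box, something along these lines should do it: a short Python script that shells out to Windows' built-in typeperf tool. The counter paths and the 10 x 1-second window match the snapshot described above; the function name and the parsing are just for illustration, not a tool I actually ran.

```python
# Rough sketch: sample the PerfMon disk queue counters via Windows' typeperf.
# Counter paths are the standard PhysicalDisk(_Total) counters; 10 one-second
# samples roughly match the snapshot window used for the numbers above.
import csv
import subprocess

COUNTERS = [
    r"\PhysicalDisk(_Total)\Current Disk Queue Length",
    r"\PhysicalDisk(_Total)\Avg. Disk Queue Length",
]

def sample_disk_queue(samples=10, interval=1):
    """Return a list of (current, average) disk queue length readings."""
    out = subprocess.run(
        ["typeperf", *COUNTERS, "-si", str(interval), "-sc", str(samples)],
        capture_output=True, text=True, check=True,
    ).stdout
    readings = []
    for row in csv.reader(out.splitlines()):
        if len(row) < 3:
            continue  # skip blank lines and typeperf status messages
        try:
            readings.append((float(row[1]), float(row[2])))
        except ValueError:
            continue  # skip the CSV header row
    return readings

if __name__ == "__main__":
    data = sample_disk_queue()
    print("max current disk queue length:", max(cur for cur, _ in data))
    print("avg disk queue length at end: ", data[-1][1])
```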
For the non-techie folks, a current disk queue length of more than 8 per disk drive in your computer is unacceptable for high access applications such as databases. An average disk queue length of over 8 per disk over time is the same story: not good for high performance. We're not exactly running a high performance application here, but anything above that baseline tells you that your system is waiting for your hard drives to catch up.
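To make that rule of thumb concrete, here's a trivial check against the 8-per-disk baseline. The spindle count is whatever your own machine has; the example numbers are the averages from the tests above (one data disk).

```python
# Trivial check against the 8-outstanding-requests-per-disk rule of thumb.
def queue_length_ok(queue_length, spindles=1, per_disk_limit=8):
    """True if the queue length stays at or under the baseline for this many disks."""
    return queue_length <= per_disk_limit * spindles

print(queue_length_ok(6))   # True  -- single install / junction points average
print(queue_length_ok(83))  # False -- five separate installs average
```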
Now, for actual application of the numbers, it took less than 3 extra seconds for everything to show up and render on my system. Are three extra seconds going to matter for PVE? Probably not. For PVP arena teams? Probably not. Normal in-game performance worked just fine for all of them.
Read operations made up roughly 98% of the disk I/O that I tracked over the course of all this. Read and write disk I/O came in short bursts, and considering that my page file was not being used for anything, this is mostly straight WoW disk usage (with a little system and background activity mixed in). The only time there were any big or noticeable writes to disk was when shutting down clients. Addons will affect this number; the only thing I currently have installed is x-perl. Running auctioneer or something else that logs data like that may significantly change that percentage.
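If you want to see the read/write split on your own setup, here's a rough sketch using the third-party psutil package: snapshot the cumulative disk I/O counters before and after a session and compare the deltas. The 60-second window and the function name are just placeholders for the example.

```python
# Rough sketch: measure what fraction of disk operations are reads over a
# window, using psutil's cumulative I/O counters (pip install psutil).
import time
import psutil

def read_percentage(duration_s=60):
    """Percentage of disk operations that were reads during the window."""
    before = psutil.disk_io_counters()
    time.sleep(duration_s)  # go fly to the Wetlands and hearth back
    after = psutil.disk_io_counters()
    reads = after.read_count - before.read_count
    writes = after.write_count - before.write_count
    total = reads + writes
    return 100.0 * reads / total if total else 0.0

if __name__ == "__main__":
    print(f"reads were {read_percentage():.1f}% of disk operations")
```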
For all intents and purposes, a single install and junction points were the same. Multiple installs performed worse, but that is relative, since the delay was minimal. You would probably find a bigger bottleneck in RAM or CPU on the average system. All in all, a wash in my book.
Space-wise, a single install and junction points were exactly the same size. Multiple installs were 400% larger than the single install or junction points. Personally, I hate having redundant data on my systems.
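For anyone curious what a junction point setup might look like, here's an illustrative sketch using the cmd built-in mklink /J on Windows: one real install, and a handful of junctioned folders pointing at it, so the game data only lives on disk once. The paths and folder layout are made up for the example and won't match your setup.

```python
# Illustrative only: share one real WoW install through NTFS junction points
# so extra "install" folders add no extra data on disk. Paths are made up.
import os
import subprocess

REAL_INSTALL = r"C:\Games\World of Warcraft"         # the one real copy
CLONES = [rf"C:\Games\WoW{i}" for i in range(2, 6)]  # junctioned "installs"

def make_junction(link, target):
    """Create an NTFS junction so `link` points at `target` (mklink is a cmd built-in)."""
    subprocess.run(["cmd", "/c", "mklink", "/J", link, target], check=True)

def folder_size(path):
    """Total size in bytes of the files under `path`."""
    return sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(path)
        for name in files
    )

if __name__ == "__main__":
    for clone in CLONES:
        if not os.path.exists(clone):
            make_junction(clone, REAL_INSTALL)
    # Only the real install holds data; the junctions cost essentially nothing.
    print(f"{folder_size(REAL_INSTALL) / 2**30:.1f} GiB on disk for all installs")
```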
Disk performance of a RAID1 would be better than RAID0 for read rates, but the overall performance of a RAID1 measured against a single disk drive is the same or worse in some areas, such as write rates, unless you are using a hardware RAID controller running in a store-and-forward rather than a pass-through configuration. All in all, I'd have to call that a wash.
Sorry it took longer than I thought to get all this together; Sunday was a bit of a busy day for work, and I had to finish up the testing on the multiple installs on Monday night.