Boxing off NAS
MaltheMM
12-27-2008, 10:45 AM
What are your thoughts on boxing off Network Attached Storage?
wowphreak
12-28-2008, 04:26 AM
Depends on what kind of throughput you can get.
not5150
12-28-2008, 05:59 AM
Hmmmm... good question. If the throughput were good, it would probably work out.
What kind of NAS are you thinking about? I've got a Dell MD3000 with 16 TB of space at work. Maybe I'll secretly throw WoW on it and try it.
moosejaw
12-28-2008, 06:04 AM
I know it sounds like a great idea, and I think a few people on here have tried it, but it will just add to your client-side load times if it is not perfect.
I dabbled with it a bit, albeit with less-than-ideal hardware on a 1 Gb connection, and I was not happy with the results. It worked okay with some BC content, but I upgraded to a single machine before I could really load test it against a very crowded weekend Shattrath.
It is not a bad idea for consolidating everything, efficiency-wise, if you can live with the increased load times in those really busy areas.
By all means, if you have the hardware at your disposal, test it out. Your results may differ.
Good luck.
Zzyzxx71
12-29-2008, 01:53 PM
First question that comes to mind is "For god's sake, why???"
not5150
12-29-2008, 11:11 PM
First question that comes to mind is "For god's sake, why???"
You never ask a geek why... bad things will happen :)
I'm guessing the OP works in a data center like I do. I've got lots of new toys to play with now.
not5150
12-30-2008, 04:41 AM
What's the phrase... asking for forgiveness is easier than asking for permission?
sonic777
12-31-2008, 07:20 AM
With this idea, I would say leave it alone unless you really just want to try it. You would have two VLAN-segmented NICs in each of your systems. To get close to the speeds you would want, you would set up one VLAN on your switch (controlled by your router) for the NAS traffic and another for internet traffic. That way you don't have to worry about packet collisions and resends to the WoW servers or to your NAS.
leukos
01-02-2009, 02:29 AM
At the risk of running a cost analysis for a five-box hardware multi-boxer: you are probably better off just picking up some hard drives, especially if you wanted to try to do anything that might come close to falling under a "best practice" or "manufacturer supported/recommended" configuration. I'm also not sure what would be gained by running WoW off a NAS if the machines running WoW were not booted off that NAS as well. That is, diskless boot, with no hard drives in any of the machines running WoW, which means we are now blurring the traditional line between NAS and SAN.
Unfortunately, diskless boot (also called Boot From SAN) isn't an easy or cheap thing to accomplish.
In the server world (read: datacenter) we could use a Fibre Channel Host Bus Adaptor that connects up to a rack full of hard drives and conveniently presents the computer with what looks like a local disk. Most Fibre Channel HBAs are more expensive than an entire computer that could run WoW, not to mention the per-port cost of a Fibre Channel drop. I think I could build a computer that would run a WoW slave for less than the cost of the Fibre Channel switch port without trying too hard.
On the cheaper side we have iSCSI Host Bus Adaptors (HBAs) that would use a normal 1 gigabit Ethernet switch port. Again, I think I could build a computer for less than most iSCSI HBAs cost; they aren't cheap.
Which means we would want to look for a software solution, and fortunately one does exist. This software, usually downloaded to the computer using a PXE boot (or put on a USB key), connects to an iSCSI target (the NAS) and boots the computer. The cost of emBoot (now Doubletake) winBoot/i plus the cost of an Intel server NIC that supports iSCSI booting would be, rounding down, about $100 per computer.
It is also worth mentioning that Intel and Broadcom have released server network adaptors that have this type of software built into the card. The Intel version is called iSCSI Remote Boot, and the card will set you back about $100. For those still reading: go track down what a small hard drive costs.
With all that aside, I think there could be some interesting solutions through the use of open source/free software. These solutions do require more time than the others, but if you were just going to spend that time playing WoW, you are probably ahead of the game spending it here instead. I've looked at just this problem off and on over the past year, and the pieces are starting to come together.
Call it a NAS, SAN, or file server: if a dedicated computer were set up to host the disks that boot the other computers and to store data files (the home file server), I think the cost could be worth it. Sun's OpenSolaris platform seems to fit this purpose very well. The appropriate technology buzzwords to look up are ZFS, L2ARC, and COMSTAR (specifically the iSCSI target, although the SAS target does look interesting). MLC-based solid state disks are rapidly getting cheaper and lend themselves to use as an L2ARC. Someone setting this up would probably also want to look at using a single volume for the base operating system image and running the slaves off snapshots of that base volume.
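As a rough illustration of that last point - a sketch, not a tested recipe - the base-image-plus-clones idea boils down to a handful of zfs commands. A minimal Python wrapper might look like the following, with the pool name, volume size, and slave count all made-up placeholders (each clone would still need to be exported as an iSCSI LUN through COMSTAR, which is omitted here):

#!/usr/bin/env python3
# Sketch only: one "gold" zvol holds the installed OS + WoW image, and each
# slave boots from a thin clone of a snapshot of it. All names are hypothetical.
import subprocess

POOL = "tank"               # assumed zpool name
BASE = f"{POOL}/wowbase"    # zvol that gets the master Windows + WoW install
SNAP = f"{BASE}@gold"       # snapshot taken once the master image is ready

def zfs(*args):
    """Run a zfs(1M) command, raising if it fails."""
    subprocess.run(["zfs", *args], check=True)

def create_base(size="40G"):
    # Block volume that gets exported over iSCSI and installed to exactly once.
    zfs("create", "-V", size, BASE)

def clone_for_slave(n):
    # Clones share unchanged blocks with the base, so five slaves cost little
    # extra disk and hit the same cached blocks on the server.
    zfs("clone", SNAP, f"{POOL}/slave{n}")

if __name__ == "__main__":
    create_base()
    # ... install the base OS and WoW to the exported LUN, then:
    zfs("snapshot", SNAP)
    for i in range(1, 6):   # five-boxer: slave1 .. slave5
        clone_for_slave(i)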
The gPXE (http://www.etherboot.org) project has created an open source solution to bootstrap a computer for an iSCSI boot and hand that information off to a supported operating system (Windows Vista, Windows 2008, Windows 2003). For those experimenting with getting this running, install it and boot off a USB key first before trying to do PXE chainloading. MSDN (the Microsoft Developer Network) is your friend here; start experimenting with Windows 2008, as it supports installing directly to an iSCSI target. With Vista you will have to install to a hard drive first, then image it up to your iSCSI server.
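On the gPXE side, the per-machine boot script is tiny - essentially a DHCP line and a sanboot line pointing at the iSCSI target. Purely as a hypothetical sketch (the server address and IQN naming scheme are invented, and the exact root-path format should be checked against the gPXE documentation), something like this could stamp out one script per slave:

# Hypothetical sketch: write one small gPXE script per slave so each machine
# chainloads into its own iSCSI boot LUN. All names/addresses below are
# placeholders for whatever your NAS and targets actually use.
NAS_IP = "192.168.1.10"              # assumed address of the iSCSI target box
IQN_BASE = "iqn.2009-01.lan.home"    # assumed target naming prefix

TEMPLATE = """#!gpxe
dhcp net0
sanboot iscsi:{ip}::::{iqn}:{host}
"""

def write_script(host):
    with open(f"{host}.gpxe", "w") as f:
        f.write(TEMPLATE.format(ip=NAS_IP, iqn=IQN_BASE, host=host))

for n in range(1, 6):
    write_script(f"slave{n}")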
Also, don't install Vista with the SATA drives in AHCI mode if you are going to use it for an image. All attempts I've made just result in a blue screen when Vista loads up - interestingly, right when the AHCI driver is loading.
Make sure your network switch ports support 1 gigabit Ethernet (no 100 Mbps Ethernet for this experiment), and you might want to try to track down an old switch that supports Link Aggregation Control Protocol (LACP) and run multiple 1 Gb Ethernet lines to the NAS/SAN/file server. I'd start with two links, then move to three if necessary.
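The rough arithmetic behind "two links, maybe three": one gigabit link tops out around 110-120 MB/s of payload, while several clients all zoning in at once can each want tens of MB/s. A back-of-envelope sketch (the per-client figure is an assumption, and real iSCSI throughput will come in lower):

# Back-of-envelope link sizing; the numbers are assumptions, not measurements.
GBE_PAYLOAD_MBS = 115     # rough usable MB/s on one 1 GbE link
CLIENTS = 5               # five-boxer
PER_CLIENT_MBS = 50       # assumed per-client read rate during a zone-in

def links_needed(clients=CLIENTS, per_client=PER_CLIENT_MBS):
    demand = clients * per_client            # worst case: everyone zoning at once
    links = -(-demand // GBE_PAYLOAD_MBS)    # ceiling division
    return demand, links

demand, links = links_needed()
print(f"{demand} MB/s aggregate -> about {links} x 1 GbE links "
      f"(ignoring what the disks can actually feed)")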
Without actually sitting down and running the cost numbers, the boot-from-SAN, run-WoW-off-a-NAS/SAN concept probably makes more sense with more than five computers than it does with fewer. I don't really get the feeling it is worth it at five unless that SAN/NAS is used for several other purposes (booting media centers and other computers in the house, serving as a general file server, etc.).
Yo-Yo Freak
01-02-2009, 03:10 AM
At the risk of using a cost-analysis for a five-box hardware multi-boxer... etc.
After getting about halfway through your post I was completely lost (no offense, it's not your fault, it's all mine lol). I kept reading anyway, just because what this thread is talking about is extremely interesting; I have never heard of this kind of thing before. I don't know, it seems like kind of a lot of work just to play a computer game... but this is all extremely interesting. It also seems a lot more complicated than it probably is. It also seems (if I understood Leukos's post correctly) like this kind of technology is a bit ahead of its time to be very cost-effective, at least for any kind of personal use. If I understand how all this works, though, I could see how it would save money for a big company using a lot of computers.
Again, this is all very, very interesting, but it seems a little more advanced than most current technology, and because of that it isn't very cost-effective at all. It also seems like an extremely large amount of work just to play a video game. But that's just my two cents lol.
~YYF
turbonapkin
01-02-2009, 03:30 AM
When we are considering the common domestic form of network attached storage, which is becoming more prevalent in households every day, are we not simply adding further processing between the source data and memory, in the form of network- and application-level protocols, and therefore inviting increased latency and new potential bottlenecks?
Considering the intensely bursty nature of WoW's read requests, my gut feeling would be to stick with direct attached storage, but I would be very interested to see how Joe Bloe's Acme NAS handles five instances of WoW loading into Dalaran. Gigabit would be a must, I reckon.
Ellusionist
01-02-2009, 01:15 PM
When we are considering the common domestic form of network attached storage, which is becoming more prevalent in households every day, are we not simply adding further processing between the source data and memory, in the form of network- and application-level protocols, and therefore inviting increased latency and new potential bottlenecks?
Considering the intensely bursty nature of WoW's read requests, my gut feeling would be to stick with direct attached storage, but I would be very interested to see how Joe Bloe's Acme NAS handles five instances of WoW loading into Dalaran. Gigabit would be a must, I reckon.
Household NAS =/= Datacenter NAS
Sure, they do the same thing, but they aren't even remotely comparable performance-wise.
This is very, very true. Most enterprise-level NAS equipment has 2+ gigabit ports onboard. Let me rephrase that: the better-performing units that would work for this application would need dual Ethernet ports. The rack-mountable units I've seen cost $6,000. :(
I would say it's almost not worth it, but that depends on one's annual income. We do play (and pay for) multiple copies of World of Warcraft, y'know. :D We're a little crazy at least.
A cheaper way to do it would be using a Dell PowerEdge server. Some of those have dual ports, also. You can find those for way less than $1,000.00 on fleaBay.
turbonapkin
01-02-2009, 01:39 PM
...common domestic form of network attached storage...
Household NAS =/= Datacenter NAS
Sure, they do the same thing, but they aren't even remotely comparable performance-wise.
I'm sure you are right; however, I was referring specifically to the layman category, i.e. the 99.9% of people here who might consider dropping a couple hundred bucks on a domestic NAS device. Looking at enterprise solutions seems a moot point to me, unless you happen to have access to some juicy kit on a permanent basis!
leukos
01-05-2009, 02:26 AM
I've managed to secure some additional hardware toward the end of this week/next week to gather some more information on the concept of using an iSCSI target to store both the boot disk (a "Boot From SAN" configuration) and the Warcraft installation for multiple clients. I'm not expecting any surprises for the boot disk portion (there really isn't anything new there), but I do want to verify that "good-enough" performance can be achieved for the time it takes to zone in (screen loading) in Warcraft. This isn't about breaking any speed records - it won't.
The "NAS" will be a box running OpenSolaris 2008.11 using the COMSTAR iSCSI target, configured with two or three 7500 RPM hard drives. The "NAS" will have between 4 and 8 GB of RAM. At this point I don't know if I am going to have a spare SSD to serve as an L2ARC, but I don't think it will really be necessary. I'm only going to step the "NAS" box up to multiple 1 Gb Ethernet connections if initial testing shows it is required, and I will probably be looking at multipathing support in the COMSTAR iSCSI target and Microsoft MPIO before an LACP connection between the NAS and the switch. The Ethernet switch has jumbo frames enabled. I think this configuration represents about as good as a "home NAS/SAN" can get without spending an unreasonable amount of money (for my values of unreasonable).
The performance counters I am interested in, and will be logging, on the clients include:
Disk sec/Read
Disk sec/Write
Disk writes/sec
Disk reads/sec
Disk read bytes/sec
Disk write bytes/sec
various CPU and interrupt counters
Network read bytes/sec
Network write bytes/sec
Network packets incoming/sec
Network packets outgoing/sec
I may also monitor similar counters on the "NAS"; I haven't decided yet how much time I want to devote to this.
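For anyone who wants to grab roughly equivalent numbers on a client without setting up Perfmon counter logs, here is a quick-and-dirty sketch using the third-party psutil library (psutil is an assumption on my part; logman/typeperf against the counters listed above is the proper route, and the Disk sec/Read latency counters aren't covered here):

# Quick-and-dirty client-side sampler using psutil (third-party: pip install psutil).
# Reports machine-wide disk/network rates once a second during a zone-in.
import time
import psutil

def sample(interval=1.0):
    d0, n0 = psutil.disk_io_counters(), psutil.net_io_counters()
    time.sleep(interval)
    d1, n1 = psutil.disk_io_counters(), psutil.net_io_counters()
    dt = interval
    return {
        "disk reads/sec":       (d1.read_count   - d0.read_count)   / dt,
        "disk writes/sec":      (d1.write_count  - d0.write_count)  / dt,
        "disk read bytes/sec":  (d1.read_bytes   - d0.read_bytes)   / dt,
        "disk write bytes/sec": (d1.write_bytes  - d0.write_bytes)  / dt,
        "net recv bytes/sec":   (n1.bytes_recv   - n0.bytes_recv)   / dt,
        "net sent bytes/sec":   (n1.bytes_sent   - n0.bytes_sent)   / dt,
        "net packets in/sec":   (n1.packets_recv - n0.packets_recv) / dt,
        "net packets out/sec":  (n1.packets_sent - n0.packets_sent) / dt,
    }

if __name__ == "__main__":
    while True:                      # run during a zone-in; Ctrl+C to stop
        for name, value in sample().items():
            print(f"{name:22s}{value:14,.0f}")
        print("-" * 36)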
Keep in mind, I am not looking at this setup for the "fastest" configuration, but for a good-enough configuration. I already know Warcraft will push my single 7200 RPM SATA drive to 50-60 MB/s on an initial zone-in.
If you have any interest in this concept and would like certain easily obtainable information while I have the hardware set up, please let me know. Please be aware that "best practice" and "manufacturer's recommended configuration" decided to take an extended lunch break for this configuration.
Ellusionist
01-06-2009, 01:32 AM
Just seems like a lot of work to me.
There are "conventional" multiboxing methods (just one box) that work fine. No need to reinvent the wheel.
My $0.02:
1) I don't see the need for 4-8 GB of RAM in a NAS. That's like putting 1 GB of RAM in a printer. It's just not needed.
2) If you're going SCSI, why not go 10,000 RPM? The extra 300 RPM you're gaining will definitely make your project worth zilch. (7200 RPM conventional SATA [non-NAS] --> 7500 RPM [NAS])
leukos
01-06-2009, 03:24 AM
Just seems like a lot of work to me
1) I don't see the need for 4-8 GB of RAM in a NAS. That's like putting 1 GB of RAM in a printer. It's just not needed.
2) If you're going SCSI, why not go 10,000 RPM? The extra 300 RPM you're gaining will definitely make your project worth zilch. (7200 RPM conventional SATA [non-NAS] --> 7500 RPM [NAS])
In my case, my existing home file server fills several purposes: backup location for Time Machine for a couple of Macs, a Vista box, a couple of Linux servers, and one lone Windows 2003 server (don't ask); central storage for source control and other files; central media library; and central location for a tape drive. I greatly enjoy the flexibility of being able to have a computer die for no apparent reason and lose no data.
The purpose of that RAM is to keep recently read disk blocks in memory so a future request is handled from memory instead of a read from disk. Multiple machines requesting the same data blocks during operating system boot-up, and the same .MPQ files, could then be serviced from memory instead of disk. While this low-latency/high-bandwidth working set matters less with a single 1 Gb Ethernet connection, there is a more important driving factor - we're a community of luxuries, and the smallest RAM module I have is 2 GB. :D
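To make the shared-working-set point concrete, here is a toy model (nothing like ZFS's real ARC, and the block counts are invented) of why the disks mostly stop being touched once the first client has pulled the common blocks into server RAM:

# Toy LRU cache, not ZFS's ARC: once client #1 has warmed the cache with the
# shared OS/.MPQ blocks, the remaining clients are served from RAM.
from collections import OrderedDict

class BlockCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.hits = self.disk_reads = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)      # LRU bookkeeping
            self.hits += 1
        else:
            self.disk_reads += 1                   # would hit the spindles
            self.blocks[block_id] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)    # evict least recently used

cache = BlockCache(capacity_blocks=32_000)   # ~4 GB of 128 KB records (invented)
shared_blocks = range(20_000)                # pretend zone-in working set (invented)
for client in range(5):                      # five slaves zoning in, one after another
    for b in shared_blocks:
        cache.read(b)
print(f"served from RAM: {cache.hits}, disk reads: {cache.disk_reads}")
# -> only the first pass touches disk; the other four are served from memory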
The 7500 RPM drive was a typo; it should have read 7200 RPM - it was late, and still is.
Ozbert
01-06-2009, 08:18 AM
I have a NetGear ReadyNAS Duo at home and I wouldn't dream of trying to multibox from it. It's just too slow compared with local disks.
When copying a large file to it, I get about 25 MB/sec transfer over a gigabit LAN, which is probably about one third of the typical speed of a SATA-150 hard disk. Access times seem pretty slow too. I expect that having multiple copies of WoW trying to read data from a NAS would be horrendously slow and laggy.
Mukade
01-17-2009, 08:27 PM
A cheaper way to do it would be using a Dell PowerEdge server. Some of those have dual ports, also. You can find those for way less than $1,000.00 on fleaBay.
XD
I've actually got a PE1950 sitting under my couch. It never made it to colo; once I started playing WoW, I gave up on the idea of running an FPS game server.
I'm sure that with a full set of 8 SSDs in RAID 0 hooked up to its PERC 5/i it would make one hell of a NAS, and it would easily saturate the two gigabit ports on my 780i-based desktop, as I was lucky enough to get the TCP offload key with it (so it basically has something like a Killer NIC in it doing all the network processing).
The only problem is, it's a bit loud.