The reason I keep asking is that ISBoxer, by default, runs all of your game clients at the same resolution. This increases the load on your hardware, but it also lets you take advantage of 1:1 mouse broadcasting, as well as other features of the software. Other software solutions don't do this unless you set up each region of your layout to be exactly the same size. However, if you're worried about the impact on your machine, you can easily disable this in ISBoxer at the cost of losing some features.
Sure, but again, if you're going to use ISBoxer, you won't be able to use some (one?) of its features across both machines. If you had multiple GPUs (you said you weren't going to, but I'm mentioning it anyway), you could always split the load between the two GPUs on a single machine, or, if you stuck with nVidia, use SLI in windowed mode to help boost performance.
Nothing comes to mind, but I've personally never bothered with a multi-computer setup, and, in my opinion, such a setup is a hurdle in itself when you're trying to keep things like your UI in sync across two machines.
I've always used a software KVM (keyboard/mouse sharing) tool like Input Director, Synergy, or Mouse w/o Borders.
I can understand that. I also have two machines right next to each other at the moment, with the possibility of adding a third to help with my video editing, but I still only ever want to multibox on one machine. :)
In my opinion, unless you're comparing a "generation" of a certain architecture to the same "generation" of another architecture, the comparison is slightly flawed. For instance, the 700 series (minus the 750 Ti) was all built on the Kepler architecture, whereas the 900 series is on the entirely different Maxwell architecture, which was praised for saving power compared to its predecessor. You also have to look at the GPU's power connectors, which dictate how much power it can pull. GPUs with 2x 6-pin connectors (the first-generation 900 series) only use 165W at most, whereas the more powerful GPUs tend to use either a 6+8-pin or 8+8-pin (or more) configuration and will draw upwards of 250W (the second-generation 900 series and the 700 series).
I'm pretty sure it's safe to assume that each new (nVidia) architecture is going to use a little less power than the prior one, but only during the first generation on that architecture. For example, the second generation of Maxwell is the GM200 chip, currently used in the Titan X and the soon-to-be-released 980 Ti. Both of those will use more power than the first-generation Maxwell GTX 980/970/etc., and will be on par with the power requirements of the second-generation Kepler cards (780 Ti, Titan Black), because they use the 6+8-pin configuration.
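The connector math above can be sketched out quickly. The per-connector ceilings used here are the standard PCIe limits (75W from the slot, 75W per 6-pin, 150W per 8-pin), which is my assumption, not an nVidia-specific figure; a card's rated TDP usually sits below this spec ceiling:

```python
# Assumed PCIe spec power limits (not card-specific figures):
PCIE_SLOT_W = 75   # power delivered through the slot itself
PIN6_W = 75        # per 6-pin PCIe power connector
PIN8_W = 150       # per 8-pin PCIe power connector

def max_board_power(six_pin: int = 0, eight_pin: int = 0) -> int:
    """Spec ceiling on what a card with these connectors can draw."""
    return PCIE_SLOT_W + six_pin * PIN6_W + eight_pin * PIN8_W

# 2x 6-pin (e.g. a reference GTX 980): spec allows up to 225W,
# even though the card's rated TDP (165W) is well under that.
print(max_board_power(six_pin=2))               # 225
# 6+8-pin (780 Ti, Titan X class): spec allows up to 300W.
print(max_board_power(six_pin=1, eight_pin=1))  # 300
```

This is why the connector layout is a decent proxy for the card's power class: a 2x 6-pin card simply can't be specced much past 200W, while a 6+8-pin card has headroom for a 250W TDP.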
Disclaimer: For the record, whenever I say 900 series or 700 series, I am only ever talking about x60, x70, x80, or higher GPUs. Anything below an x60 from nVidia is not a gaming GPU in my opinion (no matter how much GDDR they try to sell you with it), and even an x60 GPU is going to force you to cut down video settings in order to reach favorable performance levels while multiboxing.