Showing results 11 to 20 of 33
  1. #11
    Member
    Join Date
    Aug 2008
    Location
    Arkansas
    Posts
    451
    Blog Entries
    2


    Yeah John, reviews of the Founders and board partner cards are starting to surface.

    https://www.youtube.com/watch?v=Ce7-lv6pTv4

    Here's a good one of the FE 1070, with an SLI'd MSI version coming shortly.

    It all looks very promising. I'm still waiting in the weeds to see how the new AMD cards look; if the 480 is as good as they claim, I suspect I might go Team Red this time around.
    My Blog: SRS Business

  2. #12


    Just got 2 x 1080.
    Waiting for the water blocks to arrive (should be today), and then gonna spend the weekend assembling and testing everything.
    Just in time for Legion.

    Sent from my A0001 using Tapatalk
    Last edited by Pazgaz : 06-30-2016 at 09:48 AM

  3. #13


    Before:
    c4d1414efad916e4fcd5374c9b856d77.jpg

    After:
    46e2d1e1150d7395977465a901d8ce68.jpg


    And for some numbers:
    I can run 5 instances of WoW at 4K resolution, everything set to max except AA. Getting 60 fps on the master and 30 on the slaves, SUPER stable, when running outside killing mobs and stuff.
    I've yet to try Stormshield. Will update.

  4. #14


    Actually that was running on overclocked settings. When not overclocked it does go below 60 fps.

  5. #15


    I read that SLI scaling for the new cards is bad in most cases, due to the drivers not being optimized yet. So you might get better results in the future when updated drivers support SLI on these 10xx cards.

    I'm holding out for 1080 Ti or Titan versions of the 10xx cards.
    Last edited by Lyonheart : 07-06-2016 at 12:26 PM
    Currently 5 Boxing 5 Protection Paladins on Whisperwind Alliance
    The Power of Five!!! ( short video )

  6. #16
    Multiboxologist MiRai's Avatar
    Join Date
    Apr 2009
    Location
    Winter Is Coming
    Posts
    6824


    Quote Originally Posted by Lyonheart View Post
    I read that SLI scaling for the new cards is bad in most cases, due to the drivers not being optimized yet. So you might get better results in the future when updated drivers support SLI on these 10xx cards.
    I was really disappointed in that article when I looked at it, and it seems I'm disappointed in most review sites doing the 2-way SLI 1080 benchmarks, because they're not putting them up against any other meaningful multi-GPU benchmarks.

    I mean... HTF can you review a GPU in SLI without putting it up against any other multi-GPU competition? That just doesn't make any sense to me. TechPowerUp did a very similar thing and put 1080 SLI up against 970 SLI, which is just as weird as not including any other multi-GPU benchmarks at all, and as a bonus, they also have an aftermarket Asus GPU which they're putting up against stock cards of the prior generation. Look how much faster this overclocked, aftermarket 1080 is compared to a stock 980 Ti!

    Again, I'm not saying that the 1080 isn't a nice GPU, but this all just feels like such a weird release to me because it seems like information is being hidden, or skewed. Especially when you look at nVidia's most recent jab at AMD:

    NVIDIA-GeForce-GTX-1060-vs-Radeon-RX-480-performance-1.jpg

    It's convenient that the graph starts at 0.8, but someone from the Anandtech forum fixed the graph.


    Quote Originally Posted by Lyonheart View Post
    Im holding out for 1080TI or Titan versions of 10xx cards.
    There's another thread on Anandtech where people were talking about that article, and I agree with this particular post at the moment. However, I'd also assume a price tag of at least $1200 if a Titan was going to show up anytime soon.

    Titan "P" for "Privilege"
    Do not send me a PM if what you want to talk about isn't absolutely private.
    Ask your questions on the forum where others can also benefit from the information.

    Author of the almost unknown and heavily neglected blog: Multiboxology

  7. #17


    I'm waiting for Paul's Hardware to review the 1060...

  8. #18


    I think it's going to be $1200 for the 12GB version and $1600 for the 16GB version of the Titan.

  9. #19
    Multiboxologist MiRai's Avatar
    Join Date
    Apr 2009
    Location
    Winter Is Coming
    Posts
    6824


    Quote Originally Posted by Pazgaz View Post
    I think it's going to be $1200 for the 12GB version and $1600 for the 16GB version of the Titan.
    Two reasons why I don't believe this to be true...

    1) The market segment for the Titan GPUs is already incredibly small, and splitting it into two even smaller segments would probably hurt profits more than help. I doubt nVidia is going to bother spending extra money to manufacture two different types of ultra-top-end GPUs just to try and price gouge people at $100 per GB of VRAM.

    2) The amount of VRAM on the PCB is determined by the GPU's memory interface:

    GDDR5/X 256-bit = 2/4/8/16/32GB
    GDDR5/X 384-bit = 3/6/12/24/48GB
    HBM2 4096-bit = 4/8/12/16/20/24GB? (I'm guessing here, but it could also be 4/8/16/32GB like we see with 256-bit)
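    For illustration, the capacity lists above can be reproduced with a quick sketch (assumed values: one memory chip per 32-bit GDDR5/X channel, and a set of per-chip densities in gigabits, some of which were hypothetical at the time):

    ```python
    # Enumerate possible VRAM capacities (in GB) for a given memory bus width,
    # assuming one memory chip per channel and per-chip densities in gigabits.
    def capacity_options(bus_bits, chip_bits, densities_gbit):
        chips = bus_bits // chip_bits                    # chips on the PCB
        return [chips * d // 8 for d in densities_gbit]  # gigabits -> GB

    densities = [2, 4, 8, 16, 32]  # per-chip densities in Gb (larger ones hypothetical)
    print(capacity_options(256, 32, densities))  # [2, 4, 8, 16, 32]
    print(capacity_options(384, 32, densities))  # [3, 6, 12, 24, 48]
    ```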

    Now, all of the articles say that nVidia is going to "unveil" this new GPU, but that could mean anything. It could mean only spoken words with a picture on screen with no model or solid release date in sight, or a non-working mock-up model of the GPU, or a fully working GPU... we have no idea.

    However, if nVidia is going to actually unveil a new Titan GPU at GamesCom and announce a release date in the near future, then my guess is that it's going to be G5X memory, and not HBM2. HBM2 isn't even available in the wild at this time, so for nVidia to offer it on a consumer-level GPU is absurd, because they release their GPUs like this: Tesla (Supercomputing/Deep Learning) > Quadro (Workstation) > GeForce (Gaming). I highly doubt that nVidia is going to cut into the super-scarce supply of HBM2 just so they can charge pennies on the dollar for it, when they could easily be making thousands of dollars per GPU in the Tesla or Quadro market.

    So, with that said, if the GPU is going to be using the standard G5X VRAM, then it's either going to be 256-bit or 384-bit, and you can't have both 12GB and 16GB models on the same interface.

  10. #20


    Quote Originally Posted by MiRai View Post
    2) The amount of VRAM on the PCB is determined by the GPU's memory interface:

    GDDR5/X 256-bit = 2/4/8/16/32GB
    GDDR5/X 384-bit = 3/6/12/24/48GB
    HBM2 4096-bit = 4/8/12/16/20/24GB? (I'm guessing here, but it could also be 4/8/16/32GB like we see with 256-bit)
    Err, the GB is determined by the density of the VRAM chips; the interface determines the number of chips.
    GDDR5(X) chips have a 32-bit interface, so you take your GPU interface, divide by 32, and that's the number of VRAM modules you will have on the PCB.
    HBM1 is 1024 bits wide.

    It then becomes a simple bit of math: GPU interface / VRAM chip interface × density / 8.

    Funnily enough, the top end graphics cards tend to use the highest density chips available at the time, which is currently 8Gb (gigabit, not bytes), but it is not mandatory.
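    That bit of math can be sketched like so (illustrative numbers only; the `chips_per_channel` factor is my addition to cover clamshell-style layouts such as the Titan X's 24 modules):

    ```python
    # GPU bus width / chip bus width = number of memory chips; times per-chip
    # density (in gigabits), divided by 8 bits/byte, gives total VRAM in GB.
    def vram_gb(gpu_bus_bits, chip_bus_bits, density_gbit, chips_per_channel=1):
        chips = (gpu_bus_bits // chip_bus_bits) * chips_per_channel
        return chips * density_gbit / 8

    print(vram_gb(384, 32, 8))     # 12.0 GB: 12 x 8Gb GDDR5X on a 384-bit bus
    print(vram_gb(4096, 1024, 8))  # 4.0 GB: 4 x 1GB HBM1 stacks (Fury-style)
    print(vram_gb(384, 32, 4, 2))  # 12.0 GB: 24 x 4Gb modules, two per channel
    ```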

    /e: it is possible to put more VRAM on the board, but this would then require the memory controller to switch between modules, so you would not gain any extra "bandwidth" from doing so, and would probably add extra latency (in fact this is what the GTX 970 does, so its 256-bit interface is just marketing gumpf, because it is not a true 256-bit pipeline throughout the whole shebang).

    /e2: The Titan X (because someone will bring it up) does do the memory-controller switching to access its 24 VRAM modules (it has 4Gb modules). How the internals manage it without significantly impacting performance, who knows, and why the 970 does it so inefficiently in comparison, who knows. My guess is that the Titan X does not switch to accessing a single module at a time but is always addressing 12 modules simultaneously, whereas the 970 goes from addressing 7 modules to 1 module, then back to 7, etc. (although only when it really needs that extra 512MB).
    Last edited by mbox_bob : 07-11-2016 at 08:22 PM Reason: adding some more excrement just to fan the fire :)
