  1. #1

    Default Bought a new SSD, speed slower than expected

    So I bought a brand new Samsung 850 EVO 500 GB SSD from newegg.com and installed it.

    While speed is good, it is not what I expected. According to the specifications, I should be getting 540 MB/s read and 520 MB/s write.
    Based on the AS SSD benchmark, I am getting 400 MB/s read and a paltry 180 MB/s write.

    I did some research and disabled the following (a quick way to read these settings back is sketched after the list):
    • Indexing
    • Paging
    • Prefetch
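    For reference, here is a rough way to confirm those three tweaks actually took effect by reading the settings back. The registry paths and the WSearch service name below are the usual Windows defaults, so treat it as a sketch rather than something verified on this exact machine:
    Code:
# Rough sanity check (Windows only) that the indexing/paging/prefetch tweaks took effect.
# Registry paths and the WSearch service name are the usual defaults; adjust if yours differ.
import subprocess
import winreg

MM_KEY = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"

def read_value(subkey, name):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None

# 0 means the prefetcher is disabled.
print("EnablePrefetcher:", read_value(MM_KEY + r"\PrefetchParameters", "EnablePrefetcher"))

# An empty PagingFiles value means no page file is configured.
print("PagingFiles:", read_value(MM_KEY, "PagingFiles"))

# Windows Search drives the file indexer; STOPPED here means indexing is off.
print(subprocess.run(["sc", "query", "WSearch"], capture_output=True, text=True).stdout)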


    I also ensured it was connected to a SATA III 6.0 Gbps port on my motherboard.


    I also used the Samsung Magician tool, which comes with its own performance optimization and benchmark features. According to Samsung's own tool, I am getting the same results as in AS SSD.

    So does anyone have any ideas why my performance seems so low?
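    If anyone wants a second opinion next to AS SSD and Magician, a crude sequential-throughput check can be scripted. This is only a sketch: the test path is hypothetical, the writes are fsynced, and the read figure will be optimistic unless the OS cache is cold (reboot first, or use a test file larger than RAM):
    Code:
# Crude sequential throughput check for a drive mounted at TARGET.
# Writes then reads one large file; the read number is inflated by the OS
# page cache unless the cache is cold.
import os
import time

TARGET = r"D:\ssd_test.bin"   # hypothetical path on the new SSD
CHUNK = 4 * 1024 * 1024       # 4 MiB blocks
TOTAL = 1024 ** 3             # 1 GiB test file
block = os.urandom(CHUNK)

start = time.perf_counter()
with open(TARGET, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())      # make sure the data actually hit the drive
write_s = time.perf_counter() - start

start = time.perf_counter()
with open(TARGET, "rb") as f:
    while f.read(CHUNK):
        pass
read_s = time.perf_counter() - start

os.remove(TARGET)
print(f"write: {TOTAL / write_s / 1e6:.0f} MB/s   read: {TOTAL / read_s / 1e6:.0f} MB/s")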
    Last edited by daanji : 06-13-2015 at 01:42 AM

  2. #2
    Member JohnGabriel's Avatar
    Join Date
    Oct 2008
    Location
    Seattle Washington, USA
    Posts
    2272

    Default

    http://ssd.userbenchmark.com/Compare...0GB/2385vs2977

    It puts the average user benchmark (for the 256 GB model) at 486 MB/s read and 419 MB/s write.

    My very limited research turned up great things, so I ordered four of the 256 GB Samsung 850 EVO drives, two per RAID 0 array. The EVO is almost half the price of the Pro, but if I run into the same problems as you, I am going to regret not getting the Pro.

  3. #3

    Default

    Cool site. I ran UserBenchmark on my machine; here is the report:

    http://www.userbenchmark.com/UserRun/268449

  4. #4

    Default

    So I did some experimenting and research.


    My motherboard, an ASUS Sabertooth X79, has 8 SATA ports:

    2x SATA III 6.0 Gbps (Intel)
    2x SATA III 6.0 Gbps (Marvell)
    4x SATA II 3.0 Gbps (Intel)

    The two Intel SATA III ports are taken up by my RAID 0 SSD array, and they get great performance (~1 GB/s read/write).

    Now, my new SSD is just a single drive. When connected to the Marvell ports, it gets 400 MB/s read and a paltry 180 MB/s write.
    When connected to the Intel 3.0 Gbps ports, it gets 280/260 MB/s respectively.

    Based on what I've read, the Marvell SATA III chips have terrible performance, and that appears to be true.
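    For anyone on Linux who wants to check the same thing, one rough way to see which controller a drive hangs off is to resolve its sysfs path and ask lspci about the PCI function in it. This sketch assumes pciutils (lspci) is installed and a /dev/sdX-style device name; the default device name is just an example:
    Code:
# Linux-only sketch: map a block device to the PCI SATA controller it sits on.
# Assumes lspci (pciutils) is installed.
import os
import re
import subprocess
import sys

dev = sys.argv[1] if len(sys.argv) > 1 else "sdb"   # example device name

# /sys/block/sdb resolves to a path containing the controller's PCI address,
# e.g. .../pci0000:00/0000:00:1f.2/ata3/...
syspath = os.path.realpath(f"/sys/block/{dev}")
pci_addrs = re.findall(r"[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]", syspath)
if not pci_addrs:
    sys.exit(f"{dev}: no PCI address found in {syspath}")

# The last PCI function in the path is the SATA/AHCI controller itself.
out = subprocess.run(["lspci", "-s", pci_addrs[-1]], capture_output=True, text=True)
print(out.stdout.strip())   # e.g. "00:1f.2 SATA controller: Intel ... (AHCI mode)"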
    Last edited by MiRai : 06-13-2015 at 04:36 PM Reason: Formatting

  5. #5
    Multiboxologist MiRai's Avatar
    Join Date
    Apr 2009
    Location
    Winter Is Coming
    Posts
    6815

    Default

    Two things come to mind...

    1) Are you using a cable rated for SATA III speeds? I've heard that all cables are created equal, but I've also heard they aren't, so I have no idea (and I've never seen a benchmark for this). I personally only use modern cables, and I got rid of all the old ones I had, which may or may not have worked well with SATA III.

    2) Is it plugged into one of the several Intel chipset ports? If not, then you could easily see reduced speeds by using a third-party port. I've never had good results from the third-party ports on my motherboards; in fact, I usually just disable those ports altogether in the BIOS so that they don't even attempt to come up at boot (unless I really, really need to plug in another SSD).

    EDIT: Apparently you figured it out while I was replying.
    Do not send me a PM if what you want to talk about isn't absolutely private.
    Ask your questions on the forum where others can also benefit from the information.

    Author of the almost unknown and heavily neglected blog: Multiboxology

  6. #6
    Member Ughmahedhurtz's Avatar
    Join Date
    Jul 2007
    Location
    North of The Wall, South of The Line
    Posts
    7169

    Default

    Benchmarks that give you a single number are almost entirely pointless unless that number is a "relative score factor" based on testing a bunch of drives on the exact same system under the exact same test loads.

    [Attached image: HDD_SDD_comparison_8-11-2014.jpg]

    That's on an aging i7-2600K (LGA1155) reference Intel board with relatively slow DRAM.

    The important thing is: what kind of reads/writes are you going to be doing? Gaming is usually large sequential reads, so you don't suffer as badly from a poor controller, though when it does bite, it tends to show up as odd frame stutters randomly mixed in with otherwise good performance.
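    To make the sequential-vs-random point concrete, here is a quick sketch that times random 4 KiB reads against a full sequential pass over the same file. The file path is hypothetical, and the numbers only mean much if the file is bigger than RAM or the cache is dropped first:
    Code:
# Sketch: random 4 KiB reads vs. a full sequential pass over one large file.
# The gap between the two results is exactly what a single-number benchmark hides.
import os
import random
import time

PATH = "testfile.bin"        # hypothetical: a multi-GiB file on the drive under test
size = os.path.getsize(PATH)

# Random: 20,000 reads of 4 KiB at random 4 KiB-aligned offsets (done first,
# so the later sequential pass does not pre-warm the cache for it).
count = 20_000
start = time.perf_counter()
with open(PATH, "rb") as f:
    for _ in range(count):
        f.seek(random.randrange(size // 4096) * 4096)
        f.read(4096)
rand_iops = count / (time.perf_counter() - start)

# Sequential: read the whole file front to back in 1 MiB chunks.
start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
seq_mbps = size / (time.perf_counter() - start) / 1e6

print(f"sequential: {seq_mbps:.0f} MB/s   random 4K: {rand_iops:.0f} IOPS")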

    Regarding MiRai's comment about SATA cables, in my experience there isn't really a "slow" SATA cable. "Slow" is usually shorthand for SATA cables that are poorly shielded/grounded and use cheap components, which causes SATA errors. You can usually spot these by pulling your system logs and looking for HDD warnings. It'll usually show something like this in the dump (this is from a Linux box's /var/log/messages):
    Code:
    ata1.00: failed command: WRITE FPDMA QUEUED
    ata1.00: cmd 61/08:e8:f1:57:ed/00:00:02:00:00/40 tag 29 ncq 4096 out
             res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
    ata1.00: status: { DRDY }
    ata1.00: failed command: WRITE FPDMA QUEUED
    ata1.00: cmd 61/08:f0:41:59:ed/00:00:02:00:00/40 tag 30 ncq 4096 out
             res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
    ata1.00: status: { DRDY }
    ata1: hard resetting link
    ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
    ata1.00: configured for UDMA/100
    ata1.00: device reported invalid CHS sector 0
    ata1.00: device reported invalid CHS sector 0
    ata1.00: device reported invalid CHS sector 0
    The SMART data can also show errors that may indicate a poor controller or cable:
    Code:
    Error 24 occurred at disk power-on lifetime: 21 hours (0 days + 21 hours)
      When the command that caused the error occurred, the device was active or idle.
    
      After command completion occurred, registers were:
      ER ST SC SN CL CH DH
      -- -- -- -- -- -- --
      84 51 a0 11 00 b8 40  Error: ICRC, ABRT at LBA = 0x00b80011 = 12058641
    
      Commands leading to the command that caused the error were:
      CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
      -- -- -- -- -- -- -- --  ----------------  --------------------
      61 00 a0 11 00 b8 40 08      00:13:44.712  WRITE FPDMA QUEUED
      61 00 98 11 0c b4 40 08      00:13:44.709  WRITE FPDMA QUEUED
      61 00 90 11 08 b4 40 08      00:13:44.706  WRITE FPDMA QUEUED
      61 00 88 11 04 b4 40 08      00:13:44.703  WRITE FPDMA QUEUED
      61 00 80 11 00 b4 40 08      00:13:44.700  WRITE FPDMA QUEUED
    Both of those are from systems that did not like a particular 6 Gbps SSD because it was too fast for the system design. Other cases I've seen looked very similar; it's not deterministic, but I'm including it here as an example of the kind of errors you can look for when diagnosing these failures.
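    As a follow-up, the cable/controller trouble above usually also shows up in SMART attribute 199 (UDMA_CRC_Error_Count), which is easy to poll. A rough sketch, assuming smartmontools is installed and the drive reports that attribute; a raw count that keeps climbing while the drive is in use points at the link, not the flash:
    Code:
# Sketch: pull the SMART attribute most associated with bad cables/controllers.
# Requires smartmontools (smartctl) and usually root/Administrator rights.
import subprocess
import sys

device = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"

out = subprocess.run(["smartctl", "-A", device], capture_output=True, text=True)
for line in out.stdout.splitlines():
    if "UDMA_CRC_Error_Count" in line or "CRC_Error_Count" in line:
        fields = line.split()
        # Attribute rows are "ID NAME ... RAW_VALUE"; the raw value is the error count.
        print(fields[0], fields[1], "raw =", fields[-1])
        break
else:
    print("No CRC error attribute reported by", device)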
    Last edited by Ughmahedhurtz : 06-13-2015 at 03:18 PM
    Now playing: WoW (Garona)

  7. #7
    Member JohnGabriel's Avatar
    Join Date
    Oct 2008
    Location
    Seattle Washington, USA
    Posts
    2272

    Default

    Quote Originally Posted by daanji View Post
    Cool site. I ran UserBenchmark on my machine; here is the report:

    http://www.userbenchmark.com/UserRun/268449
    Did you see the warning about ambient CPU load? It says you were at 23% CPU usage before the test started. Maybe something running in the background was using the SSD?

  8. #8

    Default

    Yeah, I saw that. I had another program running. I've since run it again and got the same results.
