View Full Version : Bought a new SSD, speed slower than expected
daanji
06-13-2015, 01:40 AM
So I bought a brand new Samsung 850 EVO 500 GB SSD from newegg.com and installed it.
While the speed is good, it is not what I expected. According to the specifications, I should be getting 540 MB/s reads and 520 MB/s writes.
Based on the AS SSD benchmark, I am getting 400 MB/s reads and a paltry 180 MB/s writes.
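(Side note for anyone reproducing this: a rough idea of sequential throughput can be scripted in a few lines of Python. This is only a sketch, not what AS SSD actually does; the file name and 1 GiB test size are made up for the example, and OS caching will inflate the read figure, so treat the output as a ballpark.)

import os
import time

# Rough sequential-throughput sketch (NOT what AS SSD does internally).
# Assumptions: testfile.bin lives on the SSD under test; 1 GiB is written
# in 4 MiB chunks. OS caching skews both numbers, so this is a ballpark.
TEST_FILE = "testfile.bin"   # hypothetical path on the drive under test
CHUNK = 4 * 1024 * 1024      # 4 MiB per write/read call
TOTAL = 1024 * 1024 * 1024   # 1 GiB total

buf = os.urandom(CHUNK)

start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # make sure data actually hit the drive
write_secs = time.perf_counter() - start

start = time.perf_counter()
with open(TEST_FILE, "rb") as f:
    while f.read(CHUNK):
        pass
read_secs = time.perf_counter() - start

mib = TOTAL / (1024 * 1024)
print(f"write: {mib / write_secs:.0f} MiB/s")
print(f"read:  {mib / read_secs:.0f} MiB/s")
os.remove(TEST_FILE)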
I did some research and disabled the following:
Indexing
Paging
Prefetch
I also ensured it was connected to a SATA III 6.0 Gbps port on my motherboard.
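(To double-check that the Prefetch tweak actually took, those settings live in the registry; a minimal Python sketch, assuming the standard PrefetchParameters key, where 0 means disabled:)

import winreg

# Read the prefetcher settings; 0 = disabled. Assumes the standard
# Windows registry location for these values.
KEY = (r"SYSTEM\CurrentControlSet\Control\Session Manager"
       r"\Memory Management\PrefetchParameters")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    for name in ("EnablePrefetcher", "EnableSuperfetch"):
        try:
            value, _ = winreg.QueryValueEx(key, name)
            state = "disabled" if value == 0 else "enabled"
            print(f"{name} = {value} ({state})")
        except FileNotFoundError:
            print(f"{name} not set")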
I also used the Samsung Magician tool, which comes with its own optimization and benchmarking utilities. According to Samsung's own tool, I am getting the same results as in AS SSD.
So does anyone else have some ideas why my performance seems so low?
JohnGabriel
06-13-2015, 02:19 AM
http://ssd.userbenchmark.com/Compare/Samsung-850-Pro-256GB-vs-Samsung-850-Evo-250GB/2385vs2977
It puts the average user benchmark (for the 256 GB model) at 486 MB/s read and 419 MB/s write.
After my very limited research turned up great things, I ordered four of the 256 GB Samsung 850 Evo series, for two drives in RAID 0. The Evo is almost half the price of the Pro, but if I run into the same problems as you I am going to regret not getting the Pro.
daanji
06-13-2015, 05:06 AM
Cool site; I ran UserBenchmark on my machine. Here is the report:
http://www.userbenchmark.com/UserRun/268449
daanji
06-13-2015, 08:28 AM
So did some experimenting and research.
My motherboard, an ASUS Sabertooth X79, has eight SATA ports:
2x SATA III 6.0 Gbps (Intel)
2x SATA III 6.0 Gbps (Marvell)
4x SATA II 3.0 Gbps (Intel)
The two Intel SATA III ports are taken up by my RAID 0 SSDs, and they get great performance (~1 GB/s read/write).
Now, my new SSD is just a single drive. When connected to a Marvell port, it gets 400 MB/s reads and a paltry 180 MB/s writes.
When connected to one of the Intel 3.0 Gbps ports, it gets 280 MB/s reads and 260 MB/s writes.
Based on what I read, the Marvell SATA III chips have terrible performance, and that appears to be true. :(
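(The SATA II numbers make sense against the raw link limit, too. SATA uses 8b/10b encoding, so 10 bits on the wire carry 8 bits of data; the back-of-the-envelope math in Python:)

# SATA link ceilings: with 8b/10b encoding, 10 line bits carry one byte,
# so divide the raw gigabit rate by 10 to get the payload ceiling in MB/s.
for name, gbps in (("SATA II", 3.0), ("SATA III", 6.0)):
    ceiling = gbps * 1000 / 10
    print(f"{name}: {gbps} Gbps link -> ~{ceiling:.0f} MB/s max payload")

# SATA II:  ~300 MB/s ceiling, so 280/260 on those ports is near the cap.
# SATA III: ~600 MB/s ceiling, so the Marvell result is the controller's
# fault, not the link's.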
MiRai
06-13-2015, 08:39 AM
Two things come to mind...
1) Are you using a cable rated for SATA III speeds? I've heard that all cables are created equal, but I've also heard they aren't, so I have no idea (and I've never seen a benchmark for this). I personally only use modern cables, and I got rid of all the old ones I had, which may or may not have worked well with SATA III.
2) Is it plugged into one of the several Intel chipset ports? If not, you could easily see reduced speeds from a third-party port. I've never had good results from the third-party ports on my motherboards; in fact, I usually just disable those ports altogether in the BIOS so that they don't even attempt to initialize at boot (unless I really, really need to plug in another SSD).
EDIT: Apparently you figured it out while I was replying.
Ughmahedhurtz
06-13-2015, 03:15 PM
Benchmarks that give you a single number are almost entirely pointless unless it's a "relative score factor" based on testing a bunch of them on the exact same system under the exact same test loads.
[attachment 1460: benchmark screenshot]
That's on an aging i7-2600K (LGA1155) reference Intel board with relatively slow DRAM.
The important thing is: what kind of reads/writes are you going to be doing? Gaming is usually large sequential reads, so you don't suffer as badly from poor controllers, though a poor controller does tend to show up as odd frame stutters randomly mixed in with good performance.
Regarding MiRai's comment about SATA cables, in my experience there isn't really a "slow" SATA cable. "Slow" is a misnomer for cables that are poorly shielded/grounded and use cheap components, resulting in SATA errors. You can usually find these by pulling your system logs and looking for HDD warnings. It'll usually show something like this in the dump (this is from a Linux box's /var/log/messages):
ata1.00: failed command: WRITE FPDMA QUEUED
ata1.00: cmd 61/08:e8:f1:57:ed/00:00:02:00:00/40 tag 29 ncq 4096 out
res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata1.00: status: { DRDY }
ata1.00: failed command: WRITE FPDMA QUEUED
ata1.00: cmd 61/08:f0:41:59:ed/00:00:02:00:00/40 tag 30 ncq 4096 out
res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
ata1.00: status: { DRDY }
ata1: hard resetting link
ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 320)
ata1.00: configured for UDMA/100
ata1.00: device reported invalid CHS sector 0
ata1.00: device reported invalid CHS sector 0
ata1.00: device reported invalid CHS sector 0
The SMART data can also show errors that may indicate a poor controller or cable:
Error 24 occurred at disk power-on lifetime: 21 hours (0 days + 21 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
84 51 a0 11 00 b8 40 Error: ICRC, ABRT at LBA = 0x00b80011 = 12058641
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
61 00 a0 11 00 b8 40 08 00:13:44.712 WRITE FPDMA QUEUED
61 00 98 11 0c b4 40 08 00:13:44.709 WRITE FPDMA QUEUED
61 00 90 11 08 b4 40 08 00:13:44.706 WRITE FPDMA QUEUED
61 00 88 11 04 b4 40 08 00:13:44.703 WRITE FPDMA QUEUED
61 00 80 11 00 b4 40 08 00:13:44.700 WRITE FPDMA QUEUED
Both of those are from systems that did not like a particular 6 Gbps SSD because it was too fast for the system design. Some other cases looked very similar; it's not fully deterministic, but it's included here as an example of where you can see the kinds of errors that point at a cable or controller problem. ;)
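(If you want to automate that kind of check, a rough Python sketch is below. The log path and device name are assumptions for a typical Linux box; on systemd distros you'd pull "journalctl -k" instead, and smartctl comes from the smartmontools package.)

import re
import subprocess

# Scan the kernel log for the ATA failures shown above, then dump the
# drive's SMART error log. /var/log/messages and /dev/sda are assumptions.
LOG = "/var/log/messages"
PATTERN = re.compile(r"ata\d+(\.\d+)?: (failed command|hard resetting link)")

with open(LOG, errors="replace") as f:
    hits = [line.rstrip() for line in f if PATTERN.search(line)]

print(f"{len(hits)} suspicious ATA log lines")
for line in hits[-10:]:   # show the most recent few
    print(" ", line)

# "smartctl -l error" prints the SMART error log excerpted above; ICRC
# errors there usually point at a bad cable or a flaky controller.
subprocess.run(["smartctl", "-l", "error", "/dev/sda"], check=False)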
JohnGabriel
06-13-2015, 03:32 PM
Cool site; I ran UserBenchmark on my machine. Here is the report:
http://www.userbenchmark.com/UserRun/268449
Did you see the warning about ambient CPU load? It says you were at 23% CPU usage before the test started. Maybe something running in the background is using the SSD?
daanji
06-13-2015, 05:48 PM
Yeah, I saw that. I had another program running. I've run it again and got the same results.