While working on the service architecture for one of our projects, I evaluated several SATA SSDs as candidates for the main data storage. The system will be quite write-intensive, so my main interest is write performance at fill levels close to full capacity.
After some research I picked several candidates (prices are, obviously, current as of the date this post was published):
- Samsung 840 Pro, 512GB, current price $450
- Samsung 840 Evo, 750GB, price $525. I am considering this new SSD because of the availability of high-capacity models, 750GB-1TB
- Intel DC S3500, 480GB, price $650. This model is more expensive in $/GB, as Intel positions these devices for data center usage
- SanDisk Extreme II, 480GB, price $450. This device caught my attention with its good results in some third-party benchmarks
The devices are all attached to an LSI MegaRAID SAS 9260-4i RAID controller with 512MB cache, and configured as individual RAID0 virtual devices.
The testing workload is random writes with a 16KiB block size, issued from 8 threads as asynchronous IO on an ext4 file system. Asynchronous IO is what the most recent MySQL/InnoDB releases use, and in theory it shows the best possible throughput.
For the Samsung 840 Evo I used a 600GiB file size, and 350GiB for the rest, to emulate a 70-80% fill level.
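For anyone who prefers fio (which also comes up in the comments below), roughly the same workload can be expressed as a fio job file. This is a sketch under assumptions: the target directory, per-job size (8 x 44G, approximately 350G total), and queue depth are mine, not from the original benchmark.

```ini
; rndwr.fio -- approximate fio equivalent of the sysbench workload:
; random 16KiB writes, 8 threads, asynchronous (libaio), O_DIRECT, no fsyncs
[global]
directory=/mnt/ssd
rw=randwrite
bs=16k
ioengine=libaio
iodepth=32
direct=1
runtime=600
time_based
group_reporting

[rndwr]
numjobs=8
size=44G
```

Run it with `fio rndwr.fio`; note that fio's `size` is per job, so 8 jobs at 44G each give roughly the same 350G footprint.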
The results in timeline form, to show the stability of the performance:
The results in jitter form, with the median throughput:
My thoughts on these results:
- I frankly expected more from the Samsung devices, but they ended up below 50MiB/sec in random writes
- SanDisk is definitely attractive for absolute throughput, but significant instability is a concern
- Intel shows decent performance, but it is also the most expensive
So I am going to run further tests on Intel and SanDisk to find out which one is more suitable for our needs.
For reference, the script I used for the tests:
```
sz=350G
sysbench --test=fileio --file-total-size=$sz --file-num=64 prepare
sysbench --test=fileio --file-total-size=$sz --file-test-mode=rndwr --max-time=180000 --max-requests=0 --num-threads=8 --rand-init=on --file-num=64 --file-io-mode=async --file-extra-flags=direct --file-fsync-freq=0 --file-block-size=16384 --report-interval=10 run
```
I’m currently waiting for a Crucial M500 960GB. Would you be interested in its test results?
Michael,
Yes, I would like to see the results for Crucial 960GB
Did you set the virtual disk settings to writethrough mode and disable read-ahead (aka “cut-through i/o”)? This could give you better numbers for random write IOPS.
I actually use writeback, not writethrough, to make use of the 512MB cache on the RAID card. I will try writethrough mode. I did not disable read-ahead, as I thought it was irrelevant for a random-write benchmark.
“Cut-through i/o” is a special mode for the virtual disk, and in order to activate it you have to use writethrough and disable read-ahead (it doesn’t matter that you are only testing writes). This is only interesting for SSDs because apparently writing to the disk directly is faster than first writing to the cache and then to the disk.
I read about this in the context of Dell’s H700 controller (which is just a rebranded LSI 92xx controller). When I enabled this mode on our setup of Intel 520s in RAID-10 behind a 9270 controller, random write IOPS (measured with fio) increased by about 20%.
One interesting side effect: since you don’t use the cache at all, you don’t need a battery to back it up, which makes things cheaper. I’m not sure if this is a plus for every workload or just for random writes, and it might very well be that sequential performance suffers. Even then, this is still interesting if you know that your load is mostly random-write heavy.
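For anyone wanting to try this mode on an LSI-based card, the policies can typically be switched with MegaCli. This is a sketch under assumptions: I am using adapter 0 and logical drive 0 here, so check your own numbering with `-LDInfo` first.

```shell
# Switch virtual disk 0 on adapter 0 to "cut-through"-style settings
MegaCli64 -LDSetProp WT -L0 -a0       # writethrough instead of writeback
MegaCli64 -LDSetProp NORA -L0 -a0     # disable read-ahead
MegaCli64 -LDSetProp -Direct -L0 -a0  # bypass the controller cache for IO
# Verify the resulting cache policies
MegaCli64 -LDInfo -L0 -a0
```

On Dell-branded cards the same can usually be done through OpenManage, but the underlying settings are the same.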
I think it would also be very interesting to see how the drives perform behind the controller vs. using a SATA slot on the board. Are we getting much extra benefit from this writeback cache or not?
I forgot to mention that the fio tests I did were synchronous, so I’m not sure if the effect is as important for asynchronous workloads.
Just to comment: I played with different settings (writethrough, no read-ahead, and Direct mode), and I see practically no effect on the results.
Would you mind testing a Corsair Neutron drive? Its random write performance at QD32 seems gorgeous.
@Wesley
All the devices mentioned in this post were bought at Percona’s expense, and I have a quite limited budget, meaning I can’t buy every device available on the market. So I do not think I am going to buy a Corsair Neutron from my budget, but I will happily test one if someone provides it.
>SanDisk is definitely attractive for absolute throughput, but significant instability is a concern
Hello, can you please elaborate on this? Are you referring to performance variations or actual system instability?
@Enrico,
By instability I mean performance variations.
Looking at the SanDisk architecture, it uses two levels of cache, DRAM and SLC (while the data is stored on MLC).
So I think it is this cache handling that contributes to the performance variations.
Hey Vadim,
I might receive some new SSDs, and I can run some tests on them.
How do you produce those nice graphs?
Thanks!
Oh well, I parse the sysbench report with sed into CSV format and then use Excel for the graphs.
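In case it saves someone the guesswork, a one-liner like the following turns the `--report-interval` output into CSV. This is a sketch under an assumption: the exact interval-line format varies between sysbench versions, and I assume lines like `[  10s] reads: 0.00 MB/s writes: 45.23 MB/s ...` here, so adjust the pattern to your output.

```shell
# Sample sysbench fileio interval lines (format assumed, see lead-in)
cat > sysbench.log <<'EOF'
[  10s] reads: 0.00 MB/s writes: 45.23 MB/s fsyncs: 0.00/s response time: 2.31ms (95%)
[  20s] reads: 0.00 MB/s writes: 47.80 MB/s fsyncs: 0.00/s response time: 2.12ms (95%)
EOF
# Extract "elapsed_seconds,write_MBps" pairs into a CSV for charting
sed -n 's/^\[ *\([0-9]*\)s\].*writes: \([0-9.]*\) MB\/s.*/\1,\2/p' sysbench.log > throughput.csv
cat throughput.csv
# throughput.csv now contains: 10,45.23 and 20,47.80
```

From there the CSV opens directly in Excel (or feeds into ggplot2).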
Have you ever tried benchmarking the Intel 520 Series 480GB? Even though it’s not classified as enterprise (it lacks capacitors, for example), the results seem very surprising and stable.
I ran a small benchmark for one hour using the same sysbench command you used, and the results of the 520 vs. the DC S3500 put the 520 on a different scale.
Erez,
I use R/ggplot2 for charts, but there is some learning curve to get it going,
so unless you are familiar with it, doing charts in Excel is the fastest way.
I did not test the Intel 520 480GB (but I did test the Intel DC S3500).
Vadim,
When I qualified a SanDisk (and an OCZ) model at YouTube, I was able to lock them up using a very aggressive, wide slow-query bench and also with iozone. Maybe they have improved? We went with Intel X25s and then 320s and 520s.
You should be aware that the Samsung 840 behaves properly behind LSI 9260 cards, but very badly behind LSI 9266 cards.
I’ve been checking this page over and over looking for a reason why my Samsungs were behaving that badly, but it turns out the problem is related to the RAID adapter. Sorry if it’s a bit off-topic, but people might be interested in and aware of this problem 🙂 http://www.webhostingtalk.com/showpost.php?p=8644070&postcount=112
Vadim, great information!
Yes, the Samsung had some issues with RAID before. Those issues were fixed, which significantly boosted performance.
Which Firmware is/was your 840 Pro using during this test? Could you re-test using the latest firmware?
My R720 with two H710P cards comes in tomorrow. Planning on running two arrays loaded with 16 x 1 TB Samsung SSDs… I’ll post some updates if anyone is interested.
Yes Ryan, I would love to hear from you 🙂