Samsung 950 pro vs Crucial BX100 in RAID-0


Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
If this is true, then the protection is not what i anticipated and asserted. And you would be very right to set this straight. So i guess i owe you an apology - it seems i did not fully understand the situation properly.

That's what I was told by Micron. I was fooled by their marketing at first too, but they then explained that it only protects the lower pages (i.e. existing data) from being corrupted, which might happen during an ongoing write operation while the voltage level is between two states.

OCZ Vector 180, on the other hand, has a capacitor that protects the FTL from corruption.

I get what you are saying but as I said, consumer hardware is always a compromise. If the current SSDs had a significant reliability issue due to data corruption, then there would clearly be a market for PLP equipped consumer drives at a premium and someone would certainly jump on that niche. In the end, there are many smaller players that have very little or zero enterprise presence, so they could easily include PLP without the worry of jeopardizing their enterprise sales.

I'm not saying it wouldn't be nice, but on the other hand the purpose of a company, as harsh as it sounds, is to generate profit for shareholders. The truth is that separating client and enterprise grade hardware turns in great profits because enterprises are willing to pay ridiculous amounts for the added reliability, whereas the vast majority of consumers only look at cost per gigabyte.

The good news is that SSDs have already improved client-grade storage reliability (ignoring the growing pains and early issues). Hard drives have always had horrible failure rates, so despite the "window of opportunity" SSDs are still a much, much better option over HDDs from a reliability perspective.

When NAND is succeeded by a next generation NVM technology, the power loss issue will also be gone since SSDs should no longer require DRAM as a buffer.
 

R0H1T

Platinum Member
Jan 12, 2013
2,582
162
106
OCZ Vector 180, on the other hand, has a capacitor that protects the FTL from corruption.

I get what you are saying but as I said, consumer hardware is always a compromise. If the current SSDs had a significant reliability issue due to data corruption, then there would clearly be a market for PLP equipped consumer drives at a premium and someone would certainly jump on that niche. In the end, there are many smaller players that have very little or zero enterprise presence, so they could easily include PLP without the worry of jeopardizing their enterprise sales.
How much does this add to the cost of an SSD, perhaps single digits in $ terms? If so, then wouldn't it make sense to differentiate even your consumer drives from the rest of the pack, especially since SATA III is already saturated & M.2/NVMe is gonna be more expensive for the foreseeable future?
When NAND is succeeded by a next generation NVM technology, the power loss issue will also be gone since SSDs should no longer require DRAM as a buffer.
That's up for debate, as DRAM/HBM/HMC is gonna be faster than NVM; there are also costs & to a lesser extent system complexity to consider.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
How much does this add to the cost of an SSD, perhaps single digits in $ terms? If so, then wouldn't it make sense to differentiate even your consumer drives from the rest of the pack, especially since SATA III is already saturated & M.2/NVMe is gonna be more expensive for the foreseeable future?

A quick search puts volume pricing of 100µF tantalum capacitors at ~$0.40 each (these are the ones you usually find inside an SSD). Depending on the capacity, SSDs with full PLP tend to have 1,500-3,000µF of capacitance (link), meaning 15-30 of these capacitors.

That may sound cheap, but even for a 256GB drive the cost of the capacitors alone would be in line with the cost of the controller (client-grade controllers typically cost ~$5-8). For a higher capacity drive you would easily be looking at twice the cost of the controller.

Also, as with any business, if you buy a component in at $6 you don't sell it on at $6. The gross margin targets are usually +50%, so $6 becomes $9 when it's sold to a distributor/retailer, which then adds its own gross margin (usually ~33%), turning that original $6 into $12 at retail.

Given that 256GB drives retail for about $80 nowadays, that $12 would add a 15% price premium over competing drives without PLP. At this point you would really have to think whether there is enough demand for such a feature to justify the higher cost. The average consumer isn't aware that SSDs may be vulnerable to sudden power losses, and the group of enthusiasts who are aware - and, most importantly, willing to pay the premium - certainly isn't big.
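As a rough sketch of that math in Python (the $0.40 per capacitor, the margin targets and the $80 street price are just the assumed figures above, not measured ones):

Code:
# Back-of-the-envelope PLP cost estimate using the figures quoted above.
CAP_UNIT_COST = 0.40           # ~$ per 100uF tantalum capacitor (volume pricing)
CAPACITANCE_NEEDED_UF = 1500   # low end of the 1,500-3,000uF range for full PLP
VENDOR_MARGIN = 0.50           # ~50% gross margin target for the SSD vendor
CHANNEL_MARGIN = 0.33          # ~33% distributor/retailer margin
DRIVE_RETAIL_PRICE = 80.0      # typical 256GB drive street price

caps_needed = CAPACITANCE_NEEDED_UF // 100                              # 15 capacitors
bom_adder = caps_needed * CAP_UNIT_COST                                 # ~$6 extra BOM
retail_adder = bom_adder * (1 + VENDOR_MARGIN) * (1 + CHANNEL_MARGIN)   # ~$12 at retail
premium = retail_adder / DRIVE_RETAIL_PRICE                             # ~15% premium

print(f"{caps_needed} caps -> ${bom_adder:.2f} BOM -> ${retail_adder:.2f} retail "
      f"({premium:.0%} premium)")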

If you chose to educate the consumers, you would be looking at significant marketing costs. Besides, without having any objective third party data to show that PLP is actually needed, most consumers and even enthusiasts would likely not care because they wouldn't see any added value in the feature.
 

bgstcola

Member
Aug 30, 2010
150
0
76
Not sure where you can. But using the MX200 250GB as SLC SSD is simple: simply do not write more than half the LBA/storage capacity. You can force this by using partitions and not partition more than 50%. You might want to partition 45% to provide some overprovisioning as well. So 110GB partition would be good. Then you have a very cheap killer-SSD that operates as SLC SSD at very marginal consumer-grade cost. I like it! :awe:
But how fast will it be? Will it be as fast as the Samsung 950 pro?
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
A quick search puts volume pricing of 100µF tantalum capacitors at ~$0.40 each (these are the ones you usually find inside an SSD). Depending on the capacity, SSDs with full PLP tend to have 1,500-3,000µF of capacitance (link), meaning 15-30 of these capacitors.
Do you happen to know the capacitance of the Intel 320? This consumer-grade SSD is the only one with full power-loss protection of the entire buffer cache as far as i know, because it uses an internal 192KiB SRAM buffer cache instead of the DRAM chip to buffer writes. Thus, it should require far less power than flushing an entire DRAM buffer - which is not really required to provide adequate protection anyway. If it just protects the mapping tables, this would be equivalent to how harddrives operate, and all second-generation filesystems are designed to cope with lost writes in the DRAM buffer.

So, i guess a controller with an internal SRAM buffer cache could make do with far fewer capacitors and the additional cost would be very marginal.

If you chose to educate the consumers, you would be looking at significant marketing costs. Besides, without having any objective third party data to show that PLP is actually needed, most consumers and even enthusiasts would likely not care because they wouldn't see any added value in the feature.
I do not necessarily agree with this. I think it is fair to say SSDs are not as reliable as they potentially could be, and the widespread problems of early SSDs - particularly those sold by OCZ - have left many disappointed in the technology. With some OCZ SSDs having failure rates as high as 50%, it could certainly make an impression on consumers to market your SSD as a very reliable SSD with hardware protections, showing them pictures of the PCB with the capacitors present and thereby defending the slightly higher cost. Together with other good specs and marketing on other fronts, this could certainly be a reason for consumers to buy your SSD over that of others.

But how fast will it be? Will it be as fast as the Samsung 950 pro?
Faster in some areas - SLC will always have lower latency than MLC regardless of the MB/s specs. But the AHCI interface carries a larger penalty than the NVMe interface that the newest generation of SSDs uses, which offsets some of the performance increase.
 

Deders

Platinum Member
Oct 14, 2012
2,401
1
91
Sheesh, why do you need 300.000 IOps? You going to run 2000 websites on a single SSD or what? Seriously, the SLC just provides a marginal decrease in latency, which improves performance in a way no MLC/TLC SSD can. It also improves lifespan if you write to the SSD a lot, for example when the SSD is used as a caching SSD like with Intel SRT or ZFS L2ARC. It does not provide all that much benefit for really ordinary consumers. Those want the SSD to be as cheap and large as possible.

But wasn't it shown before that IOPS is directly related to latency?
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
But wasn't it shown before that IOPS is directly related to latency?
That is more complex - the high IOps figures almost always refer to IOps at a higher queue depth. In that case, we are really speaking about average latency. You can have 1000µs (1ms) of latency per I/O, but because you do 10 concurrently, the average latency would be only 100µs - even though every I/O takes 1000µs to complete.

Think of it like a highway: you can have one car at 100kph driving down the road. To reach the destination, it takes 15 minutes. Now if you have two cars driving the same speed, you can say: one per 7,5 minutes, even though both cars take 15 minutes to reach the destination. 15 minutes is the absolute latency here, 7,5 minutes is the average latency.

But this is all rather technical. I compiled this list of what i believe to be the most important performance specs for consumers, starting with the most important spec and concluding with the least important spec:


4K Blocking Random Read - often referred to as QD=1 (single queue depth)
This is the most important performance spec of any SSD, and due to latency constraints - and the fact that RAID0 cannot accelerate/improve it - it is about the same for every SSD. Only SLC versus MLC, or MLC versus TLC, can improve it, as can technologies like NVMe, which has lower latency than the AHCI protocol used today for harddrives and SSDs.

4K random reads are very common in consumer workloads: booting, launching applications, basically anything you click and every normal application you use relies on 4K random reads for a good portion of its I/O.


Sequential read
This is the most commonly quoted number and often the highest one. It is also important, but it is almost always already pretty high. SSDs internally have a lot of read bandwidth, since NAND can be read roughly 7 times as fast as it can be written. So if the SSD can write at 400MB/s, you can say that internally it has about 400*7 = 2.8GB/s of read bandwidth. Due to the SATA/600 interface and controller constraints, this often gets capped to around 500MB/s.
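In Python terms, that rule of thumb looks like this (the 7x read-vs-write factor and the ~500MB/s SATA ceiling are just the rough numbers used above):

Code:
# Rough internal read-bandwidth estimate from the 7x rule of thumb above.
write_speed_mbs = 400
internal_read_mbs = write_speed_mbs * 7      # ~2800 MB/s available inside the drive
sata_ceiling_mbs = 500                       # what SATA/600 + controller let through
host_read_mbs = min(internal_read_mbs, sata_ceiling_mbs)
print(host_read_mbs)                         # the interface, not the NAND, is the limit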

This kind of performance can be felt when loading large applications, like games, which often have large bulk data they need to load into RAM memory.


4K Random Read with high queue depth
This is the 3rd most important spec i think. If the application uses asynchronous reads, meaning multiple reads are issued concurrently, the SSD can process them at the same time - just like a highway can process multiple cars on multiple lanes at once. And while the cars themselves do not go faster, the overall throughput - x cars per second - does improve significantly. This is why high queue depths allow the SSD to go well beyond its 20MB/s 4K random read performance and offer up to 15/16 times faster speeds in theory - in reality a little over 10 times as fast, like 250MB/s. That is a shitload of IOps - in excess of 60.000. You need some serious applications to actually reach this; most often you see varying queue depths between 2 and 10. You need the NCQ feature to actually send multiple I/O requests to the SSD concurrently, and AHCI is required to activate NCQ. So if your BIOS is set to IDE/legacy mode, you will get the same performance as 4K at single queue depth: only 20MB/s.
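As a toy model of that scaling (assuming ~20MB/s at QD=1 and 16 independent NAND channels, the figures used above; real drives scale less than linearly because of controller overhead and uneven distribution):

Code:
# Toy model: each NAND channel serves one outstanding 4KiB read at a time,
# so throughput grows with the number of requests in flight until the
# channels are saturated. Purely illustrative, not a real drive model.
QD1_THROUGHPUT_MBS = 20   # blocking (QD=1) 4K random read
CHANNELS = 16             # internal interleaving ("RAID0 of NAND")

for qd in (1, 2, 4, 8, 16, 32):
    ideal_mbs = QD1_THROUGHPUT_MBS * min(qd, CHANNELS)
    print(f"QD{qd:>2}: ideal ~{ideal_mbs} MB/s")   # real drives top out nearer ~250 MB/s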


4K Random write
Every once in a while the system will write log files and other stuff; these are often small writes that happen in bursts, so the queue depth varies and does not matter that much. What is important is that the SSD does not compromise the other performance specs while this happens, as is the case for mechanical harddrives - a few random writes in between cause seeks that distort the read performance. For modern SSDs this is no longer the case.


Sequential write
Sequential write is not very important for consumers, especially when they use a small SSD as a system drive. I mean, writing at 400MB/s+ to your C: drive - when are you doing that? Maybe when you install a game or something, but then the data has to come from somewhere, and often it needs to be extracted, which can also be CPU-bound.

Ironically, this performance spec is about the only area where SATA/600 SSDs differentiate, and as such everyone looks at this spec. Intel and Crucial traditionally were less strong in this area, and thus their products were not all that popular with ordinary consumers who focus on high numbers.

Maybe later though, when we have 2TB or 10TB SSDs, the sequential write becomes important because we start using our SSDs as mass storage. And copying one SSD to the other can go at lightning speeds in excess of 2GB/s. Then you want a decent write speed. But for now, unless you have very specific needs or workloads, you can safely ignore sequential write altogether.
 
dave_the_nerd

Feb 25, 2011
16,823
1,493
126
But wasn't it shown before that IOPS is directly related to latency?
Yes. Latency is the minimum time (milliseconds or microseconds) it takes to do something, and IOPS (peak or sustained) is the number of "do something"s per second. So for a single disk, they correlate pretty closely.

Most SSDs have fairly low latency and therefore have high IOPS, although 300k IOPS is way beyond anything most consumer SSDs can provide in any case. (You can browse the results on the anandtech benchmark list, but 5k-10k sustained appears pretty typical for newer consumer drives, with peaks around 50k for some models.)

When you RAID a bunch of disks together, typically, there is an increase in potential IOPS, but latency doesn't budge. This is because if you request a particular block of data, it is still sitting on a single drive somewhere (so minimum "do something" time doesn't change) but all the other drives can, theoretically, be servicing other requests. So as long as all your read/write requests don't hammer a particular drive unfairly, you get a pretty good (close to linear, but never perfectly so) increase in IOPS.

This is why big RAID arrays are so useful for heavily loaded database applications - You can do "clever" things with RAM caching and so on, especially with small databases, but for big ones, the number of users you can reliably service ultimately scales with the number of drives in the array.

The problem for home users putting SSDs in RAID-0 is that many (cheap) RAID controllers actually create a latency bottleneck. So for many users (particularly with older on-board motherboard RAID) it's a tradeoff: you can get a single SSD that provides 40k peak IOPS and ~500MB/sec sequential throughput. However, when you stick a pair of those same SSDs into a RAID-0, you get the expected ~900MB/sec sequential throughput but maybe only ~30k peak IOPS. Or less. Depending.
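A back-of-the-envelope version of that tradeoff (the single-drive figures and the ~30k IOPS ceiling are just the illustrative numbers above, standing in for a cheap fake-RAID bottleneck):

Code:
# Illustrative only: two identical SSDs in RAID-0 behind a cheap controller.
# Sequential throughput roughly adds up, but a controller/driver ceiling on
# small-IO handling can leave peak IOPS below that of a single drive.
single_drive_iops = 40_000
single_drive_seq_mbs = 500
controller_iops_ceiling = 30_000   # assumed fake-RAID limit
n_drives = 2

raid0_seq_mbs = single_drive_seq_mbs * n_drives * 0.9   # ~900 MB/s after overhead
raid0_iops = min(single_drive_iops * n_drives, controller_iops_ceiling)

print(f"RAID-0: ~{raid0_seq_mbs:.0f} MB/s sequential, ~{raid0_iops} peak IOPS "
      f"(vs {single_drive_iops} IOPS for a single drive)")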

It didn't matter for HDDs, since most of them were hard pressed to push more than a couple hundred IOPS each - even a cheap RAID controller would never bottleneck a handful of those.

As far as usability goes, it's a wash - doesn't really matter. But people pay attention to benchmarks, and will want to know why their RAID array is "slower" than a single drive.

Adding another few hundred bucks for a "good" RAID controller adds additional complexity and cost to a system that rarely benefits from the increased potential performance.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Yes. Latency is the minimum time (milliseconds or microseconds) it takes to do something
I would put it differently. I would say: latency is the actual time, not the minimum time. When people talk about latency, they usually mean latency for random I/O like 4K. But latency is more universal and also applies to sequential I/O with larger transfer sizes, in which case the latency is a larger number rather than the minimum. For example, you can easily calculate the latency for sequential reads with a 1MiB request size:

Assuming the SSD can do 500MB/s and the host sends 1MiB requests, the latency is 1/500th of a second, or 2.0ms.

For random reads, the latency is usually much lower on SSDs. Assuming 250MB/s of multi-queue random read performance and 4KiB requests, the latency is now 0.016ms.

So for a single disk, they correlate pretty closely.
Even a single SSD is already a 15- or 16-way interleave (RAID0) of NAND, so it has performance characteristics comparable to a RAID0 of 16 harddrives, for example. That is: you need a higher queue depth to saturate even a single SSD in random reads.

When you RAID a bunch of disks together, typically, there is an increase in potential IOPS, but latency doesn't budge.
True. This is why a single NAND SSD using AHCI always has about 20MB/s of blocking random read performance -- RAID0 cannot improve this.

This is because if you request a particular block of data, it is still sitting on a single drive somewhere (so minimum "do something" time doesn't change) but all the other drives can, theoretically, be servicing other requests. So as long as all your read/write requests don't hammer a particular drive unfairly, you get a pretty good (close to linear, but never perfectly so) increase in IOPS.
This is true, and it explains why increasing the queue depth does not increase IOps linearly - the I/O requests are not evenly distributed across the drives/channels. That is why you need 16 or 32 queued I/Os to saturate a 10-channel controller.

The problem for home users putting SSDs in RAID-0 is that many (cheap) RAID controllers actually create a latency bottleneck. So for many users (particularly with older on-board motherboard RAID) it's a tradeoff: you can get a single SSD that provides 40k peak IOPS and ~500MB/sec sequential throughput. However, when you stick a pair of those same SSDs into a RAID-0, you get the expected ~900MB/sec sequential throughput but maybe only ~30k peak IOPS. Or less.
I disagree. You seem to blame the controller. But you know that onboard RAID is FakeRAID, and many cheap addon controllers are also FakeRAID. FakeRAID means that the controller is just a SATA controller acting as an HBA; the (Windows-only) drivers do the actual RAID part.

The real reason RAID0 might not provide increased IOps or increased throughput:

1. Using a PCI or PCI-express addon FakeRAID controller, the card is limited by PCI(e) bandwidth. PCI is already a huge bottleneck; PCI-express x1 runs at 250MB/s or 500MB/s, and you can count on roughly 15% overhead/inefficiency depending on the PCIe payload size.

This has nothing to do with the RAID part. The controller itself simply cannot provide enough bandwidth to the memory because of interface bottlenecks.

2. Using an addon FakeRAID card means you need to use their drivers to provide the RAID functionality - ASMedia, Promise, Silicon Image, Marvell, JMicron, etc. These drivers are not at all efficient or properly engineered. Some always transfer the entire stripe block when only a fraction of the stripe block has been requested.

3. People configure their RAID0 the wrong way, using too low a stripesize. People somehow have been taught that you need a high stripesize for good MB/s scores and a low stripesize for good IOps scores. It is actually the other way around: lower stripesizes are good for throughput, higher stripesizes are good for IOps. Stripesizes of 1MiB and above are quite common when optimizing for IOps, as can be done with Linux/BSD software RAID.

4. People use old-fashioned operating systems like Windows XP that start the partition at sector 63 offset, meaning an offset of 31.5KiB. This causes a misalignment issue which is not so bad for throughput (MB/s) but kills the IOps performance potential.


Today, if one uses Windows 7+ and Intel onboard RAID, most of the above points are not applicable and the user will get doubled IOps performance. But because Windows has a single-threaded storage backend, this can be bottlenecked by single-core CPU performance. For two SSDs and a decent CPU, though, this should not happen easily.

It didn't matter for HDDs, since most of them were hard pressed to push more than a couple hundred IOPS each - even a cheap RAID controller would never bottleneck a handful of those.
That is not true. The RAID0 reviews in which Anandtech and StorageReview said that RAID0 had no place on a desktop were based on bad testing and a bad setup, with a FakeRAID PCI card and misalignment issues. This impacted RAID0 HDD performance to the point where there was only a small benefit in throughput and all benefit in IOps was negated. In those tests, the latency actually became worse due to all the bottlenecks. This is why RAID0 has such a bad name for the desktop.

The funny thing is that RAID0 today is what gives SSDs their speed - without it all the numbers in benchmarks would be very low.

Adding another few hundred bucks for a "good" RAID controller adds additional complexity and cost to a system that rarely benefits from the increased potential performance.
Actually the true hardware RAID controllers do exactly what you say: increase latency. This is because they have their own Intel IOP ARM processor with its own memory, which increases latency because the request basically has to be processed twice. People were taught that Hardware RAID was better because the cards were good at XOR calculations for RAID5 and RAID6. This is not true; your own CPU can do XOR at multiple GB/s fairly easily - XOR is one of the easiest operations for your CPU and is mainly bound by memory bandwidth.

The real reason Hardware RAID was better at RAID5 and RAID6 was its better firmware, which provided a true Split&Combine engine. This is required to cut host I/O into optimally sized pieces - namely the full stripe block. For a RAID5 of 4 disks and a 128KiB stripesize that is 128K * (4 - 1) = 384KiB. Only writes of this size will be fast; other writes require a read-modify-write phase where data needs to be read before the write can begin.
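That stripe math in a small Python helper (the 4-disk/128KiB RAID5 case is the example above; the RAID6 line is just an extra illustration):

Code:
# Full-stripe write size for parity RAID: only writes of exactly this size
# avoid the read-modify-write penalty described above.
def full_stripe_kib(disks: int, stripe_kib: int, parity_disks: int = 1) -> int:
    """Data payload of one full stripe: stripe unit times the number of data disks."""
    return stripe_kib * (disks - parity_disks)

print(full_stripe_kib(disks=4, stripe_kib=128))                   # RAID5: 384 KiB
print(full_stripe_kib(disks=6, stripe_kib=128, parity_disks=2))   # RAID6: 512 KiB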

Software RAID is superior to Hardware RAID. But in reality, Hardware RAID in the past had better-engineered firmware than the software RAID engines available at that time. Today, GEOM RAID is the best, followed by Linux MD RAID. Windows continues to provide very poor software RAID features, but Intel actually has good features and decent performance. It also provides 'Volume Write-back caching', which introduces a nice RAM buffer cache that accelerates writes regardless of RAID level.
 
Last edited:

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
That is not correct. Latency is the actual time, not the minimum time. But when people talk about latency, they usually mean latency for random I/O like 4K. But you can easily calculate the latency for sequential reads with 1MiB request size:

I think dave_the_nerd's meaning was quite clear, and this whole paragraph is overly pedantic to the point of being sort of an intentional mischaracterization of what was said.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
I think dave_the_nerd's meaning was quite clear, and this whole paragraph is overly pedantic to the point of being sort of an intentional mischaracterization of what was said.
I agree it is somewhat picky. Sorry about that. I rephrased that paragraph.

I jumped on it, because very often people talk about latency when it only concerns random I/O; then it is a low number for SSDs anyway. But latency can also be quite a bit higher for sequential I/O.

I agree it is a trivial point. Maybe i shouldn't have written about it at all.
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Do you happen to know the capacitance of the Intel 320? This consumer-grade SSD is the only one with full power-loss protection of the entire buffer cache as far as i know, because it uses an internal 192KiB SRAM buffer cache instead of the DRAM chip to buffer writes. Thus, it should require far less power than flushing an entire DRAM buffer - which is not really required to provide adequate protection anyway. If it just protects the mapping tables, this would be equivalent to how harddrives operate, and all second-generation filesystems are designed to cope with lost writes in the DRAM buffer.

So, i guess a controller with an internal SRAM buffer cache could make do with far fewer capacitors and the additional cost would be very marginal.

Unfortunately I don't know the capacitance and I couldn't find any proper results when searching the capacitor (may be a discontinued part). There are six tantalum capacitors on the PCB, which is certainly less than what modern enterprise SSDs have.

The SSD 320 does have DRAM, though, but only 64MB, which is used solely for FTL caching (the on-die 256KB SRAM is used for user data). Old SSDs use a different FTL design (details here), hence they got away with very little DRAM. Since 2011-2012 SSDs started to use a new structure that brought gains especially in performance over time, which also increased the DRAM requirement to ~1MB per 1GB of NAND.

DRAM-less controllers are starting to surface again in the lower end (e.g. Silicon Motion 2246XT) due to lower BOM, but the performance is obviously not competitive against controllers with external DRAM, thus we won't see DRAM-less controllers taking over the market.

Note that the Intel SSD 320 was never purely a consumer drive either. Intel aggressively marketed it to enterprises as an entry-level drive as well, because the company didn't have a proper enterprise lineup back then and the SSD 710, when it was released, was considered a high-end drive due to eMLC. That's also why the SSD 320 wasn't EOLed when the SSD 510 was released. Back then SSD volumes were substantially lower (and margins higher), so it probably made financial sense to do just one design for both client and entry-level enterprise rather than have two SKUs.

I do not necessarily agree with this. I think it is fair to say SSDs are not as reliable as they potentially could be, and the widespread problems of early SSDs - particularly those sold by OCZ - have left many disappointed in the technology. With some OCZ SSDs having failure rates as high as 50%, it could certainly make an impression on consumers to market your SSD as a very reliable SSD with hardware protections, showing them pictures of the PCB with the capacitors present and thereby defending the slightly higher cost. Together with other good specs and marketing on other fronts, this could certainly be a reason for consumers to buy your SSD over that of others.

OCZ's Vector 180 is essentially the SSD you want, but it didn't gain much traction despite having power loss protection for the FTL. I know that OCZ history doesn't associate the brand with high reliability (although that has changed dramatically), but the point is that consumers don't really care about such minor details because they don't fully understand them in the first place.

Power loss protection isn't something you can explain to a consumer without decent knowledge of computer architecture and thus marketing it is difficult. Showing a picture of a capacitor array doesn't do much if the consumer doesn't understand what the capacitors are and what their function inside an SSD is. You would also be surprised by how many enterprise customers buy client-grade drives because they don't understand power loss protection or simply decide to ignore it, even though their data is much more critical and there's no time for FTL recovery.

Moreover, consumers are finally starting to learn that they need a backup no matter what their storage setup is. With an up-to-date backup in place, a drive failure isn't a catastrophe, especially since smartphones/tablets can be used to perform most day-to-day tasks, making PC downtime not as painful as it used to be. I'm not saying that drive failures are acceptable, but as SSD failure rates are already below 1% I don't think many consumers would be willing to pay a premium for the extra 0.5% of reliability that PLP might bring.

For random reads, the latency is usually much lower on SSDs. Assuming 250MB/s of multi-queue random read performance and 4KiB requests, the latency is now 0.016ms.



Queue depth is part of the equation, so your calculation is incorrect. Assuming QD32, the correct latency would be 0.512ms.

Increasing queue depth does not improve latency, only IOPS and MB/s. The latency per IO is still the same (or usually higher as the drive has to perform more parallel operations), but as the drive is fed with more IO requests it can process them simultaneously, hence improving IOPS and MB/s.
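In numbers (using the 250MB/s, 4KiB, QD32 example from this exchange):

Code:
# At a given throughput, per-IO latency scales with queue depth
# (Little's law: latency = QD / IOPS). A deeper queue raises IOPS and MB/s,
# but it never makes an individual IO faster.
request_kb = 4
throughput_mbs = 250
queue_depth = 32

iops = throughput_mbs * 1000 / request_kb         # ~62,500 IOPS
completion_interval_ms = 1000 / iops              # ~0.016 ms between completions
per_io_latency_ms = queue_depth * 1000 / iops     # ~0.512 ms spent per individual IO

print(f"{iops:.0f} IOPS: one completion every {completion_interval_ms:.3f} ms, "
      f"but each IO is in flight for ~{per_io_latency_ms:.3f} ms at QD{queue_depth}")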
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Unfortunately I don't know the capacitance and I couldn't find any proper results when searching the capacitor (may be a discontinued part). There are six tantalum capacitors on the PCB, which is certainly less than what modern enterprise SSDs have.

The SSD 320 does have DRAM, though, but only 64MB, which is used solely for FTL caching (the on-die 256KB SRAM is used for user data). Old SSDs use a different FTL design (details here), hence they got away with very little DRAM. Since 2011-2012 SSDs started to use a new structure that brought gains especially in performance over time, which also increased the DRAM requirement to ~1MB per 1GB of NAND.

DRAM-less controllers are starting to surface again in the lower end (e.g. Silicon Motion 2246XT) due to lower BOM, but the performance is obviously not competitive against controllers with external DRAM, thus we won't see DRAM-less controllers taking over the market.

Note that the Intel SSD 320 was never purely a consumer drive either. Intel aggressively marketed it to enterprises as an entry-level drive as well, because the company didn't have a proper enterprise lineup back then and the SSD 710, when it was released, was considered a high-end drive due to eMLC. That's also why the SSD 320 wasn't EOLed when the SSD 510 was released. Back then SSD volumes were substantially lower (and margins higher), so it probably made financial sense to do just one design for both client and entry-level enterprise rather than have two SKUs.
Thanks for your insights!

OCZ's Vector 180 is essentially the SSD you want [..] I know that OCZ history doesn't associate the brand with high reliability (although that has changed dramatically)
I will never buy nor recommend anything branded as OCZ, even though they got acquired by Toshiba. Toshiba isn't something that i associate with quality either. LSI buying Sandforce might actually improve the brand to a degree, but their deduplication and compression techniques are not all that sexy any more now that high speeds can be achieved with conventional technology as well. Besides, Sandforce was made to cheat benchmarks i think: compression to cheat ATTO and dedup to cheat IOmeter (without the patch that stopped it from re-using the same string over and over).

Power loss protection isn't something you can explain to a consumer without decent knowledge of computer architecture and thus marketing it is difficult.
Well i disagree. Consumers do not need to understand it. You just need to play into the feeling consumers have about reliability. Objective facts and arguments do not win the consumer's heart, but playing into their sense of insecurity and feelings will.

So displaying the components, having some marketing blahblah about reliability and establishing your brand as something that stands for quality will be an effective marketing tool - i think anyway.

Plus, you can brag that other brands do not have your hardware protection 'to save cost' - but that your MEGAPRO®-ULTRA(TM)-TeRMiNaToR® solution does. :biggrin:

Moreover, consumers are finally starting to learn that they need a backup
In my experience, consumers are just as lazy as they were before. The tools to actually make backups are just a lot more accessible, so the chance that they have backups of their stuff has increased.



Queue depth is part of the equation, so your calculation is incorrect. Assuming QD32, the correct latency would be 0.512ms.
If you talk about absolute latency - or actual latency - then yes. But i was calculating average latency, as described in this paragraph:

Think of it like a highway: you can have one car at 100kph driving down the road. To reach the destination, it takes 15 minutes. Now if you have two cars driving the same speed, you can say: one per 7,5 minutes, even though both cars take 15 minutes to reach the destination. 15 minutes is the absolute latency here, 7,5 minutes is the average latency.
 
Last edited:

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Well i disagree. Consumers do not need to understand it. You just need to play into the feeling consumers have about reliability. Objective facts and arguments do not win the consumer's heart, but playing into their sense of insecurity and feelings will.

So displaying the components, having some marketing blahblah about reliability and establishing your brand as something that stands for quality will be an effective marketing tool - i think anyway.

Plus, you can brag that other brands do not have your hardware protection 'to save cost' - but that your MEGAPRO®-ULTRA(TM)-TeRMiNaToR® solution does. :biggrin:

I guess we can agree to disagree on this. I'm still of the opinion that it's not an easily marketable feature as understanding the benefits of PLP would require knowledge of SSD architecture. I know a group of enthusiasts like yourself would appreciate the extra layer of protection, but it's not a big niche.

If you talk about absolute latency - or actual latency - then yes. But i was calculating average latency, as described in this paragraph:

Think of it like a highway: you can have one car at 100kph driving down the road. To reach the destination, it takes 15 minutes. Now if you have two cars driving the same speed, you can say: one per 7,5 minutes, even though both cars take 15 minutes to reach the destination. 15 minutes is the absolute latency here, 7,5 minutes is the average latency.

That's not the way average latency works. Latency is always an absolute measure of time and it's independent from throughput and IOPS. It doesn't matter how many IOs or cars there are, the fundamental characteristics don't change.

Using your car analogy, the average latency would still be 15 minutes [(15+15)/2]. You wouldn't say that a 15-minute drive took us only 7.5 minutes because we had two cars, would you? Sure, twice as many cars and hence people got from place A to place B in 15 minutes, thus doubling the number of cars/people per 15 minutes, but it still took 15 minutes for each car to get from A to B.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
In my analogy it would be on average 7,5 minutes per car that arrives. When people say "1.5 people a day get killed by lightning", that does not mean 1 person dies and only half of another person dies. It means that, on average measured over time, 1.5 people die, which is not an absolute measure of course. Neither is average latency.

Average latency is connected to Throughput and IOps. It's not unlike the way Watt/Volt/Amperage are connected. For example, i use these formulas:

Throughput = IOps * Request Size
IOps = Throughput / Request Size
Request size = Throughput / IOps
Latency(absolute) = Queue depth / IOps
(only valid if all I/O requests take an equal amount of time to process)
Latency(average) = 1 / IOps


Instead of saying 250MiB/s we can say 64,000 IOps of 4KiB each. Or we can say on average 15,625ns (15.6µs) of latency per I/O transaction. The absolute latency indeed depends on the queue depth, as you said. But it usually varies a lot, so it is not all that useful to calculate. The average latency is much easier to calculate and work with. Generally, we use MB/s for throughput when describing sequential I/O, IOps for bursts of random I/O, and latency when it is important, for example for blocking random reads (often at a low queue depth).
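The same formulas as a small Python sketch (note that what i call "average latency" here is really the mean interval between completions, which is the point being disputed above):

Code:
# The formulas above as code. Units: throughput in MiB/s, request size in KiB,
# latency returned in seconds.
def iops(throughput_mib_s: float, request_kib: float) -> float:
    return throughput_mib_s * 1024 / request_kib

def latency_absolute(queue_depth: int, io_per_s: float) -> float:
    # only valid if all I/O requests take an equal amount of time to process
    return queue_depth / io_per_s

def latency_average(io_per_s: float) -> float:
    return 1 / io_per_s

ops = iops(250, 4)                                                    # 64,000 IOps
print(f"{ops:.0f} IOps")
print(f"average : {latency_average(ops) * 1e6:.3f} us per IO")        # ~15.6 us
print(f"QD32    : {latency_absolute(32, ops) * 1e3:.3f} ms per IO")   # 0.5 ms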
 
Last edited:

Elixer

Lifer
May 7, 2002
10,376
762
126
I will never buy nor recommend something branded as OCZ, even though they got acquired by Toshiba. Toshiba isn't something that i associate with quality either.
At this point in time, I see nothing wrong with OCZ/Toshiba, and Toshiba itself has tons of units out there, in Apple, Dell, and others.
Their controller is decent - it won't win any speed records, but it gets the job done - their warranty service is better than Samsung's, and overall they are in the top 5 SSD makers.
So, what is your definition of quality?
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
Well, it might be a bit unfair of me to bring this up... but... i cannot help myself. :awe:

The heart and soul of all Kingston V Series SSDs is the Toshiba TC58NCF602GAT controller pictured above. [..] This Toshiba controller is based off the JMicron JMF602 controller. Many enthusiasts might be thinking about the stutter issue associated with this specific controller, but Kingston claims to have solved it with new and improved firmware.

Basically, the JMicron JMF602 controller has been laser etched with the Toshiba name as Kingston didn’t want consumers to see the JMicron name and think this drive would stutter.


I think this is just sooooo amusing! :biggrin: Besides, it might say something about the positioning of Toshiba if it is using the lowest grade controllers. Basically they are a budget brand, and their SSDs might not be bad like they were in the past, but they are also not worth buying.

Since there are so many other good SSDs to consider:

If you want an allround good SSD, pick the Crucial MX200 - great SSD and great value.

If you want highend performance, the Samsung 950 Pro would be your first choice.

If you want a cheap SSD, the Kingston V300 with its Sandforce controller provides very good value and the controller is decent nowadays - after Sandforce underwent extensive beta testing by OCZ customers, who more than happily served as guinea pigs to squeeze the hundreds of firmware bugs out of the Sandforce controller and its complex deduplication functionality designed to cheat outdated benchmarks like ATTO and unpatched IOmeter.

If you want the most reliable consumer-grade SSD, the Intel 320 is still the classic that is almost impossible to beat.

So why bother with Toshiba? Why bother with all the other brands that operate in the margin? They are not interesting at all to consider - IMO.
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
When asked to provide an anecdote about why you don't like Toshiba, the response was to find a situation where Kingston was cheaping out on components and likely paid Toshiba extra to provide them with a re-etched controller to fool reviewers?

If you have a problem with Toshiba's SSDs you should probably start by picking a Toshiba SSD and showing what is wrong with it. They are a well-regarded OEM and their client devices perform very well. Why bother with Toshiba? Because their Q series is occasionally the best perf/dollar.

Also the problem with the v300 wasn't just the controller. It was a bait and switch on the NAND.
 

coercitiv

Diamond Member
Jan 24, 2014
6,403
12,864
136
Besides, it might say something about the positioning of Toshiba if it is using the lowest grade controllers. Basically they are a budget brand, and their SSDs might not be bad like they were in the past, but they are also not worth buying.
Many Apple products use Toshiba NAND and controllers; they must be a budget brand indeed. In fact, ever since Toshiba invented NAND flash memory people should have realized there's something wrong with them. /s
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
When asked to provide an anecdote about why you don't like Toshiba, the response was to find a situation where Kingston was cheaping out on components and likely paid Toshiba extra to provide them with a re-etched controller to fool reviewers?

If you have a problem with Toshiba's SSDs you should probably start by picking a Toshiba SSD and showing what is wrong with it.
Toshiba was the brand that chose the lowest-quality SSD controller (JMicron) to invest their time in, doing a die-shrink of that controller and providing better firmware - as far as i recall. First JMicron, now OCZ: i see a pattern where they choose the low-grade brands to invest in. That says something about their positioning as a budget brand.

I also provided a list of interesting SSDs and Toshiba isn't in that list - other SSDs are far more interesting to consider in my opinion.

Sorry if you guys like Toshiba. I do not.
 

CiPHER

Senior member
Mar 5, 2015
226
1
36
He doesn't mean the NVMe protocol, he means a successor to NAND flash memory - the current Non-Volatile Memory (NVM) technology that SSDs use. Those successors, such as PCM (Phase-Change Memory), have better characteristics, such as no more restrictions on re-programming/erasing existing data, which vastly improves performance and allows for less complex controllers. This should boost reliability, allow for easier/faster controller design and of course bring performance benefits.

PCM has also been said to be able to replace DRAM, which would allow for computers that can switch off instantly and, when switched on again, do not have to reboot, because the data in RAM is not gone as it is with volatile memory like DRAM.
 