Storage Devices, Memory & The Future

flexy

Diamond Member
Sep 28, 2001
8,464
155
106
We're now at a point where SSDs are becoming mainstream, no question.

An SSD is memory too, just slower than system memory, but we're getting there, no question about that either.

How an OS, and computers in general, still work today is that data, e.g. files like a movie, game data or a program, are stored on a "storage device" (magnetically, on a conventional HDD)...and then "loaded" from this storage device into system memory, where the code is accessed or run.
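(To make that "loading" step concrete, here's a minimal C sketch of the conventional path; the filename is just a placeholder. The point is simply that the program has to ask the OS to copy the bytes from the storage device into a buffer in system memory before the CPU can do anything with them.)

```c
/* Minimal sketch of the conventional "load" step: copy bytes from the
 * storage device into a buffer in system memory before using them.
 * "level1.dat" is just a placeholder name for illustration. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("level1.dat", "rb");      /* data lives on the storage device */
    if (!f) { perror("fopen"); return 1; }

    fseek(f, 0, SEEK_END);
    long size = ftell(f);                     /* how much we have to copy */
    rewind(f);

    char *buf = malloc(size);                 /* reserve system memory */
    if (!buf) { fclose(f); return 1; }

    fread(buf, 1, size, f);                   /* the actual "loading": disk -> RAM */
    fclose(f);

    /* ...only now can the CPU work on the data in buf... */
    free(buf);
    return 0;
}
```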

If you look at this from an SSD perspective, it's actually bizarre.

A game "loads" a level from one type of memory into system memory; the OS "loads" a program from the "storage-type memory" into system memory.

Now...let's add 5 or 10 years and assume that this "storage memory", aka SSDs, has become equal in speed to system memory.

Once SSDs are as fast as system memory, the concept of "loading" a program will obviously become outdated.

In such a future scenario, the only time data would have to be "loaded" (into memory) would be at install time, and then never again. Once a program, data etc. is "installed", it would remain on the "storage memory", already "pre-loaded" in the same way that data today is "loaded" from the storage device into system memory (say, when I click a button in Windows to "load" whatever program). A crucial, redundant and time-consuming element, the requirement to "load" into memory first, is taken away.

Obviously (IMO) this would result in significant changes to how computers work, since there would no longer be a difference between "system memory" and "storage memory"; they would be the same thing. "Loading" would be a thing of the past, because data would always and instantly be available, ready to be executed.

Since loading times and disk accesses (which still have SOME significance today) would be entirely gone, this would also mean a significant improvement in general computer speed. (Simply put: say you install a game; all the data, levels etc. will already be there, whether you're actually playing right now or not.)

(Obviously this will require some changes; e.g. technically, data in system memory may have a different structure than data "on storage"...but this won't be an issue. The data would simply be stored and structured on the "storage memory" so it can be executed by the OS right away...this could happen at install time already, in the same way that today's "loading" process may place data in system memory in certain ways, e.g. where parts of a file may be allocated to different areas in memory etc...but all that is technicalities which can be solved.)

I think in the not-so-far future the concept of a "storage device", "file", "file system" etc. will also become obsolete. (When everything is already "there" and accessible...what is a "file" supposed to be?) Alternatively, we might keep those terms, but they will technically mean something different from what we have today.

*

How would that look in terms of hardware and general computer/PC architecture?

When "storage memory" and "system memory" are the same, obviously, no extra "storage devices" won't be needed anymore. We will have computers which will have a huge chunk of memory, that's it.

Not only "storage devices" per se won't be needed anymore, this also means controllers etc. won't be needed, no wired connection from a board to a "storage device" external from the board. This ALONE will add a significant speed factor to access data which today still have to be squeezed through SATA ports etc. So basically, the entire bottleneck there will be gone.

Agree, Disagree?
 
Last edited:

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
MRAM has been touted as a possible solution to this as it offers performance and endurance similar to DRAM but is non-volatile (i.e. holds data without power). The only question is whether it will scale well enough to compete with DRAM and NAND because in the end mainstream storage is all about cost.
 

88keys

Golden Member
Aug 24, 2012
1,854
12
81
MRAM has been touted as a possible solution to this as it offers performance and endurance similar to DRAM but is non-volatile (i.e. holds data without power). The only question is whether it will scale well enough to compete with DRAM and NAND because in the end mainstream storage is all about cost.
^This.

From what I can see, ordinary HDDs aren't going anywhere anytime soon. They are by far the cheapest storage solution per gigabyte available. They may fall out of favor as the place where you install your programs and OS, but they will still be used for bulk storage, as a secondary drive or in an enclosure.

If SSDs and system memory ever reach comparable performance levels, it would most likely result in your SSD and RAM being combined into a single module, and HDDs becoming more of an optional accessory for those who need to store large amounts of data.

And even if SSDs don't keep up with system memory, combining the two may become a solution for mobile devices.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
MRAM has been touted as a possible solution to this as it offers performance and endurance similar to DRAM but is non-volatile (i.e. holds data without power). The only question is whether it will scale well enough to compete with DRAM and NAND because in the end mainstream storage is all about cost.
Can it scale in cost well enough to be a cache?

It seems like an ideal technology for an SSD write-cache. Do write logging to MRAM, then flush it to NAND, making bursty loads super quick, with minimal voltage-holding needs during power loss to keep a sane state (just long enough to finish writing and start charging down command/address lines).
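Something like this, roughly (just a toy simulation of the idea in C, with plain arrays standing in for the MRAM log and the NAND; none of this is real controller firmware):

```c
/* Hedged sketch (not real firmware): simulate the write-log idea.
 * Incoming writes land in a small non-volatile "MRAM" log immediately,
 * and get flushed to "NAND" when the log fills or on shutdown.
 * All buffers here are plain RAM arrays standing in for the real media. */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   4096
#define LOG_PAGES   8          /* tiny MRAM write log */
#define NAND_PAGES  64         /* tiny simulated NAND */

struct log_entry {
    int  nand_page;                    /* destination page in NAND */
    char data[PAGE_SIZE];
};

static struct log_entry mram_log[LOG_PAGES];   /* stands in for MRAM */
static char nand[NAND_PAGES][PAGE_SIZE];       /* stands in for NAND */
static int  log_used = 0;

/* Flush everything in the MRAM log out to NAND, then empty the log. */
static void flush_log(void)
{
    for (int i = 0; i < log_used; i++)
        memcpy(nand[mram_log[i].nand_page], mram_log[i].data, PAGE_SIZE);
    log_used = 0;
}

/* Host write: acknowledged as soon as it sits in the MRAM log,
 * which is what makes bursty loads quick and power loss survivable. */
static void host_write(int nand_page, const char *data)
{
    if (log_used == LOG_PAGES)
        flush_log();                           /* log full: drain to NAND first */
    mram_log[log_used].nand_page = nand_page;
    memcpy(mram_log[log_used].data, data, PAGE_SIZE);
    log_used++;
}

int main(void)
{
    char page[PAGE_SIZE] = "hello";
    for (int i = 0; i < 20; i++)               /* a small burst of writes */
        host_write(i % NAND_PAGES, page);
    flush_log();                               /* e.g. on idle or clean shutdown */
    printf("first NAND page now holds: %s\n", nand[0]);
    return 0;
}
```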
 

VirtualLarry

No Lifer
Aug 25, 2001
56,554
10,171
126
From a computer-science perspective, you also have to look at how you address "persistent RAM", especially considering how many hosts there are on the internet, if they all have persistent RAM too. How do you unify the address spaces to make everything one gigantic address space for storing data and programs?
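Just to put rough numbers on that question (back-of-envelope only, assuming a flat 64-bit space and an arbitrary 1TiB of persistent RAM per host):

```c
/* Back-of-envelope: how far does a flat 64-bit address space go if every
 * host's persistent RAM has to live in it? Numbers are illustrative only. */
#include <stdio.h>

int main(void)
{
    long double total_bytes = 18446744073709551616.0L;          /* 2^64 bytes */
    long double per_host    = 1024.0L * 1024 * 1024 * 1024;     /* assume 1 TiB each */

    long double hosts = total_bytes / per_host;
    printf("2^64 bytes = %.1Lf EiB\n",
           total_bytes / (1024.0L * 1024 * 1024 * 1024 * 1024 * 1024));
    printf("At 1 TiB per host, a flat 64-bit space covers ~%.0Lf hosts\n", hosts);
    /* ~16 million hosts: nowhere near "every host on the internet",
     * so you'd need wider addresses or a translation layer anyway. */
    return 0;
}
```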
 

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
Can it scale in cost well enough to be a cache?

It seems like an ideal technology for an SSD write-cache. Do write logging to MRAM, then flush it to NAND, making bursty loads super quick, with minimal voltage-holding needs during power loss to keep a sane state (just long enough to finish writing and start charging down command/address lines).

Everspin, the only company that's currently manufacturing MRAM commercially (technically ST-MRAM), says they're able to quadruple the density every 12-18 months. We're currently at 64Mb, so in 2-3 years we should have 1Gb MRAM chips. Cost will at first be an issue (Everspin predicts their 1Gb MRAM chip will come in at around $50/GB), but Everspin is also claiming that MRAM should move from SRAM pricing to DRAM pricing within the next 5 years, which would make it viable for use in SSDs as a replacement for DRAM.
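Quick sanity check on that cadence, assuming the quadrupling actually holds (it's just the arithmetic, nothing more):

```c
/* Quick check of the density roadmap: start at 64Mb and quadruple
 * every 12-18 months until we pass 1Gb. Purely the arithmetic above. */
#include <stdio.h>

int main(void)
{
    int density_mb = 64;                  /* current chip: 64Mb */
    int gen = 0;
    while (density_mb < 1024) {           /* 1Gb = 1024Mb */
        density_mb *= 4;                  /* one quadrupling per generation */
        gen++;
    }
    printf("%d generations -> %dMb, i.e. %d-%d months\n",
           gen, density_mb, gen * 12, gen * 18);
    /* prints: 2 generations -> 1024Mb, i.e. 24-36 months (2-3 years) */
    return 0;
}
```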

I agree that MRAM sounds perfect for SSDs given the power-loss problem. It's a much better solution than capacitors (which introduce their own set of issues) and might even be cheaper once prices drop enough.
 

corkyg

Elite Member | Peripherals
Super Moderator
Mar 4, 2000
27,370
239
106
The notion that system memory and solid state storage memory are the same is currently incorrect. There is a fundamental difference - one is volatile and the other is non-volatile.
 

npaladin-2000

Senior member
May 11, 2012
450
3
76
Actually, flash memory is fairly slow and getting slower by the generation. It doesn't even saturate the SATA3 bus, and won't saturate a dual-channel RAM setup of any kind. The reason SSDs are as fast as they are is not because flash memory is fast, but because they've attached a lot of it, in a multi-path configuration operating similarly to RAID-0, all behind an on-board controller, generally with some extra cache. Speed advances come from tweaking the architecture. But capacity advances are coming from tweaking the flash, and a side effect has always been to decrease speed, increase latency, or both, necessitating the architecture tweaks previously mentioned.
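To illustrate the RAID-0-ish point, here's a hedged little sketch of how a controller might spread consecutive logical pages across channels and dies. Real FTLs add mapping tables, wear levelling and ECC/parity on top; the channel/die counts below are made up for illustration:

```c
/* Hedged sketch of RAID-0-style striping inside an SSD: consecutive
 * logical pages are spread across channels and dies so they can be
 * read/written in parallel. Real FTLs add mapping tables, wear
 * levelling, ECC/parity, etc. -- this is only the striping idea. */
#include <stdio.h>
#include <stdint.h>

#define CHANNELS      8    /* independent buses to NAND packages */
#define DIES_PER_CHAN 2    /* dies that can work concurrently per channel */

struct nand_location {
    unsigned channel;
    unsigned die;
    uint64_t page_in_die;
};

static struct nand_location stripe(uint64_t logical_page)
{
    struct nand_location loc;
    loc.channel     =  logical_page % CHANNELS;
    loc.die         = (logical_page / CHANNELS) % DIES_PER_CHAN;
    loc.page_in_die =  logical_page / (CHANNELS * DIES_PER_CHAN);
    return loc;
}

int main(void)
{
    /* A sequential read of sixteen 4KB pages touches all 8 channels
     * and both dies, which is where the aggregate speed comes from. */
    for (uint64_t lp = 0; lp < 16; lp++) {
        struct nand_location loc = stripe(lp);
        printf("logical page %2llu -> channel %u, die %u, page %llu\n",
               (unsigned long long)lp, loc.channel, loc.die,
               (unsigned long long)loc.page_in_die);
    }
    return 0;
}
```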

At this point we're levelling off; SSDs are not really getting much faster, and the struggle is to maintain current speeds as capacities increase, dies shrink, etc.

The scenario the OP is proposing will never happen. There will always be a need for a small amount of high-speed "work area" memory combined with a relatively large amount of relatively slow "storage area" memory for stuff that isn't being worked on right away. Just because some of that storage area is now made up of flash chips that are faster than magnetic disks doesn't mean said flash chips are about to replace high-speed DRAM.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
As far as the OP's stuff goes, it's way, way, off.

Let's look at bandwidth:
L1: 100GBps
RAM: 13GBps (single-channel)
NAND: up to about 0.4GBps (device), but also down to more like 0.04GBps
HDD: up to 0.2GBps

Now, let's look at access times:
L1: ~1ns
RAM: ~50ns (Intel CPUs seem to go up to 80ns or so, for random access with high-latency RAM, but it's highly variable)
NAND: 20,000 to 150,000ns read, and rising; anywhere from 50,000ns to 1,300,000ns for writing, and rising; and low milliseconds for erasing, too
HDD: 6,000,000ns right on up into whole seconds. 6ms only for some 15K drives; double it for 7200RPM.

And, finish up with costs:
L1: N/A
RAM: $8/GB (steadyish, but rising)
NAND: $0.6/GB (typical, but shrinking)
HDD: $0.05/GB (steadyish)
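Taking the rough figures above at face value, here's a quick back-of-envelope sketch (illustrative only) of what they mean for moving 1GB sequentially and for one random access:

```c
/* Back-of-envelope using the rough figures above (illustrative only):
 * time to move 1GB at the quoted bandwidths, and the cost of one
 * random access at the quoted latencies. */
#include <stdio.h>

int main(void)
{
    const char *name[]      = { "RAM", "NAND (fast)", "NAND (slow)", "HDD" };
    double bandwidth_gbps[] = { 13.0, 0.4, 0.04, 0.2 };            /* GB/s */
    double latency_ns[]     = { 50.0, 20000.0, 150000.0, 6000000.0 };

    for (int i = 0; i < 4; i++) {
        double seconds_per_gb = 1.0 / bandwidth_gbps[i];
        printf("%-12s  1GB sequential: %8.2f s   one random access: %10.0f ns\n",
               name[i], seconds_per_gb, latency_ns[i]);
    }
    /* e.g. RAM ~0.08s/GB vs slow NAND ~25s/GB, and a single HDD seek
     * costs as much time as ~120,000 RAM accesses. */
    return 0;
}
```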

SSDs may beat the crap out of HDDs, but the NAND making them up already needs caching tricks to be as fast as NAND from prior generations, and that's not likely to get better. We already have tiering inside SSDs, in part due to the slowing down of MLC NAND as it shrinks.

It's also crazy slow compared to RAM. You're not waiting on the program getting to the CPU so much as on the program using the CPU for a while to reach a point where it's usable by you. Program loading times are mostly CPU-limited, not storage-limited, once storage isn't taking 20ms to get anything from point A to point B (this includes the 2nd time you start a program, even with an HDD).

(Obviously this will require some changes; e.g. technically, data in system memory may have a different structure than data "on storage"...but this won't be an issue. The data would simply be stored and structured on the "storage memory" so it can be executed by the OS right away...this could happen at install time already, in the same way that today's "loading" process may place data in system memory in certain ways, e.g. where parts of a file may be allocated to different areas in memory etc...but all that is technicalities which can be solved.)
https://en.wikipedia.org/wiki/Memory_management_unit
https://en.wikipedia.org/wiki/Demand_paging
https://en.wikipedia.org/wiki/Mmap
Win9x, WinNT, *n*xes of varied kinds, and so on, have been doing all that for some time. It's the normal way files that aren't parsed and discarded are used, including executables. Yes, there's some management overhead compared to just having it in a flat address space, but not much, especially considering all those microseconds SSDs take to load anything; plus, it allows storage to exceed the size of the address space.

The thing is, there's no need for special structure. The OS can tell the hardware what needs to go where, and the hardware can make it happen. The CPU just sees that some virtual address set points to some physical address set, and reads it. I mean, yes, it can get quite complex, but the basic implementation removes the need for the CPU to have to know or care, yet at the same time allows it to be efficient both in using what is in RAM, and in allowing it to try to access what isn't, fault, and have the OS software tell it what it needs to load that it faulted on. Aside from making big dense storage faster, it's pretty much been solved for decades. Since it's rarely ever been a performance problem to do all the mapping and cache management, nobody outside of HPC has generally cared to try to make FSes all fit in the CPU's address space neatly.
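If you've never used it, this is what the mmap route looks like in minimal POSIX C; the file path is just a placeholder. There's no explicit read() into a buffer: pages come in on demand when the pointer is first touched, which is exactly the MMU/page-fault machinery described above.

```c
/* Minimal POSIX sketch of memory-mapping a file: no explicit read() into
 * a buffer; the MMU + demand paging bring pages in when the pointer is
 * first touched. "somefile.bin" is just a placeholder path. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("somefile.bin", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Map the whole file into our virtual address space. Nothing is
     * copied yet; the kernel just sets up page-table entries. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* First access to each page triggers a page fault, and the kernel
     * pulls that page from storage (or finds it already in the cache). */
    unsigned long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)
        sum += p[i];
    printf("checksum: %lu\n", sum);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```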

I think in the not-so-far future the concept of a "storage device", "file", "file system" etc. will also become obsolete. (When everything is already "there" and accessible...what is a "file" supposed to be?) Alternatively, we might keep those terms, but they will technically mean something different from what we have today.
A file is a named list of data elements, usually bytes, with a known bit order. Nothing more, nothing less. Where and how it is stored has nothing to do with it. Files and file systems exist for the sake of humans organizing data, not for any need the computer system itself has. If they are to be made obsolete, it's going to take something far more disruptive than a big chunk of memory to do it.

The computer only sees some numbers in some tables. "/var/mypileoffiles/myfile" could be in RAM, guaranteed by ramfs; likely in RAM, backed by the page file, in tmpfs; on some local block device in any number of FSes; across the building on an NFS or CIFS share; or even across the world. It could even be on a block device far away, using a block file system, but be treated like it was just a high-latency local disk. But putting a file in any of them, and moving it about, will get you back the same sequence of bits, with the same name. Being able to say, "this ordered sequence of data is called this," is all that makes a file. Windows supports such abstractions, too, deep down inside...it just also has historical clutter mixed in, with things like drive letters mucking it up on the outside, so it's clunkier to work with.

When "storage memory" and "system memory" are the same, obviously, no extra "storage devices" won't be needed anymore. We will have computers which will have a huge chunk of memory, that's it.
Apple already sells iPhones and iPads like that. No thanks. Building storage right in is possible with PCs, and is done in some tablets. It reduces cost. Added storage is not always needed, but it hasn't ended up so niche as to not matter, and probably won't. For every "all I need is the cloud" guy, there's somebody with TBs of photos, and growing.

But, also, not separating classes of storage makes life harder. We can take tons of complexities over compatibility between versions and gloss over them with a reboot. Persistent software is useful, but as "the way things should be," built from the bottom up, it is too hard for devs to want to work with, when they could be spending their time adding features people want, making GUIs that don't suck (or that are highly efficient, but ugly enough to get complaints), improving stability, performance, etc.

Not only "storage devices" per se won't be needed anymore, this also means controllers etc. won't be needed, no wired connection from a board to a "storage device" external from the board. This ALONE will add a significant speed factor to access data which today still have to be squeezed through SATA ports etc. So basically, the entire bottleneck there will be gone.
It just reduces cost. The SATA interface will hopefully disappear, as a requirement, anyway, within the next few years. But cheap slow flash will still be cheap slow flash, and that's going to tend to be what gets used on highly-integrated devices.

Is what we have ideal? No, not nearly. But, it is fashioned from what has been available and useful, and is a lot more robust than people give it credit for.
 
Last edited:

Hellhammer

AnandTech Emeritus
Apr 25, 2011
701
4
81
SSDs may beat the crap out of HDDs, but the NAND making them up already needs caching tricks to be as fast as NAND from prior generations, and that's not likely to get better. We already have tiering inside SSDs, in part due to the slowing down of MLC NAND as it shrinks.

Not exactly true. Samsung is claiming that their 3D NAND is ~20% faster than their current planar NAND, which makes sense given the bigger process node. Sure 20% isn't all that much but it will at least temporarily help with the situation.
 

Turab

Member
Dec 16, 2013
43
0
61
Not exactly true. Samsung is claiming that their 3D NAND is ~20% faster than their current planar NAND, which makes sense given the bigger process node. Sure 20% isn't all that much but it will at least temporarily help with the situation.

Are they trying RAID inside of RAID within the NAND flash?
 

npaladin-2000

Senior member
May 11, 2012
450
3
76
Are they trying RAID inside of RAID within the NAND flash?

Umm, yeah, they've been doing that for a long while now; that's part of the reason SSDs are so frigging fast. All the NAND chips are operating in what is in essence a RAID-0 or RAID-5 array (some drives now incorporate parity calculations for NAND cell fault tolerance and error correction).
 