Are there any plans to bring Optane DIMMs to the consumer market? Being able to install your OS on near-DRAM-latency storage would be an incredible way to differentiate from competitors.
Do you see Optane replacing NAND-based SSDs anytime soon? Do you see it getting cheaper through economies of scale?
The Optane 900P and 905P PCIe cards both use a PCIe 3.0 x4 connection and are somewhat bandwidth-limited by it. Will there be Optane PCIe cards with x8 or even x16 connections, keeping all the benefits of Optane while improving bandwidth for large file transfers?
Are there plans to further expand the capabilities of Intel RST Caching to be used in conjunction with consumer Optane drives?
Specifically, I would be interested in:
1) The ability to use a consumer Optane drive as both a cache and a regular storage drive (perhaps via partitioning). For example, with a hypothetical 256GB Optane drive, reserving 64GB to act as a cache while the rest acts as a regular drive.
2) The ability to act as a cache for multiple other drives in a system, as opposed to only one. For example, a system with two conventional HDDs and an SSD, with a single Optane drive acting as a cache for all three?
3) Expanding the cache size?
Related to this: do Optane SSD and Optane Memory products have functional differences when acting as a cache via Intel RST?
Are there any plans for consumer-targeted products using the U.2 form factor?
Are there any figures for latency and bandwidth of the persistent memory (PM) product? Also, is there any update on the developer challenge?
Thanks for the question, Steve. We are growing a developer community focused on optimizing software applications to take advantage of persistence, aligned with the Intel Optane DC Persistent Memory DIMMs. More info here: http://pmem.io/pmdk/ We previously shared some details on PM capabilities; you can find them here: https://www.anandtech.com/show/12826/intel-persistent-memory-event-live-blog.
Regarding the developer challenge: you can register at the following website to get an email as soon as the program launches. https://builders.intel.com/persistent-memory-developer-challenge
I am already using VROC (four 900Ps in a VROC RAID 0) on one of my workstations. I am asking about the future of hardware RAID controllers. With Optane DIMMs on the way, it would seem that RAID controllers supporting both DDR4 and Optane DIMMs could offer more flexibility than is currently available.
This is what I am referring to:
https://downloadcenter.intel.com/do...-Driver-for-System-Acceleration?product=99745
This download is valid for the product(s) listed below.
- Intel® Optane™ Memory M10 Series (16GB, M.2 80mm PCIe 3.0, 20nm, 3D XPoint™)
- Intel® Optane™ Memory M10 Series (32GB, M.2 80mm PCIe 3.0, 20nm, 3D XPoint™)
- Intel® Optane™ Memory M10 Series (64GB, M.2 80mm PCIe 3.0, 20nm, 3D XPoint™)
- Intel® Optane™ Memory Series (16GB, M.2 80mm PCIe 3.0, 20nm, 3D XPoint™)
- Intel® Optane™ Memory Series (32GB, M.2 80mm PCIe 3.0, 20nm, 3D XPoint™)
The 800P is not mentioned. Neither is the 900P, which also works with Optane caching.
Hi, nosirrahx. For your RAID question: in our testing with all-SSD arrays on hardware RAID controllers, most applications see the best performance when the RAID controller's cache is bypassed. Intel® Virtual RAID on CPU (Intel® VROC) is an enterprise RAID solution designed specifically for NVMe*-based solid-state drives (SSDs). The biggest advantage of Intel VROC is the ability to connect NVMe SSDs directly to the Intel® Xeon® Scalable processor's PCIe* lanes and build RAID arrays from those SSDs without a RAID host bus adapter (HBA). As a result, Intel VROC unlocks NVMe SSD performance potential without the complexity and power consumption of a traditional hardware RAID HBA. We believe hardware RAID controllers don't provide the best value with NVMe SSDs.
I am curious about Optane Memory as a stand-alone data drive. The Optane FAQ says that, while it is not a supported configuration, it can be used as such!
1) If Optane Memory is used as a stand-alone data drive, can it be used with any system? Say I put it in an NVMe-to-USB external enclosure. Does Optane Memory, as a data drive, work on older or otherwise unsupported systems?
2) Say I’m extra crazy. If I ever wanted to boot from Optane Memory (again, noted as possible but unsupported), can it be formatted as FAT32? Say, for a speedy Live CD or Win10 installer.
Here is a cluster bomb of questions about Optane:
1 - Are there any plans for LightNVM / Open-Channel based products?
Basically, these interfaces would be used to build an Optane SSD with a dumb PCIe controller that simply exposes the Optane memory chips, leaving the host computer to do the Optane controller's job using its own CPU and RAM resources. As CPU core counts grow and RAM becomes cheaper, this could make sense if the cost saved by omitting the controller and RAM on the SSD itself is large enough.
2 - How much does the current PCIe topology of Intel consumer platforms affect the performance, latency and scalability of multiple Optane units?
Yes, I know that the Cannon Point chipset generation (2xx) added 4 more PCIe lanes, but the point is that a single Optane drive already saturates 4 PCIe 3.0 lanes, which means it can bottleneck the chipset's DMI link, since that is also 4 PCIe 3.0 lanes wide but shared among all chipset peripherals. Moreover, being one hop away from the processor is less than optimal from a latency standpoint (which is Optane's strength). Aren't you focusing a bit too much on connecting Optane via the chipset when that seems suboptimal?
I think that relying on the processor's PCIe controller, bifurcated as 8x/4x/4x to host a video card plus either two Optane drives, or one Optane and a miscellaneous PCIe device (a 10G NIC?), is the sweet spot for balancing processor PCIe lane usage on the consumer platform. Yes, I would step up to a Xeon W with 48 processor PCIe lanes if I could, since I love I/O (and for some reason they have 4 more lanes than the LGA 2066 Core i9...). But this leads to the next question...
3 - Besides the Optane SSDs themselves, how important do you consider the whole ecosystem of supporting adapters and accessories? How much do you estimate they indirectly affect Optane (or other NVMe SSD) prices and adoption?
While the 8x/4x/4x arrangement I mentioned is technically entirely possible, and in my view the sweet spot, one problem I see is that even when you can get the PCIe lanes, you never get them in the form your specific configuration needs. For example, if I wanted to use two U.2 Optane drives, I would need two U.2 connectors on the motherboard, which I don't think is a common arrangement (*AND* they would have to come from the processor, not the chipset!). Three x16 PCIe slots is probably the easiest way to get 8x/4x/4x on an Intel consumer platform, but that would force me either to get two PCIe x4-to-U.2 adapter cards, or to buy both Optane drives in the PCIe add-in-card form factor. Another alternative would be M.2-to-U.2 adapters. Yet another would be a PCIe x8-to-dual-U.2 card, assuming in-slot bifurcation is a supported platform feature (otherwise you need an expensive PCIe switch).
All in all, it seems that in many cases you either need to purchase two different types of Optane units, or at least some kind of adapter, since motherboards seem focused on providing just one of each kind of connector. Something like the ASRock ULTRA QUAD M.2 CARD, but for U.2, would be nice.
4 - Is an Optane drive with an OCuLink interface being considered? It seems far more elegant than the overkill of U.2 cabling.
As if we didn't have enough interfaces and adapters already...
5 - Does Optane play any role in forcing Intel to reconsider the number of processor PCIe lanes in its consumer platforms?
Your competitor is in a slightly better position, since their consumer platforms offer 20 PCIe lanes from the processor, potentially usable as 8x/8x/4x. If my thought process didn't bore you, then by this point you should see why I think Optane needs more processor PCIe lanes to really shine when more than one NVMe SSD is in use.
That's pretty much all the Optane questions I can think about.
I would like access to a server with Optane DC Persistent Memory, for just a few hours, so I can test a library I have developed that I think will perform very well on such a setup. Is there any way that could be arranged?