Intel Optane AMA - Starts Wednesday at 11am

Status
Not open for further replies.

CatMerc

Golden Member
Jul 16, 2016
1,114
1,153
136
Are there any plans to bring Optane DIMMS to consumer? Being able to install your OS on near DRAM latency storage would be an incredible way to differentiate from competitors.
 

Franz01234

Junior Member
Sep 26, 2018
1
0
1
The Optane 900P and 905P PCIe cards both have a PCIe x4 connection and are somewhat bandwidth-limited by that. Will there be Optane PCIe cards with x8 or even x16 connections, to keep all the benefits of Optane while also having improved bandwidth for large file transfers?
 

arandomguy

Senior member
Sep 3, 2013
556
183
116
Are there plans to further expand the capabilities of Intel RST Caching to be used in conjunction with consumer Optane drives?

Specifically I would be interested in -

1) The ability to use a consumer Optane drive as both a cache and a regular storage drive (perhaps via partitioning). For example with a hypothetical 256GB Optane drive reserving 64GB of it to act as a cache while the rest acts as a regular drive.

2) The ability to act as a cache for multiple other drives in a system, as opposed to only one. For example, a system with 2 conventional HDDs and an SSD, with a single Optane drive acting as a cache for all 3?

3) Expanding the cache size?

Related to this, do Optane SSD and Optane Memory products have functional differences when acting as a cache via Intel RST?

Are there any plans for consumer-level products using the U.2 form factor?
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
Are there any plans to bring Optane DIMMS to consumer? Being able to install your OS on near DRAM latency storage would be an incredible way to differentiate from competitors.

Thanks for the question, CatMerc. Persistent memory depends on CPU and Optane architecture working well together, and we're constantly evaluating additional market opportunities.
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
Do you see Optane replacing NAND based SSD anytime soon? Do you see it getting cheaper by economy of scale?

Thanks, Atul. Intel has announced QLC NAND SSDs. We see those as growing the NAND storage market aggressively, as an alternative to HDDs. We believe a tiered data strategy of Optane as cache/journaling/meta-data combined with QLC to reduce total cost of ownership of data storage is a great approach. It delivers the most value for customers through performance and cost-effectiveness.
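The tiered cache/capacity idea described above can be sketched with a toy model. Everything below is an illustrative assumption, not an Intel figure: the block counts, the LRU policy, and the skewed workload simply stand in for a fast Optane-like tier absorbing hot metadata/journal reads in front of bulk QLC storage.

```python
from collections import OrderedDict
import random

class TieredStore:
    """Toy model of a fast cache tier (Optane-like) in front of bulk QLC storage."""
    def __init__(self, cache_blocks):
        self.cache = OrderedDict()          # LRU order: block -> True
        self.cache_blocks = cache_blocks
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark most-recently-used
        else:
            self.misses += 1                # served from the slow QLC tier
            self.cache[block] = True
            if len(self.cache) > self.cache_blocks:
                self.cache.popitem(last=False)  # evict least-recently-used

rng = random.Random(0)
store = TieredStore(cache_blocks=64)
# Skewed workload: 90% of reads hit a small hot set (metadata/journals),
# the rest are scattered over a large cold address range.
for _ in range(10_000):
    block = rng.randrange(32) if rng.random() < 0.9 else rng.randrange(100_000)
    store.read(block)
print(f"hit rate: {store.hits / (store.hits + store.misses):.0%}")
```

With the hot set fitting entirely in the fast tier, nearly all of the 90% hot traffic is served at cache speed, which is the intuition behind putting Optane in front of QLC.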
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
The Optane 900P and 905P PCIe cards both have a PCIe x4 connection and are somewhat bandwidth-limited by that. Will there be Optane PCIe cards with x8 or even x16 connections, to keep all the benefits of Optane while also having improved bandwidth for large file transfers?

Thanks, Franz. There are third-party solutions enabling multiple SSDs to be aggregated into a high-bandwidth slot. In addition, the PCIe 4.0 spec has been announced, which will bring higher bandwidth to future platforms and SSDs.
 

Steve heller

Junior Member
Sep 26, 2018
4
0
1
Are there any figures for latency and bandwidth of the pm product? Also is there any update on the developer challenge?
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
Are there plans to further expand the capabilities of Intel RST Caching to be used in conjunction with consumer Optane drives?

Specifically I would be interested in -

1) The ability to use a consumer Optane drive as both a cache and a regular storage drive (perhaps via partitioning). For example with a hypothetical 256GB Optane drive reserving 64GB of it to act as a cache while the rest acts as a regular drive.

2) The ability to act as a cache for multiple other drives in a system, as opposed to only one. For example, a system with 2 conventional HDDs and an SSD, with a single Optane drive acting as a cache for all 3?

3) Expanding the cache size?

Related to this, do Optane SSD and Optane Memory products have functional differences when acting as a cache via Intel RST?

Are there any plans for consumer-level products using the U.2 form factor?

Thanks for the questions, Arandomguy. Regarding your first question, we don't support partitioning the drive to be used as both a cache and storage, and we currently don't have plans to support this in the future. As to your second question, we are exploring this possibility for a future release. For your third question, yes--and in time densities will continue to increase. As to your fourth question, yes, we have U.2 consumer-level products. You can find information here: https://ark.intel.com/products/1236...0P-Series-280GB-2-5in-PCIe-x4-20nm-3D-XPoint-
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
Are there any figures for latency and bandwidth of the pm product? Also is there any update on the developer challenge?

Thanks for the question, Steve. We are growing a developer community for optimizing software applications to take advantage of Persistence, aligned to the Intel Optane DC Persistent Memory DIMMs. More info here: http://pmem.io/pmdk/ We previously shared some details on PM capabilities. You can find them here: https://www.anandtech.com/show/12826/intel-persistent-memory-event-live-blog.

Regarding the developer challenge--you can register at the following website to get an email as soon as the program launches: https://builders.intel.com/persistent-memory-developer-challenge
 

Steve heller

Junior Member
Sep 26, 2018
4
0
1
Thanks for the question, Steve. We are growing a developer community for optimizing software applications to take advantage of Persistence, aligned to the Intel Optane DC Persistent Memory DIMMs. More info here: http://pmem.io/pmdk/ We previously shared some details on PM capabilities. You can find them here: https://www.anandtech.com/show/12826/intel-persistent-memory-event-live-blog.

Regarding the developer challenge--you can register at the following website to get an email as soon as the program launches: https://builders.intel.com/persistent-memory-developer-challenge

I have already signed up for the challenge but was wondering when it might start. I'll take a look at the blog posts when I get home this evening. Thanks!
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
I am already using VROC (4 900P in VROC 0) on one of my workstations. I am asking about the future of hardware RAID controllers. With Optane DIMMs on the way it would seem that RAID controllers that support both DDR4 and Optane DIMMs could offer more flexibility than is currently available.



This is what I am referring to:

https://downloadcenter.intel.com/do...-Driver-for-System-Acceleration?product=99745

This download is valid for the product(s) listed below.

The 800P is not mentioned. Neither is the 900P which also works with Optane cache.

Hi, nosirrahx. For your RAID question: in our testing with all-SSD arrays, bypassing the hardware RAID controller's cache provides the best performance for most applications. Intel® Virtual RAID on CPU (Intel® VROC) is an enterprise RAID solution specifically designed for NVMe*-based solid-state drives (SSDs). The biggest advantage of Intel® VROC is the ability to connect NVMe-based SSDs directly to the Intel® Xeon® Scalable Processor's PCIe* lanes, then build RAID arrays from those SSDs without a RAID host bus adapter (HBA). As a result, Intel VROC unleashes NVMe SSD performance potential without the complexity and power consumption of a traditional hardware RAID HBA. We believe hardware RAID controllers don't provide the best value with NVMe SSDs.

For your caching-support question: We don't actively promote our SSD line as a caching device--but the functionality exists if someone chooses to use it.
 

nosirrahx

Senior member
Mar 24, 2018
304
75
101
Hi, nosirrahx. For your RAID question: in our testing with all-SSD arrays, bypassing the hardware RAID controller's cache provides the best performance for most applications. Intel® Virtual RAID on CPU (Intel® VROC) is an enterprise RAID solution specifically designed for NVMe*-based solid-state drives (SSDs). The biggest advantage of Intel® VROC is the ability to connect NVMe-based SSDs directly to the Intel® Xeon® Scalable Processor's PCIe* lanes, then build RAID arrays from those SSDs without a RAID host bus adapter (HBA). As a result, Intel VROC unleashes NVMe SSD performance potential without the complexity and power consumption of a traditional hardware RAID HBA. We believe hardware RAID controllers don't provide the best value with NVMe SSDs.

Let me ask a much more specific question then. A friend of mine uses large arrays of massive HDDs (16 or more of them) for media editing purposes.

Current RAID solutions allow for caching in the form of BBU RAM and SSDs. I am asking if there are any plans for Intel RAID cards with Optane integrated.

As far as VROC goes, my personal needs are covered.
 

ikjadoon

Member
Sep 4, 2006
147
253
146
I am curious about Optane Memory as a stand-alone data drive, which the Optane FAQ says can be used that way, even though it is not a supported configuration!

1) If Optane Memory is used as a stand-alone data drive, can it be used with any system? Say I put it in an NVMe-to-USB external enclosure. Does Optane Memory, as a data drive, work on older or otherwise unsupported systems?

2) Say I’m extra crazy. If I ever wanted to boot from Optane Memory (again noted as possible, but unsupported), can it be formatted as FAT32? Say, for a speedy Live CD or Win10 installer.
 

ikjadoon

Member
Sep 4, 2006
147
253
146
Market opportunity for Intel:

You have the highest performance 32GB drive in the history of storage, from my research.

You all should put Optane into SD cards and USB drives. We don’t need terabytes. But quickly moving 15GB between computers would be a godsend at this price point. You’d sell them like hotcakes!

I always want a speedy, small external drive to install Windows or to move some files between computers.
 

zir_blazer

Golden Member
Jun 6, 2013
1,184
459
136
Here is a cluster bomb of questions about Optane



1 - Are there any plans related to LightNVM / OpenChannel based product?

Basically, these interfaces would be used to make an Optane SSD with a dumb PCIe controller interface that exposes the Optane memory chips, leaving the host computer to do the job of the Optane controller using CPU and RAM resources. As CPU core counts grow and RAM becomes cheaper, this could make sense if the cost reduction from not needing a controller and RAM on the SSD itself is significant enough.


2 - How much does the current PCIe topology of Intel consumer platforms affect the performance, latency and scalability of multiple Optane units?

Yes, I know that the Cannon Point chipset generation (2xx) added 4 more PCIe lanes, but the point is that a single Optane drive already maxes out 4 PCIe 3.0 lanes, which means it can bottleneck the chipset's DMI link, since that is also 4 PCIe 3.0 lanes wide but shared among all the chipset peripherals. Moreover, being one hop away from the processor is less than optimal from a latency standpoint (which is Optane's strength). Aren't you focusing a bit too much on connecting Optane via the chipset when that seems suboptimal?
I think that using the processor's PCIe controller bifurcated as 8x/4x/4x--a video card plus either two Optane drives, or one Optane and a miscellaneous PCIe device (a 10G NIC?)--is the sweet spot for balancing processor PCIe lane usage on the consumer platform. Yes, I would step up to a Xeon W with 48 processor PCIe lanes if I could, since I love I/O (and for some reason they have 4 more lanes than the LGA 2066 Core i9...). But this leads to the next question...


3 - Besides the Optane SSD units themselves, how important do you consider the entire ecosystem of supporting adapters and accessories? How much is it estimated that they indirectly affect Optane (or other NVMe SSD) prices and adoption?

While the 8x/4x/4x arrangement I mentioned is technically possible, and from my point of view the sweet spot, one problem I see is that even if you can get the PCIe lanes, you never get them in the way you need for your specific configuration. For example, if I wanted to use two U.2 Optane drives, I would need two U.2 connectors on the motherboard, which I don't think is a common arrangement (*AND* they'd have to come from the processor, not the chipset!). Three 16x PCIe slots on a motherboard is the easiest way I can get 8x/4x/4x on an Intel consumer platform, but that would force me either to get two PCIe 4x-to-U.2 adapter cards, or to buy the two Optane drives in the PCIe-card form factor. Another alternative would be M.2-to-U.2 adapters. Yet another seems to be a PCIe 8x-to-dual-U.2 card, assuming in-slot bifurcation is a supported platform feature (otherwise you need an expensive PCIe switch).
All in all, it seems that in a lot of cases you either need to purchase two different types of Optane units, or at least some kind of adapter, since motherboards seem focused on providing at least one of each kind of connector. Something like the ASRock ULTRA QUAD M.2 CARD but for U.2 would be nice.


4 - Is an Optane drive with an OcuLink interface being considered? It seems far more elegant than the overkill U.2 cables.

As if we didn't have enough interfaces and adapters already...


5 - Does Optane play any role in pushing Intel to reconsider the number of processor PCIe lanes on its consumer platforms?

Your competitor is in a slightly better position, since they offer 20 PCIe lanes from the processor on their consumer platforms, potentially usable as 8x/8x/4x. If my thought process didn't bore you, then by this point you'll have noticed that I think Optane needs more processor PCIe lanes to really shine when using more than one NVMe SSD.



That's pretty much all the Optane questions I can think about.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Moreover, being one hop away from the Processor is less than optimal from a latency standpoint (Which is Optane strength). Aren't you focusing a bit too much on connecting Optane via the Chipset when that seems to be suboptimal?

There's some difference in latency, but it's negligible.

For bandwidth, the 900P/905P doesn't saturate PCIe x4, so DMI shouldn't be a limiter. PCIe x4 is capable of about 3.5GB/s of throughput, but the 905P only reaches 2.6GB/s reads and 2.2GB/s writes. The 2TB version of Samsung's 970 Pro can nearly saturate it, with sequential reads reaching 3.5GB/s. Again, not the case for Optane.
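The arithmetic behind that headroom claim is easy to check. The 2.6GB/s figure is the 905P read number quoted above; the ~3.5GB/s usable figure reflects packet/protocol overhead on top of the raw 128b/130b-encoded line rate:

```python
def pcie3_payload_gbps(lanes):
    """Theoretical PCIe 3.0 data bandwidth in GB/s, before protocol overhead."""
    gt_per_s = 8.0               # PCIe 3.0 line rate per lane (8 GT/s)
    encoding = 128 / 130         # 128b/130b encoding efficiency
    return lanes * gt_per_s * encoding / 8   # divide by 8: bits -> bytes

x4 = pcie3_payload_gbps(4)      # ≈ 3.94 GB/s raw; ~3.5 GB/s usable after
                                # TLP/packet overhead
print(f"PCIe 3.0 x4: {x4:.2f} GB/s")
print(f"905P read headroom below the raw link rate: {x4 - 2.6:.2f} GB/s")
```

So even against the raw link rate, a 905P read workload leaves well over 1GB/s of x4 headroom, which is why DMI sharing is not the bottleneck for a single drive.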
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
I am curious about Optane Memory as a stand-alone data drive, which the Optane FAQ says can be used that way, even though it is not a supported configuration!

1) If Optane Memory is used as a stand-alone data drive, can it be used with any system? Say I put it in an NVMe-to-USB external enclosure. Does Optane Memory, as a data drive, work on older or otherwise unsupported systems?

2) Say I’m extra crazy. If I ever wanted to boot from Optane Memory (again noted as possible, but unsupported), can it be formatted as FAT32? Say, for a speedy Live CD or Win10 installer.

Yes--any system that has been shown to support a standard NVMe SSD should also be likely to work with an Intel Optane Memory device used as an NVMe SSD.
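For anyone wanting to confirm that an Optane Memory module enumerates like any other NVMe drive, here is a minimal Linux-only sketch that walks the kernel's sysfs tree (a standard interface, but the paths are Linux-specific; it returns an empty list on systems without NVMe or without /sys):

```python
from pathlib import Path

def list_nvme_namespaces():
    """Return NVMe namespace block devices (e.g. 'nvme0n1') visible to Linux.

    An Optane Memory module used as a plain data drive should appear here
    exactly like any other NVMe SSD. Returns [] if /sys/class/nvme does
    not exist (no NVMe hardware, or a non-Linux system).
    """
    root = Path("/sys/class/nvme")
    if not root.is_dir():
        return []
    namespaces = []
    for ctrl in sorted(root.iterdir()):             # controllers: nvme0, nvme1, ...
        for entry in sorted(ctrl.iterdir()):
            if entry.name.startswith(ctrl.name):    # namespaces: nvme0n1, nvme0n2, ...
                namespaces.append(entry.name)
    return namespaces

print(list_nvme_namespaces())
```

If the module shows up in this list (or in `lsblk`), the OS is treating it as a standard NVMe SSD, independent of any Optane caching software.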
 
Reactions: ikjadoon

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
Here is a cluster bomb of questions about Optane



1 - Are there any plans related to LightNVM / OpenChannel based product?

Basically, these interfaces would be used to make an Optane SSD with a dumb PCIe controller interface that exposes the Optane memory chips, leaving the host computer to do the job of the Optane controller using CPU and RAM resources. As CPU core counts grow and RAM becomes cheaper, this could make sense if the cost reduction from not needing a controller and RAM on the SSD itself is significant enough.


2 - How much does the current PCIe topology of Intel consumer platforms affect the performance, latency and scalability of multiple Optane units?

Yes, I know that the Cannon Point chipset generation (2xx) added 4 more PCIe lanes, but the point is that a single Optane drive already maxes out 4 PCIe 3.0 lanes, which means it can bottleneck the chipset's DMI link, since that is also 4 PCIe 3.0 lanes wide but shared among all the chipset peripherals. Moreover, being one hop away from the processor is less than optimal from a latency standpoint (which is Optane's strength). Aren't you focusing a bit too much on connecting Optane via the chipset when that seems suboptimal?
I think that using the processor's PCIe controller bifurcated as 8x/4x/4x--a video card plus either two Optane drives, or one Optane and a miscellaneous PCIe device (a 10G NIC?)--is the sweet spot for balancing processor PCIe lane usage on the consumer platform. Yes, I would step up to a Xeon W with 48 processor PCIe lanes if I could, since I love I/O (and for some reason they have 4 more lanes than the LGA 2066 Core i9...). But this leads to the next question...


3 - Besides the Optane SSD units themselves, how important do you consider the entire ecosystem of supporting adapters and accessories? How much is it estimated that they indirectly affect Optane (or other NVMe SSD) prices and adoption?

While the 8x/4x/4x arrangement I mentioned is technically possible, and from my point of view the sweet spot, one problem I see is that even if you can get the PCIe lanes, you never get them in the way you need for your specific configuration. For example, if I wanted to use two U.2 Optane drives, I would need two U.2 connectors on the motherboard, which I don't think is a common arrangement (*AND* they'd have to come from the processor, not the chipset!). Three 16x PCIe slots on a motherboard is the easiest way I can get 8x/4x/4x on an Intel consumer platform, but that would force me either to get two PCIe 4x-to-U.2 adapter cards, or to buy the two Optane drives in the PCIe-card form factor. Another alternative would be M.2-to-U.2 adapters. Yet another seems to be a PCIe 8x-to-dual-U.2 card, assuming in-slot bifurcation is a supported platform feature (otherwise you need an expensive PCIe switch).
All in all, it seems that in a lot of cases you either need to purchase two different types of Optane units, or at least some kind of adapter, since motherboards seem focused on providing at least one of each kind of connector. Something like the ASRock ULTRA QUAD M.2 CARD but for U.2 would be nice.


4 - Is an Optane drive with an OcuLink interface being considered? It seems far more elegant than the overkill U.2 cables.

As if we didn't have enough interfaces and adapters already...


5 - Does Optane play any role in pushing Intel to reconsider the number of processor PCIe lanes on its consumer platforms?

Your competitor is in a slightly better position, since they offer 20 PCIe lanes from the processor on their consumer platforms, potentially usable as 8x/8x/4x. If my thought process didn't bore you, then by this point you'll have noticed that I think Optane needs more processor PCIe lanes to really shine when using more than one NVMe SSD.



That's pretty much all the Optane questions I can think about.

To your first question: The media used in an Optane SSD is quite different from NAND. NAND media is erased in large blocks and written in smaller pages, and you must erase before you write. Because a large area must be erased before a small area can be written, NAND needs significant spare area, plus garbage collection to clean up and recover that space. The main benefit of Open-Channel NVMe is enabling the host to control when garbage-collection events happen, which in theory should allow more of the spare capacity to be used and should minimize garbage-collection performance disturbance with NAND. Optane media is write-in-place, meaning there is no need to erase data before writing, so Optane SSDs have no equivalent concept of garbage collection. This is one of the reasons Optane SSD performance is nearly the same for reads, writes, or mixed workloads, and it is also why Optane SSD latency variability is so much better than a NAND SSD's. While it's not clear there's a media benefit, we'll continue to evaluate whether this makes sense.
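The erase-before-write distinction above can be illustrated with a toy cost model. The page and block counts are illustrative assumptions, not real device geometry, and the model ignores real-firmware details like over-provisioning and wear leveling:

```python
# Toy contrast of NAND-style erase-before-write vs. write-in-place media.
PAGES_PER_BLOCK = 64   # illustrative geometry, not a real device parameter

def nand_rewrite_cost(valid_pages_in_block):
    """Physical page writes needed to rewrite one page in a full NAND block.

    Garbage collection must relocate the block's other still-valid pages
    before the block can be erased, so one logical write can cost many
    physical writes (write amplification)."""
    relocations = valid_pages_in_block - 1   # copy everything but the target
    return relocations + 1                   # plus the new copy of the page

def write_in_place_cost():
    """Write-in-place media: overwrite the target directly; no erase, no GC."""
    return 1

print("NAND worst case:", nand_rewrite_cost(PAGES_PER_BLOCK), "page writes")
print("write-in-place: ", write_in_place_cost(), "page write")
```

The gap between those two numbers is why write-in-place media sustains near-identical read, write, and mixed performance with far lower latency variability: there is no background relocation work competing with host I/O.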

To your second question: Direct CPU attach for a single device reduces latency by removing the PCH hop. We can measure this at the hardware level, but it may not show an end-user application benefit due to software overhead. The real benefit of CPU-attached storage comes with multiple devices and RAID. As you highlighted, direct CPU attach can provide a benefit when multiple devices are connected via RAID to reach higher sequential performance.

To your third question: We recently announced the M.2 905P with capacities up to 380GB, and we continue to work with the ecosystem to take advantage of the form factor and create M.2 adapters (up to 4x M.2), similar to the ASRock card you mention.

To your fourth question: It's an interesting idea, but we have no plans at this point for something like that.

To your fifth question: The Intel Optane team continues to work with the Intel CPU and platform teams to evaluate the need for more PCIe lanes in consumer platforms--something we are working on for future Intel platforms.

Thanks for your questions!
 
Reactions: zir_blazer

Steve heller

Junior Member
Sep 26, 2018
4
0
1
I would like access to a server with Optane DC Persistent memory, for just a few hours, so I can test a library that I have developed that I think will perform very well with such a setup. Is there any way that could be arranged?
 

russell2000

Junior Member
Apr 27, 2017
1
0
6
I actually bought what was probably one of the first 900P drives (480GB). Using it as my main OS drive, it mostly works fine, but frequently it will be painfully slow, and disk usage shows it at 100% even though I'm not doing anything other than opening Windows File Explorer. Is there anything I need to do to make sure I'm getting the best performance out of it?

Thanks
 

Billy Tallis

Senior member
Aug 4, 2015
293
146
116
Is an Optane with an OcuLink interface being considered? Seems to be far more elegant than the overkill U.2 cables.

OCuLink is great for connecting a motherboard to a backplane, but nobody is going to put that connector on the drive itself because it doesn't deliver power and doesn't hot-swap well.
 

Intel_Optane_Team

Team Intel Optane
Official Representative
Sep 24, 2018
20
5
36
I would like access to a server with Optane DC Persistent memory, for just a few hours, so I can test a library that I have developed that I think will perform very well with such a setup. Is there any way that could be arranged?

Hi, Steve. We see you're active on the Accelerate with Optane GitHub and have forwarded your request to the Optane DC Persistent Memory Team. Thanks for your interest!
 
Reactions: Steve heller