Practically speaking, that's an insignificant reduction compared to using the second NVMe slot on AMD's X470 platform.
By running a simultaneous copy during a full transfer speed test, he reduced the Z370 drive's speed to ~2600 MB/s.
Using the second NVMe slot on AMD X470, speed was reduced to ~1300 MB/s while the rest of the system was doing nothing at all.
So Z370 was still twice as fast while doing a simultaneous copy. How terrible for Intel.
In practical terms running two fast NVMe drives:
1: On Intel Z370, your drives almost always run at full speed in normal use cases (not benchmarks that copy simultaneously). Very occasionally you share some bandwidth and still get very high speed, a drop that would almost certainly go unnoticed.
2: On AMD X470, one of your drives is fast all the time, while the other drive is drastically slower all the time.
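The numbers above track the raw link budgets involved. A back-of-the-envelope calculation (assuming the standard PCIe signaling rates and encodings: 8 GT/s with 128b/130b for gen 3, 5 GT/s with 8b/10b for gen 2) shows why even a shared Z370 DMI 3.0 link beats the X470 second slot's PCIe 2.0 x4 ceiling:

```python
# Theoretical PCIe link bandwidth; real drives see somewhat less
# due to protocol and controller overhead.

def pcie_bandwidth_gbs(gt_per_s, encoding_efficiency, lanes):
    """Usable bandwidth in GB/s for a PCIe link."""
    return gt_per_s * encoding_efficiency * lanes / 8  # bits -> bytes

# PCIe 3.0 x4 (the Z370 DMI 3.0 uplink): 8 GT/s, 128b/130b encoding
gen3_x4 = pcie_bandwidth_gbs(8.0, 128 / 130, 4)

# PCIe 2.0 x4 (X470's chipset-attached second NVMe slot): 5 GT/s, 8b/10b encoding
gen2_x4 = pcie_bandwidth_gbs(5.0, 8 / 10, 4)

print(f"PCIe 3.0 x4: {gen3_x4:.2f} GB/s")  # ~3.94 GB/s
print(f"PCIe 2.0 x4: {gen2_x4:.2f} GB/s")  # ~2.00 GB/s

# Even an even two-way split of the Z370 DMI budget is comparable
# to the X470 second slot's entire theoretical ceiling:
print(f"Half of DMI 3.0: {gen3_x4 / 2:.2f} GB/s")  # ~1.97 GB/s
```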
Only extreme AMD partisans make the Z370 bandwidth sharing into some kind of major, show-stopping drawback while quietly ignoring how much worse it is on X470.
Partitioning...
Primary storage
Secondary storage
Tiering
Using an amazingly fast primary storage device as a cache space
Chipset devices:
SATA / PCIe 2.0 / NVMe #2 / USB / etc.
The point is to allow for clear partitioning and tiering while realizing that chipset devices are 2nd-class citizens.
You won't grasp this, nor the use cases that require it. An 8-core processor is not a run-of-the-mill basic desktop platform; it's a power-user/server-grade core count. I am currently on a dual-core 2.5 GHz i5. It's what I use for mundane tasks like browsing the web. I don't use an 8-core machine for regular user tasks.

If I use an SSD as my OS drive and the NVMe exclusively as a caching drive between it and RAM, using my own caching policy [I wrote the program], my NVMe is most definitely going to be slamming the PCIe 3.0 x4 interface. I don't want this impacting all of my other 2nd-class citizens hanging off of the chipset, and I don't want 2nd-class-citizen data flow activity impacting my high-speed caching performance.
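The caching program mentioned above isn't shown, so as a purely illustrative sketch, here is what such a tiering policy can look like: an LRU write-through cache, with plain dicts standing in for the NVMe tier and the slower backing SSD (the class name, block granularity, and eviction policy are all assumptions, not the poster's actual design):

```python
from collections import OrderedDict

class NVMeCacheTier:
    """Illustrative LRU write-through cache tier (hypothetical design)."""

    def __init__(self, capacity_blocks, backing_store):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()      # stand-in for the fast NVMe drive
        self.backing = backing_store    # stand-in for the slower OS SSD

    def read(self, block_id):
        if block_id in self.cache:                 # cache hit: serve from NVMe
            self.cache.move_to_end(block_id)       # mark most recently used
            return self.cache[block_id]
        data = self.backing[block_id]              # miss: fetch from slow tier
        self._install(block_id, data)
        return data

    def write(self, block_id, data):
        self.backing[block_id] = data              # write-through to the SSD
        self._install(block_id, data)              # keep the hot copy on NVMe

    def _install(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        if len(self.cache) > self.capacity:        # evict least recently used
            self.cache.popitem(last=False)

backing = {n: f"block-{n}" for n in range(10)}
tier = NVMeCacheTier(capacity_blocks=3, backing_store=backing)
for n in (0, 1, 2, 0, 3):                          # block 1 becomes the LRU victim
    tier.read(n)
print(sorted(tier.cache))  # -> [0, 2, 3]
```

Any policy like this keeps the NVMe link busy servicing hits and fills, which is exactly why the poster wants it isolated from chipset traffic.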
You're not thinking like an enterprise user, and you fail to realize that's what an 8-core processor is. Intel never wanted people to realize this because they wanted to continue their Xeon meme pricing and sales. There are a number of use cases where someone could have a constant 400 MB/s flow through a chipset-based device, and that could easily grow by two or three more such flows. USB 3.1 is 10 Gb/s, i.e. ~1.2 GB/s. There is absolutely zero reason to have my first-class storage device impacted by second-rate-citizen data flows that can be substantial for a number of users, and zero reason to let one second-class citizen rob all the others.
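The arithmetic behind those figures is straightforward. Assuming USB 3.1 Gen 2's standard 128b/132b encoding, and taking the 400 MB/s flows and flow counts from the scenario above at face value, the aggregate chipset traffic consumes a substantial share of a DMI 3.0 (PCIe 3.0 x4-equivalent) uplink:

```python
# USB 3.1 Gen 2 signals at 10 Gb/s; 128b/132b encoding gives the usable payload rate.
usb31_gbs = 10 * (128 / 132) / 8
print(f"USB 3.1 usable: {usb31_gbs:.2f} GB/s")  # ~1.21 GB/s

# Three steady 400 MB/s device flows plus one saturated USB 3.1 port:
aggregate_gbs = 3 * 0.4 + usb31_gbs
dmi3_gbs = 3.94  # DMI 3.0 is a PCIe 3.0 x4-equivalent link

print(f"Aggregate chipset traffic: {aggregate_gbs:.2f} GB/s")
print(f"Share of DMI 3.0 budget:   {aggregate_gbs / dmi3_gbs:.0%}")
```

Over half the shared uplink can be eaten by secondary devices before the primary storage device moves a single byte, which is the contention being described.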
I've concluded my exchanges with you.
You don't want to get the point, and you have a sea of excuses every time someone makes one.
You keep using terms like "partisans". This isn't politics, man. This is technology/science/math/data/facts.
There is zero room for politics here. The AMD secondary NVMe slot has much slower performance because they restrict it to a PCIe 2.0 interface off the chipset, treating it like they treat every other device hanging off of it: a secondary citizen. No one device should be able to hog the chipset's bandwidth. That's the whole point of a chipset-based shared interface: give slower citizens equal access to a shared lane.
This is actually the very thing Intel were such @(#&@! about: PCIe lanes. They reserved higher lane counts for enterprise customers and charged an arm and a leg for them via Xeons (and even there they kept lane counts artificially low). They do this to force people into proprietary interfaces that are exclusive to Intel. PCIe was created for the very reason of being an open, standard interface to the CPU; Intel fights this constantly with proprietary gimmicks. NVMe comes out, and they had years of board generations to figure out that it should have a dedicated interface to the CPU. They resisted and fought this and hung it off some stupid chipset interface that used to be PCIe 2.0. Then they come out with Optane...
They engaged in PCIe gimmicks for desktop/enterprise users. Nvidia absolutely hated them for this, as it shut them out of sales. Intel did it on purpose because Nvidia's GPUs were becoming first-class citizens in data centers.
You don't seem to have this broad knowledge/understanding, nor the historical perspective that would have formed if you had been active and non-partisan. Instead you try to run every sound discussion into the ground with these partisan accusations. Someone states they have just as much Intel equipment as AMD equipment... that doesn't register with you. I highlighted the clear issue DMI 3.0 has at an architectural level, and you still voiced disbelief at something that is painfully obvious. A user goes out of their way to show you a benchmark demonstrating the very thing I stated, and you still have excuses.
Get your head out of your butt, man. No one who's a serious systems builder is buying this charade.
When even Linus has negative things to say and is 100% right, you know you've messed up:
https://www.youtube.com/watch?v=TWFzWRoVNnE
Intel's platform for years has been centered on artificial segmentation and a landmine-laden maze to trap users and exploit them. AMD caught them with their pants down and delivered a far better platform. Until Intel gets their head out of their butts (not going to happen while apologists keep making excuses and buying into their madness), they are non-existent to me.
https://www.amazon.com/Intel-VROC-STD-RAID-VROCSTANMOD/dp/B077PXMDMZ
https://www.pcworld.com/article/319...le-crazy-raid-configurations-for-a-price.html
FFS!