Question: PCIe lanes

Moggy

Member
Sep 17, 2018
37
6
41
Hi!
I have an Asus Sage 10G (the motherboard has 4x PCIe x16 slots + 3x PCIe x8 slots).
I want to use the build as file server and a little gaming sometimes.

On there I want to use 1x video card, an AMD Vega 64 (16 PCIe lanes), and
4x Adaptec 72405 RAID cards (8 PCIe lanes each) connected to 96x 128GB SATA SSDs (= a total of 12TB in RAID 0)

--> so that is a lot of SSD data traffic over these lanes to the 2x 10Gb network...
______
The question is which Intel CPU to get: the 7820X (28 lanes) or the 7900X (44 lanes)?
Does the lane count matter when populating with 4x lane-hungry & fast 8-lane RAID cards?
_____
Thanks!
 

VirtualLarry

No Lifer
Aug 25, 2001
56,548
10,171
126
Is this for a YouTube video? Or something serious? There's got to be a better way to get the needed bandwidth and IOPS than stringing together 96 freaking SATA SSDs...

I would just buy one or two $4000-8000 Optane PCI-E 4.0 x16 "accelerators".
 

sdifox

No Lifer
Sep 30, 2005
96,931
16,199
126
Horrible idea.
20Gbps translates to roughly 2GB/s.

One NVMe drive can handle that. So get one of those PCIe NVMe carrier cards that take 4 NVMe drives.

I would get rid of that board and go Threadripper.
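Back-of-envelope, in Python (the per-drive figures are typical assumed numbers, not measurements):

```python
# Rough bandwidth sanity check for this build (all per-drive
# figures are typical/assumed, not measured).

NIC_GBPS = 2 * 10            # two 10GbE ports, in gigabits/s
nic_gbytes = NIC_GBPS / 8    # 2.5 GB/s raw; ~2 GB/s after overhead

SATA_SSD_GBYTES = 0.5        # ~500 MB/s per SATA SSD (assumed)
array_gbytes = 96 * SATA_SSD_GBYTES   # aggregate of 96 drives

NVME_GBYTES = 3.5            # one PCIe 3.0 x4 NVMe drive (assumed)

print(f"Network ceiling: {nic_gbytes:.1f} GB/s")
print(f"96x SATA SSD aggregate: {array_gbytes:.0f} GB/s")
print(f"Single NVMe drive: {NVME_GBYTES} GB/s")
# The array could push ~20x what the NICs can carry, and even one
# NVMe drive already exceeds the network ceiling.
```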
 

Moggy
Horrible idea.
20Gbps translates to roughly 2GB/s.
One NVMe drive can handle that. So get one of those PCIe NVMe carrier cards that take 4 NVMe drives.

I would get rid of that board and go Threadripper.
Correct about the speed of NVMe drives ... (hmmm, why didn't I think of that) oh, oops; because of an extra 3000 euro maybe...?
I now have 12TB of storage SSDs; I got them basically for free because nobody uses small 128GB SSDs anymore. Even one simple 12TB HDD these days would cost me about 250 euro max (but the HDD comes with a lot less IOPS).
What you are talking about is 12TB of NVMe, right? OK...; Ebay: 4TB NVMe = +/- 1000 euro
So 12TB would be 3000 euro!
Check this;
RAID 4-Slot Adapter Card + 4x Intel 660P 1TB SSD

Threadripper has other problems and has nothing to do with my original question;
"which intel CPU to choose?"
 

sdifox
Correct about the speed of NVMe drives ... (hmmm, why didn't I think of that) oh, oops; because of an extra 3000 euro maybe...?
I now have 12TB of storage SSDs; I got them basically for free because nobody uses small 128GB SSDs anymore. Even one simple 12TB HDD these days would cost me about 250 euro max (but the HDD comes with a lot less IOPS).
What you are talking about is 12TB of NVMe, right? OK...; Ebay: 4TB NVMe = +/- 1000 euro
So 12TB would be 3000 euro!
Check this;
RAID 4-Slot Adapter Card + 4x Intel 660P 1TB SSD

Threadripper has other problems and has nothing to do with my original question;
"which intel CPU to choose?"


4x 2TB = 8TB; get two of them = 16TB. Each of those has 15GB/s of throughput, way more than your 20Gbps link can handle.

You never mentioned you already have the SSDs. Plus you are going to run out of lanes: the 72405 is a 24-port card, so you need 4 of them to host 96 SATA drives. Add the x16 video card and you are at 48 lanes.
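The raw lane math can be sketched like this (assuming the 72405's 24 ports, so 4 cards cover 96 drives):

```python
# Sketch of the raw PCIe lane budget, using the counts from this
# thread: 4x Adaptec 72405 (x8 each, 24 ports each) + one x16 GPU.

raid_cards = 4
lanes_per_raid = 8
gpu_lanes = 16

requested = raid_cards * lanes_per_raid + gpu_lanes   # 32 + 16 = 48
cpu_lanes = {"7820X": 28, "7900X": 44}

for cpu, lanes in cpu_lanes.items():
    print(f"{cpu}: {lanes} CPU lanes vs {requested} requested "
          f"(short by {requested - lanes})")
# Neither CPU covers 48 lanes natively; the board's PCIe switches
# have to share CPU links between slots either way.
```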

At least with TR you know there isn't a new vulnerability every couple of weeks.

That is a horrible price for 4x 1TB NVMe + a carrier card.


Add 4x 2TB ADATA SX8200 Pro drives and it still comes out to less than €1000.

So €2000 for 16TB.

Price from US.
 

Moggy
Is this for a YouTube video? Or something serious? There's got to be a better way to get the needed bandwidth and IOPS than stringing together 96 freaking SATA SSDs...

I would just buy one or two $4000-8000 Optane PCI-E 4.0 x16 "accelerators".
Great!! I found a sponsor (you?) that can provide me with $4000-8000 Optane memory

I'm trying to save a few $ to choose between Intel 7820X or 7900X
The difference between 7820X and 7900X is about $50-$100 on ebay @ feb 2020
It's not likely that I will spend $4000 on exotic memory...

Regarding your other question;
It's a file server. It serves lots of small (10MB) files.
 

Moggy
So to get back to my original question;
Running an INTEL system (>NO< AMD please) with 4 fast RAID cards in 8 lane slots;
#1== What would be the impact on the system regarding total lanes used;
and so;
#2== Should I use 28 or 44 lane CPU and why? (intel 7820X vs 7900X)
Thanks
 

sdifox
So to get back to my original question;
Running an INTEL system (I DO NOT WANT AMD) with 4 fast RAID cards in 8 lane slots;
#1== What would be the impact on the system regarding total lanes used;
and so;
#2== Should I use 28 or 44 lane CPU and why? (intel 7820X vs 7900X)
Thanks

Neither will give you enough lanes: 32 + 16 = 48.

Actually, I just looked up the spec on your board; you can use either chip, since the mb provisions enough PCIe lanes.

 

VirtualLarry
You never mentioned you already have the SSDs.
This. I was looking at $35-40 per SSD x 96, that's like $4000 already. Or would be, if you didn't already have them.

PS. Even if you weren't planning on it, if you go through with this, make a YT video PLEASE.
 

VirtualLarry
#2== Should I use 28 or 44 lane CPU and why?
I can't answer that without some research, but you need to actually be asking yourself what the lane layout of your mobo is.

Some mobos are arranged such that ALL of the slots can be accessed (in a limited capacity) with only a 28-lane CPU. That means that dropping a 44-lane CPU in there is a waste, or that the board is the limitation and should be avoided.

There are other boards, that cater to BOTH 28-lane as well as 44-lane CPUs, those are more versatile, but there ends up being parts of the board (PCI-E slots, NVMe slots, etc.) that CANNOT BE USED when using a 28-lane CPU in them.

Then there are (I think?) a few boards, that are explicitly designed for 44-lane CPUs, and may not even work, or even POST (in the extreme case) with 28-lane CPUs.
 

Moggy
Neither will give you enough lanes. 32+16 =48.

Actually, just looked up spec on your board, you can use either chip since the mb provisioned enough pcie lanes.


Please have a look at pages A1 and A2 of the motherboard manual;

Asus uses 2 expensive ($75) PLX8747 multiplexer chips and QSW1480 switches.
So the board provides
2x x16 PCIe + 1x x4 PCIe lanes for the 28-lane CPU (the 1x x4 lanes are for the two 10Gb NICs)
2x x16 PCIe + 3x x4 PCIe lanes for the 44-lane CPU (the extra 2x x4 lanes are for U.2)
but even the 16-lane CPU is connected with 2x x16 PCIe + 1x x4 PCIe lanes to these multiplexers.

Something does not compute for me in these diagrams.
I want to have the most straightforward connection to the CPU, without too many multiplexers in between. Like motherboard PCIe slots 1 & 3 & 5 & 7.
My idea is that a 7900X with 44 lanes would literally have more lanes electrically connected to the motherboard.
But in the Asus diagram these connections seem equal for every CPU, be it 7820X or 7900X...?!
All diagrams for all the different CPUs show the same 2x x16 PCIe plus the x4 PCIe NIC = 36 lanes?!
>?????<

On a side note: at least the electrical lanes on the seven PCIe slots are not connected to the PCH; that would castrate them to DMI 3.0 speed (an x4 link totaling 3.9GB/s).
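A sketch of how I read the topology (assuming each PLX8747 runs x16 upstream / 32 lanes downstream, which is the common configuration for that 48-lane switch):

```python
# My reading of the block diagram, assuming each PLX8747 is wired
# x16 upstream / 32 lanes downstream (assumed, not confirmed).

plx_chips = 2
plx_upstream = 16      # lanes each switch uses toward the CPU
plx_downstream = 32    # lanes each switch offers to the slots
nic_lanes = 4          # x4 link for the dual 10Gb controller

cpu_side = plx_chips * plx_upstream + nic_lanes   # 36, as in the manual
slot_side = plx_chips * plx_downstream            # 64 across the slots

print(f"CPU-side lanes drawn: {cpu_side}")
print(f"Slot-side lanes offered: {slot_side}")
# This would explain why every CPU's diagram looks the same: the
# switches always present 2x x16 + x4 upstream, and a 28-lane CPU
# simply feeds them with fewer real lanes behind the scenes.
```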
 

Moggy
I can't answer that without some research, but you need to actually be asking yourself, what the lane layout is of your mobo.

Some mobos are arranged such that ALL of the slots can be accessed (in a limited capacity) with only a 28-lane CPU. That means that dropping a 44-lane CPU in there is a waste, or that the board is the limitation and should be avoided.

There are other boards, that cater to BOTH 28-lane as well as 44-lane CPUs, those are more versatile, but there ends up being parts of the board (PCI-E slots, NVMe slots, etc.) that CANNOT BE USED when using a 28-lane CPU in them.

Then there are (I think?) a few boards, that are explicitly designed for 44-lane CPUs, and may not even work, or even POST (in the extreme case) with 28-lane CPUs.
True that some features are blocked when using a 28- vs a 44-lane CPU. But as far as I know, most/all motherboard manufacturers cater to all the different X299-compatible CPUs. My mobo layout is as follows (pages A1+A2): https://dlcdnets.asus.com/pub/ASUS/.../Manual/E16046_WS_X299_SAGE_10G_UM_V4_WEB.pdf
 

sdifox
Please have a look at pages A1 and A2 of the motherboard manual;

Asus uses 2 expensive ($75) PLX8747 multiplexer chips and QSW1480 switches.
So the board provides
2x x16 PCIe + 1x x4 PCIe lanes for the 28-lane CPU (the 1x x4 lanes are for the two 10Gb NICs)
2x x16 PCIe + 3x x4 PCIe lanes for the 44-lane CPU (the extra 2x x4 lanes are for U.2)
but even the 16-lane CPU is connected with 2x x16 PCIe + 1x x4 PCIe lanes to these multiplexers.

Something does not compute for me in these diagrams.
I want to have the most straightforward connection to the CPU, without too many multiplexers in between. Like motherboard PCIe slots 1 & 3 & 5 & 7.
My idea is that a 7900X with 44 lanes would literally have more lanes electrically connected to the motherboard.
But in the Asus diagram these connections seem equal for every CPU, be it 7820X or 7900X...?!
All diagrams for all the different CPUs show the same 2x x16 PCIe plus the x4 PCIe NIC = 36 lanes?!
>?????<

On a side note: at least the electrical lanes on the seven PCIe slots are not connected to the PCH; that would castrate them to DMI 3.0 speed (an x4 link totaling 3.9GB/s).


That is what happens on desktop platforms on the intel side.

That is why I mentioned TR3. You get 64 pcie lanes.
 
Reactions: VirtualLarry
Feb 4, 2009
35,238
16,705
136
Okay, I’m a relative noob with this stuff, but the OP’s plan sounds phenomenally complex. What do you need all that storage for, and why so much bandwidth?
This part is where I’m not sure, but wouldn’t it be similar in cost but a much better design if you went with some sort of EMC small-business SAN storage solution?
 

sdifox
Okay, I’m a relative noob with this stuff, but the OP’s plan sounds phenomenally complex. What do you need all that storage for, and why so much bandwidth?
This part is where I’m not sure, but wouldn’t it be similar in cost but a much better design if you went with some sort of EMC small-business SAN storage solution?



Apparently he got a whole bunch of small SSDs for cheap or free.

Kicker is he can only push out 2GB/s.
 
Feb 4, 2009
35,238
16,705
136
Apparently he got a whole bunch of small SSDs for cheap or free.

Kicker is he can only push out 2GB/s.

Ah, so it’s a weird project us nerds get obsessed about, with minimal practical value.
BTW, he should buy an Intel server; I’m sure you can get more PCIe lanes in them.
Hell, go for a full blade mounting system.
 
Feb 4, 2009
35,238
16,705
136
Also, I assume there will be some kind of RAID setup. I can’t imagine having close to 100 cheap SSDs and not having one fail, particularly using them in some weird setup as described.
 

VirtualLarry
I hate to say it, but RAID-0 of 96x 128GB SATA SSDs, at this point in technology, what with PCI-E 3.0/4.0 x16 Optane accelerators, is like RAID'ing USB floppy drives, rather than purchasing a HDD.

It's 100% pointless, even with the SSDs for free. Trust me on this one.
 
Reactions: Thunder 57

VirtualLarry
OP, have you considered the stripe size necessary for utilizing 96 SATA channels, and making effective use of that? I'm guessing that you haven't, and that you've just heard that "RAID-0 makes drives faster".

Well, no: not unless it's properly set up and tuned. I ran chipset RAID-0 of a pair of 30GB SATA SSDs on a Core2-era P35 / ICH9R rig in Win7 64-bit when they were a new thing. Due to the lack of proper TRIM support, the RAID-0's performance had dropped BELOW that of a single identical drive within a week.
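A quick illustration of the stripe-size point (hypothetical numbers; the stripe-unit size depends on the controller and its configuration):

```python
# Why RAID-0 width alone doesn't help with ~10 MB files
# (hypothetical numbers; stripe-unit size depends on the controller).

drives = 96
stripe_unit_kb = 64                               # a common default
full_stripe_mb = drives * stripe_unit_kb / 1024   # one full stripe

file_mb = 10                                      # OP's typical file size
per_drive_kb = file_mb * 1024 / drives            # one file's share per drive

print(f"Full stripe: {full_stripe_mb} MB")
print(f"Per-drive share of a {file_mb} MB file: {per_drive_kb:.0f} KB")
# Each drive moves only ~100 KB per file, so per-command overhead and
# queue depth dominate; the theoretical 96-way sequential speedup only
# appears if stripe size and access pattern are tuned for the workload.
```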
 

VirtualLarry
Again, if this is for LOLs, and you're planning on making a YT video, and want the cred for having THE MOST SATA SSDs in ONE RAID-0, then sure, go for it, you'll be famous, at least for a day.

But if this is intended to have any practical application whatsoever (you mentioned a server?), then stay far, far away.
 