Well, it's too late to offer that as an alternative. AM4 supports APUs; there are already APUs that use the socket, and Raven Ridge will also be using it. My point stands, though it's a little different now: on AM4, the PCIe link to the video card is restricted to x8 when an APU is installed. That probably means the 8 lanes they disabled, plus an additional 8, go to the iGPU, or some other combination. Either way, the end result is a packaging or technical decision, never bad yields on the PCIe components.
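For perspective on what that x8 restriction actually costs, here's a quick back-of-envelope sketch in C. The ~985 MB/s per lane figure is PCIe 3.0 throughput after 128b/130b encoding overhead; treat this as rough math, not a benchmark:

```c
/* Rough bandwidth per PCIe 3.0 slot width (per direction).
 * ~985 MB/s per lane is the 8 GT/s rate after 128b/130b encoding. */
#include <stdio.h>

int main(void) {
    const double mb_per_lane = 985.0;  /* PCIe 3.0, post-encoding */
    for (int lanes = 16; lanes >= 4; lanes /= 2)
        printf("x%-2d slot: ~%.1f GB/s\n", lanes, lanes * mb_per_lane / 1000.0);
    return 0;
}
```

So the drop from x16 to x8 is roughly 15.8 GB/s down to 7.9 GB/s, which very few cards saturate anyway.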
TBH, I haven't paid much attention to the APU, as I plan on using discrete graphics and am looking at TR or EPYC (if unlocked on at least one 16C part and able to reach a 3.6-4.0 GHz overclock with sufficient cooling). So thank you for enlightening me on the PCIe lane restriction. Do you have a source? (I always ask for sources, so don't take it personally; it's so I can point to something other than a forum post.) But noted.
No, not yours in particular, but IF latency issues in general. I agree it would be nice to see it at its unbridled power. But that 512 GB/s is Vega's wide HBM2 bus running at a much higher data rate, due to using really fast HBM2 (which would also apply to the smaller Vegas later using GDDR5X). It goes back to recent posts and farther back. It starts with someone noticing that AIDA64 sometimes reports an L3 latency of 120 ns. Then someone tests and finds an ~80 ns increase in latency when going from CCX to CCX. All of a sudden it becomes "the problem" for gaming. Then it becomes either A) a crappy design decision, or B) a bug in the design that AMD had to push forward and will clear up later, to "maybe in the next stepping AMD can take care of the CCX latency bug," and on and on. All with very little actual analysis to show there is even that much of an impact. What was once an early theory became fact, while ignoring real issues like Nvidia's poor core scaling in DX12.
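If anyone wants to actually measure that cross-CCX hop instead of arguing over AIDA64 screenshots, a crude ping-pong test like the C sketch below gets you in the ballpark. The core IDs are my assumptions; pick one core from each CCX on your own chip (check lstopo or /proc/cpuinfo) and compile with -pthread. It's a rough estimate, not a calibrated benchmark:

```c
/* Core-to-core latency probe: two threads pinned to specific cores
 * bounce a flag through a shared cache line, and we time the round trips.
 * Half the round-trip time approximates the one-way hop. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ROUNDS 1000000

static _Atomic int flag = 0;   /* the shared cache line the cores fight over */

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *pong(void *arg) {
    pin_to_core(*(int *)arg);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load(&flag) != 1) ;  /* wait for ping */
        atomic_store(&flag, 0);            /* send pong */
    }
    return NULL;
}

int main(void) {
    int ping_core = 0, pong_core = 4;  /* assumption: cores on different CCXs */
    pthread_t t;
    pthread_create(&t, NULL, pong, &pong_core);
    pin_to_core(ping_core);

    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store(&flag, 1);            /* send ping */
        while (atomic_load(&flag) != 0) ;  /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &b);
    pthread_join(t, NULL);

    double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    printf("avg round trip: %.1f ns (one-way is roughly half)\n", ns / ROUNDS);
    return 0;
}
```

Run it once with both cores in the same CCX and once across CCXs and you can see the actual delta for yourself rather than trusting a single screenshot.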
I saw that hoopla, and I see why you are fighting it; I agree. I don't consider AIDA64 the most accurate, relying more on MaxxMEM2 for figuring out throughput (and, to a lesser degree, the numbers from Memtest86+). But as I said, you see increased performance from software optimization, which even Intel benefits from, just to a far lesser degree (AMD = 15+%, Intel = 1-3%).
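For what it's worth, the guts of a throughput test like that are pretty simple. Here's a minimal read-bandwidth sketch in C in the same spirit; the buffer size and pass count are arbitrary picks of mine, and real tools also test copy/scale/add patterns and pin the thread to one core:

```c
/* Minimal read-bandwidth probe: stream through a buffer much larger
 * than any L3 cache and time it. Compile with -O2. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BUF_BYTES (256UL * 1024 * 1024)  /* 256 MiB, well past any L3 */
#define PASSES    10

int main(void) {
    size_t n = BUF_BYTES / sizeof(uint64_t);
    uint64_t *buf = malloc(BUF_BYTES);
    if (!buf) return 1;
    memset(buf, 1, BUF_BYTES);           /* touch pages so they're mapped */

    struct timespec a, b;
    uint64_t sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < n; i++)
            sum += buf[i];               /* sequential read stream */
    clock_gettime(CLOCK_MONOTONIC, &b);

    double sec = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    double gbs = (double)BUF_BYTES * PASSES / sec / 1e9;
    printf("read bandwidth: %.1f GB/s (sum=%llu)\n", gbs,
           (unsigned long long)sum);
    free(buf);
    return 0;
}
```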
I also agree with the comment on Nvidia's poor scaling (it should be noted that OpenCL is being merged into Vulkan, meaning more games may be designed to utilize it in the future), as well as the poor implementation of asynchronous compute.
I know the theoretical limit was related to HBM/HBM2 performance, with no clear ceiling established, but the reason I brought it up is the ~100 GB/s (TR) and ~176 GB/s (EPYC) estimates based on RAM speed. If we see Infinity Fabric scale well with the RAM at those rates, then it is as fast as or faster than Intel's 100 Gbps Omni-Path, which is for server-board linking ONLY. That is more where my mind is; not that the CPU would get the full theoretical limit.
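The math behind those estimates is just channels x 8 bytes x transfer rate, and the Gbps vs GB/s units matter a lot for the Omni-Path comparison. Quick sketch below; the DDR4 speeds I plugged in (3200 for TR, 2666 for EPYC) are my assumptions for illustration:

```c
/* Back-of-envelope DDR4 bandwidth math, plus a Gbps -> GB/s
 * conversion for the Omni-Path comparison. */
#include <stdio.h>

/* theoretical DDR bandwidth: channels * 8 bytes/transfer * MT/s */
static double ddr_gbs(int channels, double mts) {
    return channels * 8.0 * mts * 1e6 / 1e9;
}

int main(void) {
    printf("Threadripper, 4ch DDR4-3200: %.1f GB/s\n", ddr_gbs(4, 3200));
    printf("EPYC,         8ch DDR4-2666: %.1f GB/s\n", ddr_gbs(8, 2666));
    /* Omni-Path is quoted in gigaBITS per second, so divide by 8 */
    printf("Omni-Path 100 Gbps          = %.1f GB/s\n", 100.0 / 8.0);
    return 0;
}
```

That prints ~102 GB/s and ~171 GB/s against Omni-Path's 12.5 GB/s, which is why the unit distinction is the whole ballgame in that comparison.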