bryanW1995
Lifer
- May 22, 2007
I haven't built a new rig in 5 years, so I'm really out of the loop here... but how the heck did AMD catch up to Intel, and even beat them to 7 nm?
Proper partitioning of the core (no shared components), beefier core components, design for broad power requirements, and Intel messing up 10nm.
Looks like AMD is also really going to have the best gaming CPU (the 3600 is faster than the 8700 in a game):
https://translate.google.com/translate?hl=nl&sl=auto&tl=en&u=https://www.chiphell.com/thread-2001351-1-1.html
Even the 2600X or 2700X had trouble reaching parity with the 6-core 8700 in games.
Big improvements here, excluding memory and AES. Also, there is no regression in the HDR test like the previous samples had, as I thought.

New GB4 for Ryzen 5 3600 -
http://browser.geekbench.com/v4/cpu/13334011
This one has RAM running at 2666 MHz -
http://browser.geekbench.com/v4/cpu/13330826
According to Anandtech(https://www.anandtech.com/bench/CPU-2019/2203), ST score is right up there with Skylake.
He also says no overclocking, and the 3600 is clocked 12% higher than the 8700 for base... The game he tested for comparing the 3600 and 8700 is PUBG (the machine-translated 'survive' means PUBG). It's only one game, but still promising.
FPS 183 vs 172
Average FPS 6.3% higher
Highest FPS 10% higher
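Quick sanity check in Python on the headline numbers (the quoted 6.3% average presumably comes from multiple runs, but the single 183 vs 172 pair lands in the same ballpark):

```python
# Relative FPS gain of the 3600 over the 8700 from the single quoted pair.
r3600_fps = 183
i8700_fps = 172

relative_gain = (r3600_fps - i8700_fps) / i8700_fps * 100
print(f"{relative_gain:.1f}% higher")  # 6.4% for this pair, close to the quoted 6.3% average
```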
He also says no overclocking and the 3600 is clocked 12% higher than the 8700 for base...
Without knowing at what clocks they were both running, this could be very impressive or very disappointing.
Does there exist a Ryzen 2000 that can beat the 8700 in PUBG? (187 is not a low FPS)
Zen2 apparently supports JEDEC DDR4-3200 speeds. Does anyone know if this only applies to the X570 chipset or if this will be available on X470 boards as well? (And is there a release date for unbuffered ECC DDR4-3200 DIMMs?)

The chipsets have nothing to do with RAM; the IMC is within the CPU. It will be up to each board manufacturer's BIOS support (possibly limited by the board layout) what range of speeds is officially supported.
AMD jumped onto the shoulders of giants: pure-play foundries working primarily for mobile devices (e.g. the first major user of TSMC's 7nm was Apple, with its millions of iPhones). Intel, meanwhile, choked on 10nm after biting off more than it could chew.
Base is pretty irrelevant.
No CPUs clock down to base under load, even after they've hit the turbo duration; they clock down to a 24/7, within-TDP clock, typically a good few hundred MHz above base.
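To illustrate the point, here's a toy model (entirely made-up numbers, not any real CPU's firmware) of why the sustained clock settles above base: once the turbo window expires, the governor picks the highest clock whose package power still fits the sustained limit, and that clock is normally well above base.

```python
# Toy model: step down from turbo until package power fits the sustained limit.
# The power curve and limits below are illustrative assumptions only.

def sustained_clock_mhz(base_mhz, turbo_mhz, power_at, sustained_limit_w, step=25):
    """Highest clock (stepping down from turbo) whose power fits the limit."""
    clock = turbo_mhz
    while clock > base_mhz and power_at(clock) > sustained_limit_w:
        clock -= step
    return clock

# Hypothetical power curve: package power grows roughly with clock cubed.
power = lambda mhz: 65.0 * (mhz / 3600) ** 3

clk = sustained_clock_mhz(base_mhz=3200, turbo_mhz=4300,
                          power_at=power, sustained_limit_w=95)
print(clk)  # 4075 with these made-up numbers: well above the 3200 MHz base
```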
Lisa Su + Jim Keller + many many cockups by Intel over the last 5 or 6 years.
I suspect next year Intel will figure out the 10nm thing by simply throwing money at it, then they'll release something huge that their "Ugandan team" was working on but no one thought would work.
Simply put, it will be something big, something groundbreaking, something they could have done years ago but chose not to spend the money on.
Yep. It became pretty clear that Intel started sandbagging after Sandy Bridge. Almost all their efforts went into CPU status quo with extremely iterative improvement at best, with much more focus on all kinds of other things: IGP, SSD, etc. They're a bit all over the place.
Look at the 9700/9900 die: the Intel HD IGP takes up almost precisely the space of 4 full cores, yet is probably used by a fraction of buyers of that SKU. This is why I never agreed with the idea of putting it in every single desktop SKU like they did; it's wasteful at best.
A '9970k' would be possible with current design with 12C/24T with no IGP.
It was even more heinous with 2600k/3770k/4770k, where it was basically half the die size just for unnecessary IGP.
AMD on the other hand saw clearly that they could easily exploit that weakness by skipping the GPU on non-APUs to make the most of the potential with their CPU designs, and it paid off.
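The die-area argument above is easy to sanity-check with back-of-envelope arithmetic (the "IGP ≈ 4 cores" figure is the post's eyeballed estimate from die shots, not an official number):

```python
# If the IGP on the 9900K-class die occupies roughly the area of 4 cores
# (assumption from the post above, not a measured figure), dropping it
# would free enough area for 4 more cores on the same die.
cores = 8
igp_in_core_equivalents = 4  # eyeballed assumption, not official

hypothetical_cores = cores + igp_in_core_equivalents
print(hypothetical_cores, hypothetical_cores * 2)  # 12 cores / 24 threads
```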
You are quite optimistic. Do you remember process node roadmaps like this one?
Since that original planned date Intel is "simply throwing money at it" and look where that got them to.
Edit: Looking at https://en.wikichip.org/wiki/intel/process Intel continuously had a new smaller process node every two years between 1987 (1.0 µm) and 2011 (22nm)! The delay of 14nm to 2014 already spelled danger.
Yes, but I suspect that problem can be solved by throwing a ton of money at it.
There was no incentive post-2012 to spend that amount of money.
Maybe I'm wrong, just my opinion.
Isn't one of the problems, power usage. More cores = more power relative to the IGP. Look at the Xeons with high clocks. Hungry fellas.
Yes, they have doubled the cache, which should also cut memory accesses significantly. Previously the L3 caches couldn't really share data between the two CCX. Geekbench seems to indicate (for a while now) that there are still 2 separate instances, but if they can snoop data from each other with low latency, we could be talking about up to 4x the effective L3 cache size.
Regardless, even a 2x cache increase will diminish the need for memory accesses significantly.
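For a rough feel of how much a bigger L3 helps, there's the classic empirical "square-root rule" of caches (a rule of thumb, not AMD data): miss rate scales roughly with 1/sqrt(capacity).

```python
# Rule-of-thumb estimate: cache miss rate ~ 1/sqrt(capacity).
# This is a generic empirical approximation, not a Zen2 measurement.

def estimated_miss_ratio(old_size, new_size):
    """Relative miss rate of the larger cache vs the smaller one."""
    return (old_size / new_size) ** 0.5

for factor in (2, 4):
    reduction = (1 - estimated_miss_ratio(1, factor)) * 100
    print(f"{factor}x cache: ~{reduction:.0f}% fewer misses")
# 2x cache: ~29% fewer misses; 4x cache: ~50% fewer misses
```

So even the guaranteed 2x gets you a solid cut in memory traffic, and the hypothetical 4x effective size would roughly halve it.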
If money were all that's needed to fix it, it would have been fixed already. It's just wishful thinking that they have a solution but don't use it; that makes zero sense businesswise.