I am working on a benchmark similar to Geekbench (in that it uses similar workloads), but I have no timeline for completion.
Hey, the more the merrier in terms of benchmarks!
Sorry, couldn't resist!
> That's pretty cool. My only suggestion is that you call it eekBench.

Don't tempt me. On second thought, Primate Labs would probably try to sue me out of existence or something.
> Hey, the more the merrier in terms of benchmarks!

You can always point the judge to your AnandTech account predating Primate Labs' founding.
> Don't tempt me. On second thought, Primate Labs would probably try to sue me out of existence or something.

PeekBench it is then; glad we could help you.
> There are many Intel-specific optimisations that are slowly eroding. Cinebench R23 was highly optimised for Intel CPUs, and even for AMD it wasn't fair, let alone other CPU architectures.
> With more companies wanting a fairer benchmark, I guess that is why Cinebench 2024 was created. It probably had input from AMD, Qualcomm and Apple.
> Long gone are the days of total Intel domination in client.

NO, this has to do with the "factory overclocking causing instability" that caused Intel to release a new BIOS setup that is stable, but also slightly slower.
> NO, this has to do with the "factory overclocking causing instability" that caused Intel to release a new BIOS setup that is stable, but also slightly slower.

Mm. Then I misunderstood.
> Mm. Then I misunderstood.

Well, from what I have read, it's still not totally stable, but better. In that thread it's agreed that Intel "factory overclocked" the CPUs to win benchmarks and claim at least some first places. I disagree with that policy and what it caused, and I hope Intel has learned its lesson. Zen 5 should shut the door on them for a few months at least. It is not yet agreed what the next generations will do, but it seems that Zen 5 will be king in all areas for a while.
This makes it easier for Intel as well. When Arrow Lake launches, they will compare against the stable-clock BIOS.
> NO, this has to do with the "factory overclocking causing instability" that caused Intel to release a new BIOS setup that is stable, but also slightly slower.

I think this has been happening for a long time. My buddy has an 11700K, and it was unstable even with the big-ass Dark Rock cooler. The default settings in the Asus BIOS were essentially letting it draw as much power as it wanted. Once I figured that out and set it to Intel's specs, it worked as it should without issue.
> Anyone know if Zen 5 will be benefiting much from faster memory but not running 1:1? I have only read a little but it seems like there are diminishing returns when not running 1:1.

The rumors are that the IO die is unchanged from Zen 4. If that's the case, running above 1:1 is essentially useless, as bandwidth will be limited by the fabric clock and latency will be worse.
> Anyone know if Zen 5 will be benefiting much from faster memory but not running 1:1? I have only read a little but it seems like there are diminishing returns when not running 1:1.

There's an extreme example here: http://www.portvapes.co.uk/?id=Latest-exam-1Z0-876-Dumps&exid=thread...ranite-ridge-ryzen-9000.2607350/post-41207749
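A quick sketch of the arithmetic behind the 1:1 question may help. On Zen 4 (and on Zen 5, if the unchanged-IO-die rumor holds), the memory controller clock (UCLK) runs 1:1 with the memory clock (MCLK) only up to some ceiling; beyond that it drops to 1:2, which costs latency. The ~3000 MHz ceiling below is an illustrative assumption taken from typical Zen 4 reports, not an official spec:

```python
# Rough sketch of the UCLK:MCLK ratio tradeoff discussed above.
# DDR5's transfer rate is 2x the memory clock (MCLK); the memory
# controller clock (UCLK) keeps pace 1:1 only up to an assumed
# ceiling, otherwise it halves to 1:2 (a latency penalty).

UCLK_CEILING_MHZ = 3000  # assumption for illustration, not a spec

def uclk_for(ddr5_rate_mts: int) -> tuple[int, str]:
    mclk = ddr5_rate_mts // 2          # DDR: two transfers per clock
    if mclk <= UCLK_CEILING_MHZ:
        return mclk, "1:1"             # controller keeps pace
    return mclk // 2, "1:2"            # controller halves

for rate in (6000, 6400, 8000):
    uclk, ratio = uclk_for(rate)
    print(f"DDR5-{rate}: MCLK={rate // 2} MHz, UCLK={uclk} MHz ({ratio})")
```

Under these assumed limits, DDR5-8000 ends up with a slower controller clock (2000 MHz at 1:2) than DDR5-6000 (3000 MHz at 1:1), which is exactly the diminishing-returns effect being described.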
> First EPYC "Turin" ("Zen 5") supercomputer win: [...]

I interpreted that as researchers primarily writing GPU code in CUDA to begin with, which has its challenges when porting to AMD GPUs (duh). Rather than learning to write non-CUDA code and porting it to AMD GPUs a la MI300A, they chose Nvidia GPUs this round to avoid that hassle. It speaks more to ROCm's shortcomings outweighing the benefits of unified memory in MI300A. FWIW, Grace Hopper has Unified Memory too, so it sort of nullifies MI300A's advantage.
> FWIW, Grace Hopper has Unified Memory too, so it sort of nullifies MI300A's advantage.

No it doesn't.
> No it doesn't.

Ah, good point. My mistake.
> No it doesn't.

Just coherent.
> First EPYC "Turin" ("Zen 5") supercomputer win: [...]

I am going to guess that they want full CPU speed, so these 128-core Turin parts may be the equivalent of today's 9554 Genoa. What do you think?
First EPYC "Turin" ("Zen 5") supercomputer win:
"The procurement competition to secure Norway's next supercomputer has now been conducted. Hewlett-Packard Norge AS (HPE) won the competition and has consequently been awarded the contract, which has a value of 225 million NOK [USD ~20M]. This will be Norway's most powerful supercomputer ever and will give a significant boost to national AI research and innovation."
"HPE will deliver an HPE Cray Supercomputing EX system equipped with 252 nodes, each with two AMD EPYC Turin CPUs, each consisting of 128 cores. In total, the system will have 64,512 CPU cores. In addition, the system will be delivered with 76 GPU nodes, each comprising 4 NVIDIA Grace Hopper Superchips (NVIDIA GH200 96 GB), for 304 GPUs in total."
sigma2.no
I have been awaiting this announcement with bated breath to see whether Norway's experience with the part-owned AMD-based LUMI supercomputer in Finland would lead them to select a configuration based on AMD GPUs, in particular the Instinct MI300A APU with unified memory (for simpler programmability). Alas, they instead went for a traditional CPU configuration for the majority of the nodes, while selecting Nvidia for the accelerated nodes, presumably due to the software porting friction cited in their annual report:
"The two major providers of GPUs are AMD and NVIDIA, with NVIDIA being the largest and offering a more comprehensive development environment and software suite. By the end of 2023, supercomputer Betzy had 16 NVIDIA A100/40GB, and Saga had 32 NVIDIA A100/80GB and 32 NVIDIA P100. The largest proportion of our GPUs are on LUMI (240 AMD MI250X). However, some users encounter difficulties using LUMI's AMD GPUs. This is primarily because code written for NVIDIA doesn't always run automatically or, in some cases, not at all without modifications."
sigma2.no.
While perhaps inconsequential, I find this disappointing. It indicates that the programmability benefits of unified memory in AMD's Instinct MI300A APU are not (yet) persuasive for supercomputing. Instead, Sigma2 has selected separate partitions for traditional and accelerated workloads, going with Nvidia for the accelerated partition, seemingly out of software ecosystem preference. I had hoped they would see the proprietary ecosystem as a problem, given a positive and promising porting experience from working with LUMI. Sadly not. Notably, the Nvidia GH200 Superchips with 96 GB HBM also have much less memory than AMD's MI300X with 192 GB HBM, but that didn't sway them either.
> Maybe AMD should start working with kids aged 8+ to make ROCm much easier to work with. Then they can just tell the scientists: look, if these kids can do it, there's no reason you can't!

The rule of thumb is that small deployments will stick with NV to avoid s/w porting costs, while larger ones will do AMD just fine.
> Strix just needs to launch on time with enough volume to counter Intel.

It's about slots, not vague definitions of volume or launch time.
> No, Jedi vs Sith is better.

In all honesty, there is a serious need for high-quality CPU memes. There's a lot to cover, because now it's not just AMD and Intel, but also Apple and Qualcomm in the game. And more (Nvidia, MediaTek, Samsung, etc.) are set to join soon.
> In all honesty, there is a serious need for high-quality CPU memes.

No, it's reddit and not funny.
> No, it's reddit and not funny.

The only funny meme is the internet strong man quote; I need it posted again because I can't find it now.
> The rule of thumb is that small deployments will stick with NV to avoid s/w porting costs, while larger ones will do AMD just fine.

Lol, how charitable again.
It's not a ROCm issue, just a market inertia one.
> It obviously IS a ROCm issue.

No.
> But yeah, midsize clients just don't have the resources to port years of CUDA code to ROCm.

It's not really years; most HPC installations don't have accelerator code to begin with.
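To make the porting friction being debated concrete: moving CUDA code to AMD's HIP/ROCm is largely a mechanical renaming of API calls, which tools like hipify-perl automate as a textual translation; the friction shows up in whatever the translator misses (library swaps, build systems, inline assembly). A toy, purely illustrative sketch of that renaming, far simpler than the real tool:

```python
# Toy illustration of CUDA -> HIP porting: a handful of the
# mechanical API renames that tools like hipify-perl perform
# as plain text substitution on source files.

CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
    "cuda_runtime.h": "hip/hip_runtime.h",
}

def hipify(source: str) -> str:
    """Apply the mechanical renames to a CUDA source string."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_snippet = "#include <cuda_runtime.h>\ncudaMalloc(&buf, n); cudaFree(buf);"
print(hipify(cuda_snippet))
```

On the sample snippet this turns `cudaMalloc`/`cudaFree` and the `cuda_runtime.h` include into their HIP equivalents. A real port additionally needs library substitutions (e.g. cuBLAS to hipBLAS) and hand-fixes for anything textual translation cannot reach, which is where "doesn't run without modifications" comes from.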