> I hope someone at AMD sees this and redoes their abysmal naming again:
> "Using the term 'artificial intelligence' in product descriptions reduces purchase intentions": Companies may unintentionally hurt their sales by including the words "artificial intelligence" when describing their offerings that use the technology, according to a study led by WSU researchers. (news.wsu.edu)
> Apparently adding "AI" to your product description makes it *less* desirable to people.
> They should just remove that part from the naming scheme along with "HX". Both add nothing of value anyway.

Anyone remember Athlon XP and Detonators XP just because... Windows XP?
> Even my much younger self felt second-hand embarrassment over the 'XP' name.

Really? As a kid I thought it was cool. I didn't even use Windows XP at the time. But it sounded Xtreme, which was the style at the time.
> I hope someone at AMD sees this and redoes their abysmal naming again: […]

Microsoft and Google are done, then.
> Really? As a kid I thought it was cool... I didn't even use Windows XP at the time. But it sounded Xtreme.

Yeah, I also thought Windows XP was ridiculous, being used to numbers that at least referred to a feature series and progression (DOS x.x, Windows x.x, 9x, NTx). But I was maybe a bit older, and not used to the aggressive branding on hardware/software that, at least to me, seemed to start around that point.
> Yeah, I also thought Windows XP was ridiculous, being used to numbers that at least referred to a feature series and progression […]

Yeah, XP was stupid but Vista really nailed it 🤣
> wait, Halo has on-die RAM??? 😲😲😲😲😲

No? What gives you that idea?
> Anyone remember Athlon XP and Detonators XP just because... Windows XP?

It's just the same idiocy.
> I don't think you quite "support" the dual-decode feature; this is probably about tuning, or the compiler having a built-in model for it (that it can use when trying to optimize code for Zen 5), which may not be super important...
> What I wondered about the dual-decode issue is that, historically, processors AFAIK had worse throughput and performance when executing lots of taken branches, while not-taken branches were easier for them?
> I'm not a software dev, so I have only a very rough idea of this area, but aren't there profiling optimizations that try to improve performance by structuring code in such a way that, when running, the not-taken case is picked more often?
> If that is so, this approach may ironically be suboptimal for Zen 5, since it can in theory benefit from a higher number of taken branches, given how they likely enable it to use its dual decoders. That's a theory, and I don't know if it is ever viable to recompile and restructure code to make taken branches more common. It may still harm performance overall far more than it would help Zen 5...
> In any case, I expect the core to be able to work with current binaries normally; this likely is no secret way to get +32%, and real-world gains would probably be minor anyway.

That's because, historically, forward jumps (relative to the current value of the program counter, i.e. going "ahead") were by default assumed to be not-taken, and those correctly predicted branches do not allocate into the branch target buffer (BTB); IIRC allocation happens on the first misprediction. At the same time, backward branches were assumed to be taken, as those are usually loops. So ideally, if you structured your branches so that the most likely outcome fit "not taken", you freed up BTB resources for other parts of the code that might need them more, and you avoided the initial misprediction penalty.

Now, those are the basics and I might have gotten something wrong, so hopefully somebody can correct me. Oh, and TAGE and the modern state-of-the-art predictors are most likely more sophisticated than this.
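To make the layout discussion above concrete, here is a minimal C sketch (my own illustration, not something from the thread). The likely/unlikely macros wrap GCC/Clang's __builtin_expect, the manual way to tell the compiler which outcome should be the fall-through; the header comment shows the standard GCC profile-guided flow that derives the same layout from measured data ("hot.c" is a placeholder name):

```c
/* Profile-guided route (standard GCC flags; "hot.c" is a placeholder):
 *   gcc -O2 -fprofile-generate hot.c -o hot
 *   ./hot training_input       # run on representative input, writes hot.gcda
 *   gcc -O2 -fprofile-use hot.c -o hot
 * With -fprofile-use, GCC moves blocks it measured as cold out of the hot
 * path, so the frequent outcome becomes the not-taken fall-through.
 *
 * The manual equivalent uses __builtin_expect: */
#include <stddef.h>

#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Sum a buffer, bailing out on a rare error. With the hint, the error
 * check compiles to a forward branch that is almost never taken (matching
 * the "forward = not-taken" static assumption), while the loop's backward
 * branch is taken on every iteration ("backward = taken"). */
long sum_samples(const int *buf, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (unlikely(buf[i] < 0))   /* rarely-taken forward branch */
            return -1;              /* cold error path, kept out of line */
        sum += buf[i];
    }
    return sum;
}
```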
> Re: GCC decoder intelligence...
> Essentially, this means that the compiler isn't presenting the actual machine-language instructions to the CPU in a manner that is (near) optimal for its instruction layout. This can (theoretically) hurt its ability to efficiently, quickly, and intelligently convert the machine-language instructions it is receiving into actual work in the core itself. According to certain absent posters here who will not be named, modern out-of-order processors are completely immune to anything a compiler or programmer can throw at them, can adjust on the fly, and the above has absolutely zero effect on processor performance. In reality, how the machine-language instructions are presented to the decoder can have a measurable effect on processor performance and efficiency.
> Time and updates to GCC will show the truth. I suspect that it won't be a big change in MOST, but not all, cases.

That's not my experience. Do you have any example where specifying a specific microarchitecture (as opposed to enabling the use of new instructions) significantly changed the run time of a program?
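For anyone who wants to test that question themselves: the distinction under discussion is GCC's -mtune (instruction scheduling and code layout only) versus -march (which also enables new instructions). A rough A/B sketch follows; bench.c is a placeholder, and znver4 is used because it is supported in released GCC versions, with znver5 being the corresponding value once your GCC is new enough to know it:

```c
/* Build the same source twice at the same ISA level:
 *   gcc -O2 -march=x86-64-v3 -mtune=generic bench.c -o bench_generic
 *   gcc -O2 -march=x86-64-v3 -mtune=znver4  bench.c -o bench_zen
 * Both binaries may only use x86-64-v3 instructions, so any runtime
 * difference comes purely from microarchitecture-specific tuning
 * (instruction scheduling, alignment, branch layout), not from new
 * instructions. In practice the delta is often small, which is what
 * the question above is getting at. */
```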
> No one said it did. It doesn't, and that is why AMD didn't contribute anything to GCC for it. But in theory you can be a bit clever if you know that you can now chain likely-taken branches.

Compilers will have a hard time laying out code to please the 2x4 decoders of Zen 5. If you do hand tuning, you might extract something, but that might be slower on all other uarchs.
Strix Halo is really big. Anyone hoping it will be relatively cheap should forget about that; just look at what they ask for a Strix Point laptop.
My prediction is >2000 euro, and for that, only 4070-at-80W level performance is not very good.
In my opinion, Strix Halo is not aimed at gamers; that's just secondary. The main selling point is the 16C/32T CPU paired with 64-128GB of RAM.
> Strix Point laptops are expensive because there's no competition in this segment, but since the chip is the same size or so as Hawk Point […]

It is not.
> Strix Halo points to the future of Windows laptops.

No, it's an expensive novelty part, lol.
> No, it's an expensive novelty part, lol.

Is it like >$2500 expensive? Hopefully it's around the $2000 mark, but the 128GB version will be very, very expensive. Yeah, I see now that it will be a niche SKU.
If it sells anything decent, maybe, otherwise lmao.
> Is it like >$2500 expensive? Hopefully it's around the $2000 mark […]

Less; the whole point is selling it for less money than a comparable CPU + dGFX combo.
> Halo is a prototype for the PS6, I'm sure.

It's a proto-Medusa.