With two of these (and they want you to use two, because dual slot), you can run a Llama 3 70B class model in FP8 locally.
That essentially means running it losslessly. The Llama 3 70B model is much, much "smarter" than the original ChatGPT release.
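To see why FP8 fits on two cards, here is a rough back-of-the-envelope memory estimate. The parameter count comes from the model name; the 20% overhead factor for KV cache and activations is my own assumption and varies a lot with context length.

```python
# Rough VRAM estimate for a 70B-parameter model quantized to FP8.
# Assumptions: 1 byte per parameter (FP8 = 8 bits), plus an assumed
# ~20% overhead for KV cache and activations (varies with context).
params = 70e9
bytes_per_param = 1
weights_gb = params * bytes_per_param / 1e9
total_gb = weights_gb * 1.2  # assumed overhead factor, not a spec

print(f"weights: {weights_gb:.0f} GB, with overhead: ~{total_gb:.0f} GB")
```

So around 70 GB of weights alone, which is why a single consumer card is not enough and a dual-card setup comes up.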
If you want to have this model do something for...
This is the official description -
> AI is driving a revolution that is rapidly reshaping every aspect of computing and the technology industry. Dr. Lisa Su will explore how AMD, together with its partners, is pushing the limits in AI and high-performance computing from the data center, to the...
I call it a filter because it seems like it cannot operate independently and would need a core that supports the full instruction set next to it, something to grow on like a tumor. Or is this meant to support the full instruction set?
If I am understanding the discussion correctly, it is not a core at all, just a filter on top of regular Zen 5 cores that can carry out a subset of instructions in a power-efficient manner without waking up the full core.
If this is the case, why would this be exclusive to Halo...
What motivation would Microsoft have to butt in here? Security? Why do they care what cores their software runs on? If someone replies, please avoid unnecessary Microsoft bashing.
The code name suggests development started after 31, similar to how 40 and 41 led to 48.
Could it be an RDNA 3.5 product that has yet to launch? As much as RDNA 3 did not meet expectations, it has still been selling well. Even if 36 did not meet expectations, was there really a need to cancel it...
I was referring to the theoretical peak rate at which weights can be streamed in and out of the NPU, given the 7 interface-memory tile pairs. I am confused about what you are trying to say here.
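For clarity, this is the kind of calculation I mean. The count of 7 tile pairs is from the discussion above; the per-pair bandwidth figure below is a placeholder I made up, not a confirmed spec.

```python
# Sketch of the theoretical peak weight-streaming rate through the NPU.
# 7 interface-memory tile pairs is taken from the thread; the per-pair
# bandwidth is a hypothetical placeholder value, NOT a real spec.
pairs = 7
gb_per_s_per_pair = 32.0  # assumed for illustration only

peak_gb_per_s = pairs * gb_per_s_per_pair
print(f"theoretical peak: {peak_gb_per_s:.0f} GB/s")
```

Whatever the real per-pair number is, the point is that the aggregate ceiling scales linearly with the number of tile pairs.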