Do you? You haven't got a flipping clue what it costs to build anything.
The BOM lunatics are wrong that it comes down to chips and components. It is entirely because Nvidia wants mass-market products to have margins too. All Nvidia buyers pay for the development of those tensor cores, ray-tracing support, CUDA and so on, whether they use them or not.
I would not be surprised if the BOM of a 4060 were lower than that of the 7600. The silicon might cost a tad more due to wafer costs, but the lower TDP means NV can save on the heatsink, so I bet it is roughly a wash.
As such, $249 is still going to be very profitable for AMD/NV on this kind of product.
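To put rough numbers on the silicon side, here is a back-of-the-envelope sketch in Python. The die sizes are the published figures for AD107 (4060) and Navi 33 (7600); the wafer prices are rumor-mill numbers and purely illustrative, yield is ignored, and the dies-per-wafer formula is just the standard first-order approximation:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Gross dies per wafer: usable wafer area over die area, minus edge loss."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die sizes are published figures; wafer prices are rumored, illustrative only.
for name, area_mm2, wafer_usd in [("AD107 / 4060, TSMC 4N", 159, 16000),
                                  ("Navi 33 / 7600, TSMC N6", 204, 10000)]:
    n = dies_per_wafer(area_mm2)
    print(f"{name}: ~{n} dies/wafer, ~${wafer_usd / n:.0f} per die before yield")
```

Even with the pricier 4N wafer, the per-die cost gap comes out to single-digit dollars, which is consistent with the heatsink savings roughly washing it out.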
But the question remains: if margins are so rich, why won't AMD lower prices to $199 and take market share?

Because this is an oligopoly. In an oligopoly you often get bad market conditions, far from a truly competitive market where production costs drive prices.
No doubt NVidia will get better margins than AMD, because NVidia will charge $299, which will force AMD to charge $249 at most, and even then AMD will be unlikely to match NVidia's sales.
Because AMD learned after the HD4000 series that undercutting does not really work in the face of NV mindshare.
I was under the impression that HD4800 did very well for AMD.
It sold well but it didn't set AMD up for future prosperity.
Therein lies the rub. At the time, AMD would never make a massive die because, outside of gamers, they'd have no alternate market to sell it to. Nvidia always had an HPC and workstation market that would justify the design and fabrication of a massive die, where the higher margins of those markets more or less enabled a massive-die GPU for the consumer market.

If AMD had been willing to make a massive die when they were ahead or on even footing, things might have gone a lot differently for them. They might have had some of the top cards to build more mindshare, and more expertise with designing bigger chips that would have made Fiji a better product.
But AMD should be able to amortize RDNA R&D across the console market as well. That's a pretty big leg up. The workstation market is more about 'approved' drivers for various CAD/CAM software, etc. (though that changed a bit with CUDA development and will change more with AI development, I imagine).
It's a vision and cultural problem for DAAMIT. Nvidia went into the programmable-shader DX10 generation with a vision.
They weren't selling Graphics Processing Units, they were selling General Processing Units, and their dies were gonna be every bit as good and ubiquitous as Intel's.
It's wild to look back and see how expertly, and with what laser focus, NV executed on that vision, from CUDA on the 8800 series all the way to today.
DAAMIT, on the other hand, started the DX10 generation with a massive wet turd in the form of the 2900 XT (the 2900 XTX never even launched; maybe a cursed moniker, since the next XTX card they would release was the 7900 XTX, which also failed to live up to the hype). Their next couple of gens were on the back foot, following exactly the wrong small-die, small-features strategy while NV entrenched CUDA and was willing to sell massive dies at razor-thin margins, because they knew CUDA was the golden ticket while AMD played the short game for children's toys in a fickle market.
AMD was and is a CPU company. GPUs were silly things that played children's games. They had these farty ideas about heterogeneous compute (who even remembers that whole spiel) and TO THIS VERY GOD DAMN DAY have not been able to make a heterogeneous compute complex (sort of what became an APU) where the graphics cores and x86 cores work together on generalized workloads. We see APUs today as cute HTPC/laptop parts, but they were supposed to be AMD's CUDA-level revolution, and they have failed to this very day in making that vision a reality.
AMD is fundamentally the wrong company to shepherd a big GPU division. They're *just not interested*. Look at all the cool stuff AMD has done in CPUs over the years. Even their failures are kinda neat "what if" moonshots that had vision for a market that never came to exist. Their CPUs have a vision. Their GPUs, though? Paint by numbers, lost in the desert, the other guys. It's just a product to them, run in a very MBA-esque manner. Which is fine for the vidyagames, but it's not nearly enough to take on a juggernaut like Nvidia.
nVidia only planning on doing an FE for the 4060 Ti 8 GB. So they are expecting the most sales to be for the 8 GB model only. Also, better to keep these sad souls stuck with 8 GB so they upgrade sooner to a 4070 or 4080 in the future.
Ehhh, I was too young to be paying that close attention to AMD/ATI and Nvidia during that time, but I'm not so sure Nvidia had this grand scheme in mind the entire time. If I'm not mistaken, CUDA grew out of the work of a software engineer outside Nvidia who was tinkering on their own and came up with a way to use the GPU to do compute. Nvidia saw the value in that work and hired them to lead the development of the framework. A few years later, the project was given the name CUDA, and it launched in 2006 with the 8800 series. I don't think Jensen Huang had the foresight to develop CUDA, because if he did, he wouldn't have needed inspiration from this engineer. I will give him credit for seeing the opportunity and jumping on it, though.

Not unlike how, during the early innings of deep learning, Jensen Huang discovered that AI researchers were using Nvidia GPUs. Prior to this, the company's focus in GPU-accelerated computing was HPC and scientific simulation. You know, 64-bit, double-precision, traditional HPC stuff. Again, when he realized the potential, he immediately shifted the company's focus to this fledgling market, because he correctly predicted that it would eventually dwarf the markets they previously competed in. The same could possibly be said of their continual investments in the metaverse and self-driving cars, both long-term plays with the potential to become $100B+ industries.

Long story short, if there's one thing JHH is good at, it's not that he has been able to "invent" anything his company is known for. What he is good at is diversifying the use of GPUs and making investments to keep increasing that diversification. He recognizes that Nvidia is so dominant in the markets they currently compete in that not to diversify would be to stagnate, because they're likely already saturating those markets. It's the Blue Ocean strategy in a nutshell: you can't gain more market share when you already have a commanding portion of it, so you must create new markets to grow.
AMD and Intel compete in Red Oceans.
Nvidia strives to come up with Blue Oceans. When you're first to enter an unknown market, you can dictate the rules. CUDA is what allows them to dictate the AI space.
Apple is a good example of a company that has created and captured numerous Blue Oceans (iPod, iPhone, iPad). It's no surprise that JHH wants Nvidia to be like Apple, and many of us have made that comparison before.
Lastly, Nvidia's disinterest in providing value to customers in the consumer GPU space comes down to the consumer GPU space being the very definition of a Red Ocean, and JHH doesn't give a f*ck about Red Oceans anymore when there are Blue Oceans to capture.
NVIDIA confirms no Founders Edition for GeForce RTX 4060 and RTX 4060 Ti 16GB is planned - VideoCardz.com
GeForce RTX 4060 Ti 16GB and RTX 4060 non-Ti only from AIBs: NVIDIA has confirmed it will not release its in-house design for two out of three cards announced yesterday. Only the RTX 4060 Ti 8GB will be getting a Founders Edition. NVIDIA's own reference design for the RTX 4060 Ti looks similar to what we... (videocardz.com)
nVidia only planning on doing an FE for the 4060 Ti 8 GB.
We'll see how $299 the 4060 ends up being.
As long as the current economic malaise continues, we will probably get MSRP cards.
They are actually making three-fan 4060s that are probably going to cost a decent chunk more, which is completely ridiculous given that it's actually a 4050.
Funnily enough, the AIBs are also making a lot of absolutely tiny 1-fan models, to really drive the point home that you are getting a tiny chip that even Lay's would be ashamed to sell.
More of the same nonsense. It's not actually a 4050. It is actually a 4060, which will be competitive with AMD's 7600, which is actually a 7600 and not a 7500...

Yes, it's a 4060, but the price is not good, though that is true for the whole generation.
For the 4060, it's actually the positive standout of this generation, because it gets a 20% performance bump, and a price cut.
I'm pretty happy with the 4060, and will likely get one, if the AIB pricing holds to MSRP.
The 4090 is the only standout card this gen: 46% more performance than the 3090 Ti for a 20% lower MSRP.
EDIT: For the 4060 to match that, it would need 3070/4060 Ti performance and a price around $260. For the 4070 to match that, it would need to perform like the 4070 Ti and cost $400. For the 4080 to match that, it would need to perform like it does but cost just $560.
Really, it's 55% more performance than the 3090 for 7% more money. Still great, but compare it to the actual flagship of the Ampere generation, not to the last desperate attempt to squeeze maximum margins out of miners before the bubble burst.
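For anyone who wants to sanity-check those two comparisons, here is a minimal perf-per-dollar sketch in Python. The performance ratios are the estimates quoted above, not benchmark data, and the prices are launch MSRPs:

```python
def value_gain(perf_ratio: float, old_price: float, new_price: float) -> float:
    """Relative perf-per-dollar improvement of a new card over an old one."""
    return perf_ratio / (new_price / old_price) - 1.0

# Performance ratios are the thread's estimates; prices are launch MSRPs in USD.
print(f"4090 ($1599) vs 3090 Ti ($1999): {value_gain(1.46, 1999, 1599):+.0%}")  # ~+83% perf/$
print(f"4090 ($1599) vs 3090 ($1499):    {value_gain(1.55, 1499, 1599):+.0%}")  # ~+45% perf/$
```

Measured against the 3090 Ti, the 4090 looks like an ~83% perf-per-dollar jump; against the 3090 it is closer to ~45%, which is exactly the gap the two posts above are arguing over.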