Arachnotronic
When was the last time AMD was the biggest player in a $5-6B market?
What makes you think that:
(1) the micro-server market is $5-6 billion
(2) AMD will be the largest player
Even after "contra-revenue", Intel will generate about $12 billion in operating income during 2014.
Just sayin'.
Are you sure they're going to get $12B??? They only managed $2.5B in Q1 2014, DOWN 29% from Q1 2013, and you're expecting them to get $12B in 2014?
PCCG revenue has been declining for the last 3 years, and 2014 will be even worse.
I think you may have misread the financial statement. Let me help you...
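Setting aside which line item got misread, the run-rate arithmetic behind this disagreement is easy to sanity-check. The figures below are the ones quoted in the thread, taken at face value, not verified financials:

```python
# Sanity-check the run-rate arithmetic from the posts above.
# Both figures are taken from the thread at face value, not verified financials.
q1_operating_income = 2.5  # $B, the Q1 2014 figure quoted above
full_year_claim = 12.0     # $B, the disputed full-year figure

# A flat run-rate (Q1 x 4) falls short of the full-year claim...
naive_run_rate = q1_operating_income * 4
print(naive_run_rate)  # 10.0

# ...so the claim implicitly assumes stronger remaining quarters:
required_avg = (full_year_claim - q1_operating_income) / 3
print(round(required_avg, 2))  # 3.17 ($B per quarter for Q2-Q4)
```

In other words, the two posters aren't really disagreeing about arithmetic; they're disagreeing about whether the back half of the year beats Q1, and about which line of the income statement the $2.5B came from.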
> arm for servers and embedded, small die large margin.

if i was AMD this is what i would do:

1. go ARM for tablets/phones like they are; no phone/tablet manufacturer wants x86/Intel anyway.

> keep small core, low clock and high IPC, high integration (radios, memory, etc.). goes in tablets, embedded, low cost computers

2. move the cat core up ~10 watts (15 to 25 watt TDPs), make it a little wider and a little deeper, go 128-bit memory bus, and still keep it to a small quad core like it is now; essentially an attempt to eat Intel's ULV cake.

> as for the big cores, I think that AMD should go even MOAR COARS; eventually the software will hit a point where it uses the extra cores.

3. The big core should just be about single-thread perf at the expense of all other considerations: lowish core count, high TDP (~200 watts), the ideal point on the clock vs. power curve. Hell, with that kind of design you could reuse the existing sockets, as it wouldn't be a bandwidth monster.
I think AMD has a chance to pick up some profit if they successfully iterate on their current Seattle/ARM server strategy. There is definitely a niche for many small, power-efficient cores, or for workloads that need huge amounts of memory and bandwidth but not much processing. AMD theoretically has the know-how to combine ARM's efficient cores with a high-end server fabric and uncore; whether they can beat the other ARM players or Intel to that market is completely up in the air. But I think it's possible so long as they keep their eye on the prize. It won't be like when x86 took over high-end RISC in the server space, but it could be a profitable niche.
Speaking of small cores:
The changes on the Kabini platform vs. Brazos have me raising an eyebrow:
On one hand, they add AES-NI and ECC with Jaguar... but at the same time AMD reduces SATA from six ports (on Brazos) to only two on Kabini.
Why? It seems like they had a lot of potential for small servers if they had simply kept the SATA ports on the SoC.
> arm for servers and embedded, small die large margin.

that's an untested assumption.

> keep small core, low clock and high IPC, high integration (radios, memory, etc.). goes in tablets, embedded, low cost computers

now you have two products eating each other's lunch.

> as for the big cores, I think that AMD should go even MOAR COARS; eventually the software will hit a point where it uses the extra cores.

they have already lost that battle. they are so far behind on complex cache controls that they will just get eaten alive by products Intel already has; they need to do something people will want (high single-thread perf) that someone else doesn't already have a product for on the market.
dream specs:
4 GHz base clock with up to 5 GHz turbo, 16 cores or 8 modules, DDR4, and a 150 W TDP.
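Whether a 4 GHz base / 5 GHz turbo part fits in 150 W is exactly the "ideal point on the clock vs. power curve" question raised earlier. A rough feel comes from the standard dynamic-power approximation, P ≈ C·V²·f; the numbers below are purely illustrative assumptions, not real silicon data:

```python
# Dynamic power scales roughly with frequency and the square of voltage:
# P_dyn ~ C * V^2 * f. Baseline numbers here are made up for illustration.
def scaled_power(p_base, f_base, v_base, f_new, v_new):
    """Scale dynamic power from a baseline operating point."""
    return p_base * (f_new / f_base) * (v_new / v_base) ** 2

# Hypothetical core: 100 W of dynamic power at 4 GHz / 1.00 V.
# Reaching 5 GHz typically needs extra voltage; assume 1.15 V.
turbo_power = scaled_power(100.0, 4.0, 1.00, 5.0, 1.15)
print(round(turbo_power, 1))  # 165.3
```

A 25% clock bump costing ~65% more power is why "5 GHz turbo inside 150 W" only works if a few cores turbo at a time.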
> that's an untested assumption

yeah, we'll have to wait and see.

> now you have two products eating each other's lunch

what 2 products?

> they have already lost that battle... they will just get eaten alive by products Intel already has

how do they get eaten alive? I'm sure AMD isn't that far behind in terms of the absolute perf of their big-core line. That is just hyperbole.
The "unmitigated failure" statement from Mr. Feldman explains the situation more quickly: AMD screwed up with their big core, end of story.
> what 2 products?

you have developed two cores (an ARM one and an x86 one) that cover the same performance/power curves.

> how do they get eaten alive? I'm sure AMD isn't that far behind in terms of the absolute perf of their big-core line. That is just hyperbole.

no it isn't. with every core they add to the die, the interconnect complexity increases, and the time it takes to check all the L2s (like they have to now) increases. Intel already has 16-core processors, and they had to go to dual ring buses for their L3 to maintain performance. AMD doesn't have anything close to that kind of scalable cache interconnect; they would have to develop it, and they would have to develop a cache protocol on top of it as well. Intel has been developing those for what, 3-4 generations?

> Not everyone wants more single-threaded performance, but everyone wants more performance; they are different approaches to gaining performance. The future is decidedly multithreaded, and AMD has already invested too much into the modular approach to abandon it just because software doesn't quite take advantage of it yet; that is only a temporary issue. They need to double down and provide tools to help people maximize performance on their architecture.

How is AMD going to beat Intel's existing 16-core, 32-thread CPUs by a margin that makes people switch?
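The interconnect-scaling point above can be put in rough numbers. In a broadcast (snoop-everyone) scheme, each miss probes every other core's cache, so total probe traffic grows quadratically with core count, while a directory or snoop filter keeps it closer to linear; that gap is why a scalable cache interconnect has to be developed, not bolted on. A toy model, illustrative only and not any real Intel/AMD protocol:

```python
# Toy model of cache-coherence probe traffic vs. core count.
# Illustrative only -- not a model of any real Intel/AMD protocol.
def broadcast_probes(cores, misses_per_core=1000):
    # Every miss on every core probes all other cores' caches.
    return cores * misses_per_core * (cores - 1)

def filtered_probes(cores, misses_per_core=1000, avg_sharers=2):
    # A directory/snoop filter forwards each miss only to likely sharers.
    return cores * misses_per_core * avg_sharers

for n in (4, 8, 16):
    print(n, broadcast_probes(n), filtered_probes(n))
# 4  12000   8000
# 8  56000  16000
# 16 240000 32000
```

Doubling the cores roughly quadruples broadcast traffic but only doubles filtered traffic, which is the "they would have to develop it" gap being argued about.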
more SATA ports via PCIe; I don't see an argument here.
cbn said:
Now as far as people saying things like "just add a PCI-E based two-SATA-port controller card" to the various two-SATA mobile processors (like Beema, Mullins, Kabini, Bay Trail-D, etc.), there are numerous reports of problems with add-on controllers (in general) when using Linux.
Here is an example of third party SATA controller problems from the Anandtech c2750 review --> http://www.anandtech.com/show/7970/asrock-rack-c2750d4i-review-a-storage-motherboard-with-management
> The only issue that surrounds the C2750D4I since its release is the use of Marvell controllers. Users have been reporting that in Linux and FreeBSD, high-intensity read/write workloads cause the controller to reset, and elements of any software array are lost.
Then there is the issue of whether or not the third-party IC is even supported in Linux or BSD. I've also read cases where the aftermarket controller cards are outright defective and cause silent data corruption:
http://lime-technology.com/wiki/index.php/Hardware_Compatibility#PCI_SATA_Controllers
> Beware!!! There are numerous providers of SiL3132-based add-on cards, and a few of them (unknown how many) are known to be faulty, causing unseen (SILENT) data corruption.
Based on that, I would much rather have four native SATA ports than two native SATA ports plus a two-port add-on controller card.
Here is what I wrote in another thread about third party sata controllers:
http://forums.anandtech.com/showpost.php?p=36319091&postcount=208
I'm thinking they are getting lazy, optimizing CPU releases and technology advances to maximize profit and return on investment through marketing policy. They can do so thanks to their near-monopoly position.
They got lazy once before, and then AMD surpassed them. I'm saying it can happen again. It's time they got a bit paranoid, as Mr. Grove would say.
If you think Haswell is a 'lazy' effort then I don't really know what to tell you.
Plenty of innovation/work that went into Haswell. Please peruse the following slide deck:
https://intel.activeevents.com/sz14...hType=session&tc=0&sortBy=abbreviationSort&p=
That is anecdotal... I am sure there are many SATA cards that work without issue in both Linux and BSD.
If Jim Keller said they will use parts of the dozer family in their next core design (along with parts of their cat core design), it's because there is at least some value in that design. It is actually in line with what some of us have been saying all along: there were valuable propositions behind the Bulldozer uarch. A supposedly unmitigated failure would be scrapped completely when making a new design, even more so when you have just come back to AMD and been put in charge of that new design.
Not knowing the context of a quote is the most dreadful mistake one can make when trying to justify an argument by appealing to a (supposed) authority. I don't doubt Mr. Feldman is knowledgeable in his particular area (building microservers), but I highly doubt his knowledge in this one. I would rather trust what the author behind the K8 has to say; don't know about you guys.
Regardless, that's history. AMD is developing a new 64-bit x86 big core, and the design is headed by Jim Keller. Likely they'll use the successful cat cores as a base, so it's not unlikely we'll see AMD big cores shine once again in a few years' time. Intel had better watch out; they've forgotten what Andrew Grove said: "only the paranoid survive".
I think you underestimate the lead time for processors. 2 years is nothing. Intel is not going to be able to make changes quickly enough to react to any possible threat from these AMD announcements.
> the same thing that makes people here think intel will actually get meaningful android design wins.

Have you heard the rumor about the Nexus 8? And also in the Bay Trail thread, have you seen the Moorefield MeMO Pad?
AMD is spending less than Nvidia on R&D, let alone Intel. Intel could indeed screw up its R&D pipeline as it did with the P4, but I'd say the odds are higher that AMD won't be able to deliver a solution competitive with Intel or the ARM crowd in the markets they are targeting. As good as Keller is, he is not a god. He can't multiply R&D resources and budget.
You are creating a straw man here. There are lots of products that used good engineering solutions, solutions that you could find in later successful products.
For example, there were chip designers at Intel claiming that the company got a lot of benefits from Netburst, like the expertise to validate extremely high-clocked parts, or their SMT implementation, which appeared again with Nehalem. I don't think Netburst is what Intel marketing would call a success, yet a lot of the engineering effort that went into Netburst found its way into the Core line of products. Keller is doing just that with Bulldozer: taking the good engineering solutions out of that failed product and putting them into a new one.
And the fact that they reused HT only proves that SMT per se can't be judged by Netburst (the way you try to judge CMT by Bulldozer in about every thread discussing the dozer family of uarchs).
The R&D budget of course affects the ability to design successful products, but it is not the sole determining factor. Otherwise AMD would not have been able to surpass Intel before either, because AMD had a much smaller R&D budget than Intel then too.
Also, Intel should be leading in the iGPU arena as well, since they have a larger R&D budget than AMD/Nvidia, but they don't.