AMD: It Won't Be About 'AMD vs. Intel' Anymore


Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
It appears MS wants Intel to build those much-higher-IPC quad or hex cores. A nice side benefit would be to strengthen the desirability of the MS desktop OS (which I'm sure would help sales or ASPs in that department).

I wouldn't frame it that way, if only for the simple fact that Microsoft couldn't care less how the IPC/core and total core/system numbers work out at an intrinsic business-operation level.

MS cares about revenue. And right now counting cores is an easy way to make an argument for the validity of their revenue model in terms of licensing.

But imagine a hypothetical world where we never went multi-core. A world where today's 32nm, 2B-xtor CPUs were single-core, ridiculously wide and ridiculously pipelined for >10GHz operation. A world in which you and I were buying single-core chips and socket count was still the primary difference between us and the enterprise gear.

In that hypothetical world the enterprise revenue models would not be based on cores; they'd find some other means of billing customers $250k for software that is going to run on $10k worth of hardware. It would be based on GHz, or IPC, or the width of the core, etc.

So don't read anything whatsoever into Microsoft's licensing model. It is what it is because being what it is makes it easy(ier) for the Microsoft sales team to argue the case in justifying their licensing model, and nothing else.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I wouldn't frame it that way, if only for the simple fact that Microsoft couldn't care less how the IPC/core and total core/system numbers work out at an intrinsic business-operation level.

MS cares about revenue. And right now counting cores is an easy way to make an argument for the validity of their revenue model in terms of licensing.

So don't read anything whatsoever into Microsoft's licensing model. It is what it is because being what it is makes it easy(ier) for the Microsoft sales team to argue the case in justifying their licensing model, and nothing else.

I completely agree with you that Microsoft cares about revenue.

However, I would make the argument that no matter how Microsoft slices the revenue pie, they win by instituting "per core" server licensing.

Here is why:

Let's say Intel or AMD releases a super-high-IPC "Wide-like-Itanium" x86 quad core and profits tremendously by doing so! Sure, this reduces Microsoft's server revenue due to the reduction in cores licensed, but MS potentially stands to gain much more on desktop OS sales (where most of their revenue comes from).

Why would Microsoft gain on desktop OS sales if Intel or AMD reduces their server licensing profits? I believe stronger x86 CPUs would help MS maintain a higher ASP for Big Windows (thereby adding value to their x86 ecosystem partnership). As things stand now, they appear to have some doubts about the future of x86.

P.S. (Just to give the reader an idea of how OS ASPs have scaled with hardware ASPs over the years.) When I recently looked into the price of Windows 95, my internet search turned up a price of $90 to $200. Back in 1995 computer hardware was much more expensive, so paying that kind of money for an OS wasn't such a big deal for the consumer. Now look at the situation: the hardware has gotten so much cheaper in 2011, but Microsoft wants to charge essentially the same price ($100 for an OEM Home Premium license and $180 for a retail Home Premium license). IMO, Google sees Microsoft's predicament and thus the opportunity to release a "free" OS in the form of Android. This Google strategy essentially begins the process of eating MS's lunch and shifting computing away from x86.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
In reference to posts #125 and #149.

40 CPU cores @ $6,874 licensing per core = $274,960 per server.
8 CPU cores @ $6,874 licensing per core = $54,992 per server.
OK, but what reason would you have for buying Enterprise? Standard will cover the majority of users, and BI almost all of the rest. As IDC said, you're reading too much into it. Normal users will have 4-8 core servers, a small handful of CALs, and typically won't even have a use for BI. But BI has some SMB usefulness, so it also has a not-per-core option. A single SQL Server with >16 cores is going to be used for serious processing queries (and likely application code mixed in), not your typical transactional CRUD work. OLTP and the like are too fast and too scalable to bother with a big server when you can have a high-availability cluster of nice 4/8-core servers instead.

If you want to expose your SQL server to the outside, you need to pay up, which is where BI's per-core licensing comes in. It's not nudging you towards some specific CPU. It's saying you can either pay a lower price but have to add CALs (internal use by devs/DBAs), or pay through the nose but not be limited by CALs (external/intranet use). Per-core is just a way to not have to say up front that it's going to be tens of thousands of dollars or more, and to allow MS to get more money from users doing complicated analytics, who tend to be businesses with high expenses already.
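To put rough numbers on that tradeoff, here's a minimal sketch (the $6,874-per-core figure is the one quoted above; the Server+CAL prices are hypothetical placeholders, not Microsoft list prices):

```python
# Rough comparison of the two SQL Server licensing models (sketch only).
# PER_CORE comes from the figures quoted in posts #125/#149; the
# Server+CAL prices below are hypothetical placeholders.
PER_CORE = 6874          # $ per core (per-core model)
SERVER_LICENSE = 900     # $ per server (hypothetical)
CAL = 210                # $ per client access license (hypothetical)

def per_core_cost(cores):
    return cores * PER_CORE

def server_cal_cost(users):
    return SERVER_LICENSE + users * CAL

# A small shop: one 8-core box, 25 named internal users.
print(per_core_cost(8))     # 54992 -- matches the figure quoted above
print(server_cal_cost(25))  # 6150  -- why Standard+CALs wins for SMBs
```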

Enterprise is for businesses that are happy as can be to pay a mint, as long as it works and has every feature their DBAs asked for. Smaller businesses, which can't afford that but might want to scale out, are going to go FOSS from the start if they have any intelligence, so they will be irrelevant to SQL Server's licensing schemes.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
OK, but what reason would you have for buying Enterprise?

Apparently MS sells or plans to sell SQL 2012 Enterprise licenses.

That is all I need to know.

Now how does this factor into the analysis?

Well, that is where the server consultants, CPU engineers, financial people and all the other experts come in. They would assess the situation and come up with the appropriate x86 strategy based on sound financial principles.

I am not worried about AMD's desire (assuming they had the resources) to compete on the top end; my biggest concern is how Intel views this in light of Itanium.

My hunch is that they have a strong conflict of interest in allowing x86 to grow any more powerful due to the presence of Itanium. But maybe this new SQL 2012 licensing scheme will cause them to reconsider? Or could it be that Intel was actually part of the SQL 2012 strategy?

Both Intel and MS have an obvious vested interest in seeing x86 continue on. Most likely Intel sees the gains on the top end as more realistic than making headway against ARM.

Furthermore, if we look at the current LGA 1155 socket, it almost appears Intel has hit some kind of performance ceiling. Notice the same clocks on the top 22nm and 32nm SKUs....only the power consumption has dropped.

Therefore my guess is that we will see the LGA 1155 socket's successor progress to some kind of laptop/mini-desktop platform, while LGA 2011's successor gets a much stronger CPU core based on high IPC.

Essentially we would see a shift of x86 towards the higher end:

1. Atom either gets dropped or a new Intel-designed non-x86 uarch replaces it for phones.

2. Haswell becomes the new low-power platform (mini-desktop/laptop).

3. LGA 2011's successor gets a new higher-IPC CPU. (This finally splits the desktop/server CPU core design from the laptop CPU core design.)
 
Last edited:

denev2004

Member
Dec 3, 2011
105
1
0
I still can't understand why MS doesn't count each module as a whole, given that BD needs a whole module to be truly functional. MS's idea about IPC doesn't make sense; besides, MS is confronting many powerful competitors and doesn't really have the ability to ask the processor companies to do what it believes is right.
 

lol123

Member
May 18, 2011
162
0
0
Both Intel and MS have an obvious vested interest in seeing x86 continue on. Most likely Intel sees the gains on the top end as more realistic than making headway against ARM.
No, that's really not accurate. It's actually Intel and AMD that have a vested interest in the continued market dominance of x86. Microsoft has always been, or at least tried to be, platform independent and has offered its Windows NT operating systems for MIPS, PowerPC, Alpha and Itanium in the past, and now it is extending that strategy to ARM. In fact, Windows NT 3.1 was first developed for a RISC CPU, the Intel i860, and then ported to x86 to avoid too great a prevalence of x86-specific functions in the code. x86 is also the chief platform of Linux, with a great deal of support from Intel, which of course also annoys Microsoft. If they could move away from their special relationship with Intel and the x86 architecture, they would, and they have repeatedly tried to do so in the past. But they can't.

No offense, but there's really not much in your hundreds upon hundreds of posts that hits the mark.
 
Last edited:

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Apparently MS sells or plans to sell SQL 2012 Enterprise licenses.

That is all I need to know.
No, it's not. Who might be interested in purchasing them is quite important. They did not separate out features into different editions for no reason. It is a matter of what kind of business will need which set of features, and pricing according to how deep their pockets are likely to be.

SQL standard: using MS tools increases productivity v. other options, but your server and workstation costs matter (you have a small team doing most everything, a small set of servers, modest office space, etc.)...or you're lucky enough to have more, but not need anything in the more expensive SQL Server editions (lucky you!).

SQL BI: as above, but vertical integration into other MS products is something you see as adding value, and/or find self-analysis and reporting features to add value, and/or need to ensure that 3rd parties don't screw up your data or permissions (MDS and DQS).

SQL Enterprise: you have deep pockets, and you need a DB server that makes coffee for you in the morning, never breaks, and never slows down.

The pricing model allows BI and Enterprise to be priced competitively with Oracle, DB2, Informix, etc., while also allowing SQL BI to offer a similar cost to the prior midrange SQL editions it replaces.

Look at it this way: customers won't go for paying Microsoft a % of their yearly revenue. Therefore, Microsoft tries to find other ways to get better margins from those with deep pockets.

Now how does this factor into the analysis?

Well, that is where the server consultants, CPU engineers, financial people and all the other experts come in. They would assess the situation and come up with the appropriate x86 strategy based on sound financial principles.

I am not worried about AMD's desire (assuming they had the resources) to compete on the top end; my biggest concern is how Intel views this in light of Itanium.

My hunch is that they have a strong conflict of interest in allowing x86 to grow any more powerful due to the presence of Itanium. But maybe this new SQL 2012 licensing scheme will cause them to reconsider? Or could it be that Intel was actually part of the SQL 2012 strategy?
Forget Itanium. It seems to be an Intel engineer playground as of late, but functionally, it doesn't matter. Itanium hasn't been good for anything but SIMD-unfriendly vector FP loops since the first chip was released. It had more RAS features, but the truly useful ones have all migrated to x86 now.

Itanium lost to x86 before it was even created. Here's a complicated math problem for you: given a CPU that operates strictly in-order, to the point of often relying on software to handle register read and write hazards, how will you statically compile an executable whose instruction paths are ideal for all situations, with limited instruction and data bandwidth, and high memory latency? As of 2011, no one has figured out how to do that.

Both Intel and MS have an obvious vested interest in seeing x86 continue on. Most likely Intel sees the gains on the top end as more realistic than making headway against ARM.
MS has a vested interest in MS software selling, and is now working to decouple themselves from x86. X86 needs to be a strength for MS, but not a weakness.
Furthermore, if we look at the current LGA 1155 socket, it almost appears Intel has hit some kind of performance ceiling. Notice the same clocks on the top 22nm and 32nm SKUs....only the power consumption has dropped.
Intel PR has said more than once that they are focusing more on power efficiency than raw performance. The mass market is currently limited by the watt-hours in their batteries. Throw in BD not performing well enough to need super-speedy high-end SKUs, and, well, there you go.

Therefore my guess is that we will see the LGA 1155 socket's successor progress to some kind of laptop/mini-desktop platform, while LGA 2011's successor gets a much stronger CPU core based on high IPC.
But why? That would mean another product family to develop, when there is no need. They need a superior Atom, and then just one other high-performance core. There is no need for a middle, as long as they can sell slow SKUs using said high-performance core.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Itanium lost to x86 before it was even created. Here's a complicated math problem for you: given a CPU that operates strictly in-order, to the point of often relying on software to handle register read and write hazards, how will you statically compile an executable whose instruction paths are ideal for all situations, with limited instruction and data bandwidth, and high memory latency? As of 2011, no one has figured out how to do that.

I can't argue with you on CPU architecture. I just don't have the knowledge.

I'd like to point out, though, that according to this source, Itanium does not operate strictly "in-order".

Poulson abandons the idea of simple hardware controlled by the compiler and is the first dynamically scheduled Itanium design, with modest out-of-order execution.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
No, it's not. Who might be interested in purchasing them is quite important. They did not separate out features into different editions for no reason. It is a matter of what kind of business will need which set of features, and pricing according to how deep their pockets are likely to be.

SQL standard: using MS tools increases productivity v. other options, but your server and workstation costs matter (you have a small team doing most everything, a small set of servers, modest office space, etc.)...or you're lucky enough to have more, but not need anything in the more expensive SQL Server editions (lucky you!).

SQL BI: as above, but vertical integration into other MS products is something you see as adding value, and/or find self-analysis and reporting features to add value, and/or need to ensure that 3rd parties don't screw up your data or permissions (MDS and DQS).

SQL Enterprise: you have deep pockets, and you need a DB server that makes coffee for you in the morning, never breaks, and never slows down.

The pricing model allows BI and Enterprise to be priced competitively with Oracle, DB2, Informix, etc., while also allowing SQL BI to offer a similar cost to the prior midrange SQL editions it replaces.

Look at it this way: customers won't go for paying Microsoft a % of their yearly revenue. Therefore, Microsoft tries to find other ways to get better margins from those with deep pockets.

Not sure how many SQL 2012 Enterprise licenses MS will sell.

But I am sure they will sell some, right?

Therefore, whatever deep-pocketed customers MS is able to target with Enterprise are also customers Intel or AMD can target with their highest-IPC x86 processor designs.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Intel PR has said more than once that they are focusing more on power efficiency than raw performance. The mass market is currently limited by the watt-hours in their batteries. Throw in BD not performing well enough to need super-speedy high-end SKUs, and, well, there you go.

The x86 CPUs I am thinking about (New generation of Server/Desktop) wouldn't be running on batteries.

The x86 CPUs for the laptops would be different...think products like Haswell.

And yes, I do agree, the situation for Intel's top 22nm CPU SKUs doesn't look so great from a raw performance standpoint. Yet another reason a high-performance desktop CPU based on a very profitable high-IPC x86 server CPU core design would be welcomed.

MS has a vested interest in MS software selling, and is now working to decouple themselves from x86. X86 needs to be a strength for MS, but not a weakness.

Agreed.

This is why I think Intel and AMD releasing more powerful x86 processors makes total sense.

At the rate things are going, MS is headed towards a shift from PC OS profits to server software profits.

Having more powerful and desirable x86 processors available might help shift some of those profits back to the desktop.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
But why? That would mean another product family to develop, when there is no need. They need a superior Atom, and then just one other high-performance core. There is no need for a middle, as long as they can sell slow SKUs using said high-performance core.

I would argue that dropping Atom may make more sense. (I just can't see Intel x86 winning against ARM in smartphones in any way that is cost-effective.)

With that being said, I doubt all the mobile tech (shrinking WiFi to integrate on the SoC, the Infineon baseband, etc.) will go to waste.

Instead, Intel will likely introduce a new non-x86 CPU to replace Atom. Something lightweight.....a new smartphone-specific CPU that doesn't need to be burdened with all the complex instructions Intel will integrate into its future highest-performance server/ultra-performance-desktop x86 CPUs.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
In that hypothetical world the enterprise revenue models would not be based on cores; they'd find some other means of billing customers $250k for software that is going to run on $10k worth of hardware. It would be based on GHz, or IPC, or the width of the core, etc.

Maybe there is another way of looking at the problem?

If the current scheme allows for $250,000 software fees on top of $10,000 hardware.....maybe AMD or Intel will find justifiable means to increase the price of Server CPUs?

After doing an internet search I came up with the IBM z196 as an example of a really expensive server chip (5.2 GHz quad core, 512 mm² on 45nm PD-SOI)----> http://www.slashgear.com/ibm-z196-5...rds-could-cost-hundreds-of-thousands-2599009/

However, in the Oracle "per core" licensing model that extreme CPU is still rated the same as any Itanium CPU core (purchased after Dec. 1, 2010) or POWER7 core...or twice the rating of any Intel or AMD x86 core. Oracle's "Core Factor" listing can be found here.

Therefore, couldn't there conceivably exist two tiers of x86 server CPUs for both Intel and AMD? The low tier would be a continuation of what exists today (Xeon and Opteron)....and the future "Wide-like-Itanium" (for lack of a better description) higher-IPC x86 server chips could make up what would become tier 2. <--Obviously these wouldn't be as expensive as the IBM z196, but I would imagine a reasonable mark-up could still be applied.
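As a sketch of how that "core factor" arithmetic works (the 1.0 vs. 0.5 ratings follow the description above; the list price is a hypothetical placeholder, not Oracle's actual price):

```python
# Oracle-style "core factor" licensing (illustration only).
# Factors follow the post above: z196/Itanium/POWER7 = 1.0, x86 = 0.5.
LIST_PRICE_PER_LICENSE = 47500   # $ per processor license (hypothetical)

CORE_FACTOR = {
    "ibm_z196": 1.0,
    "itanium_post_dec_2010": 1.0,
    "power7": 1.0,
    "x86": 0.5,
}

def oracle_license_cost(arch, cores):
    # cost = physical cores x core factor x per-processor list price
    return cores * CORE_FACTOR[arch] * LIST_PRICE_PER_LICENSE

# A quad-core z196 costs twice as much to license as a quad-core Xeon:
print(oracle_license_cost("ibm_z196", 4))  # 190000.0
print(oracle_license_cost("x86", 4))       #  95000.0
```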
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I can't argue with you on CPU architecture. I just don't have the knowledge.

I'd like to point out, though, that according to this source, Itanium does not operate strictly "in-order".
Poulson isn't out yet. There's some chance that it may move Itanium from being comical to being OK, should you use VMS, HP-UX, NonStop, etc. No one in their right mind would use Itanium without needing a feature only it has.

IA64 going OOO is very much a white flag of defeat (I'll bet the R&D work on it will help future non-Itanium developments, though).

From the link:
"The irony is that Poulson departs from the principles behind Itanium and follows a much more nuanced approach to computer architecture. The Itanium architecture and early implementations were a reaction to the increasing hardware complexity in the early 1990's. They were based around the theory that the hardware should be very simple and almost totally managed by software."

The thesis behind HP-WideWord/EPIC/IA64 was based on an idea that RISC architectures would, as time went on, reach a limit of about 1 IPC maximum. While it was not a proven thing, it was, given the assumptions of the time (late 80s/early 90s), not unlikely, if one assumed speeds would keep ramping up (like the 10GHz Netburst CPU by 2011 that never came to be). Also, at the time, superscalar processors (able to execute more than one instruction of an in-order stream at once) were fairly new, and register renaming more complicated than very simple shadowing was practically unheard of. The Pentium Pro had yet to show the world the future. With that in mind, however, their solution was bass-ackwards.

The answer, as seen by these guys now at HP, was to have the compiler tell the CPU exactly what, when, where, and how instructions can be executed. By doing so, with VLIW, instructions are neatly aligned and easy to process, and the processor itself can be very simple (because it has to manage very little of the state!).

Such hardware needs great compilers to run effectively, because the whole idea is that the hardware is fast but dumb. Well, there's not merely that, but also that while memory space gets cheaper all the time, memory bandwidth stays very expensive. All that explicit telling the CPU what to do makes for absolutely massive binaries, necessitating massive instruction caches, and that's just the tip of the iceberg.

The reality is that ideal compilers don't exist, and may never exist. A compiler can help the CPU out through good static analysis or profiling, but that's as far as it can go. What the CPU needs changes; it is dynamic. The best way we've yet figured out to predict a processor's future needs is to make the processor do its own analysis of recent events.

Well, at an even lower level than branching, this is true. Register renaming (I cannot for the life of me come up with a concise explanation, sorry) helps enable effective reverse-engineering of ILP from an otherwise in-order stream of instructions. Instructions not dependent upon each other can be executed in any order. So, if instruction A's data needs to be fetched, but B, C, D, and E do not depend on A, they can run ahead while A waits. The end result is performance similar to an in-order CPU in which A did not have to wait. In reality, which instruction is going to wait is often unknown, and with data structures that are difficult to fit in L1, or that cross a few cache lines over the course of just a few instructions, you can pretty much be assured that some are going to wait while others can run, and it could be different ones each time, or different ones on different CPU families.
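To make that concrete, here's a toy sketch (purely illustrative; real schedulers track far more state and have limited issue width) of independent instructions issuing while a long-latency load is outstanding:

```python
# Toy out-of-order issue: an instruction is "ready" once the registers
# it reads have been produced. Latencies are made up; "A" models a load
# that misses cache.
instrs = {
    # name: (dest_reg, source_regs, latency_in_cycles)
    "A": ("r1", [],     10),   # long-latency load
    "B": ("r2", [],      1),
    "C": ("r3", ["r2"],  1),
    "D": ("r4", [],      1),
    "E": ("r5", ["r4"],  1),
    "F": ("r6", ["r1"],  1),   # the only instruction that truly needs A
}

done_at = {}   # reg -> cycle when its value becomes available
for cycle in range(16):
    for name, (dest, srcs, lat) in instrs.items():
        if dest not in done_at and all(done_at.get(s, 99) <= cycle for s in srcs):
            done_at[dest] = cycle + lat
            print(f"cycle {cycle:2}: issue {name}")
# Output: A, B, D issue at cycle 0; C, E at cycle 1;
# only F has to wait for A's load, issuing at cycle 10.
```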

Implementing OOOE is complicated and intricate, but it works better than any alternative found so far, and could even improve alternatives (such as VLIW, and ISAs that try to map data dependencies). The increase in potential IPC from OOOE, however, has a side effect: now far memory is effectively even farther, because groups of instructions are being completed faster. Bring on speculation and caches (xtors, man-hours, heavy bags of cash!)!

The core of EPIC was that these kinds of technologies would become too difficult to implement and run out of steam, while compilers could keep on advancing performance, so simpler, faster hardware would be better in the long run. Implementation difficulty is certainly an issue, and we are now in the era of severely diminishing returns, but those techniques haven't nearly run out of steam. Meanwhile, RISC-type developments (simpler, faster hardware; more complicated software) have run out of steam instead, and all the great compiler work to make RISC CPUs fly has ended up helping CISC every bit as much as RISC. The modern surviving CISC and RISC ISAs that are fairly good, anyway, bear little resemblance to old CISC or RISC, each having adopted the better of the other's features over the years.

As such, Itanium was late, hot, and underwhelming when the first Merced chips came out, and except for certain niches within the niche that is HPC, it has remained late, hot, and underwhelming. While compilers have advanced, they haven't advanced enough to get rid of the need for increasing hardware-level complexity.
Instead Intel will likely introduce an new non-x86 CPU to replace atom. Something lightweight.....a new Smartphone specific cpu that doesn't need to be burdened with all the complex instructions that Intel will integrate on its future highest performance Server/Ultra Performance Desktop x86 CPUs.
Intel wants to use x86 to help make them successful. I could see them dropping IA32 support and forcing UEFI, but that would be the extent of the ISA reductions they could make without shooting themselves in the foot. Complicated instructions are not a burden, you see. That's 80s thinking. They are a method of keeping commonly executed code from getting too big :awe:.

If the current scheme allows for $250,000 software fees on top of $10,000 hardware.....maybe AMD or Intel will find justifiable means to increase the price of Server CPUs?
No, because the market won't pay more. I guarantee you there are more CentOS 5.x LAMP or LAPP installations running on Xeons or Opterons than the entire set of all servers in the world running software purchased on a per-processor-core basis.

Don't look at it from the perspective of CPU value. Look at it from the perspective of an organization where spending millions of dollars to upgrade software, using talented certified professionals, is a bargain compared to risking potentially cheaper options. That the pricing is per-core is just how MS can get away with charging that kind of money, because 99% of the time small/lean outfits won't have a need for many-core MS SQL servers, and huge entrenched businesses have been used to that kind of crap, or worse, since the old super-proprietary mainframe and minicomputer eras.

Therefore, couldn't there conceivably exist two tiers of x86 server CPUs for both Intel and AMD? The low tier would be a continuation of what exists today (Xeon and Opteron)....and the future "Wide-like-Itanium" (for lack of a better description) higher-IPC x86 server chips could make up what would become tier 2. <--Obviously these wouldn't be as expensive as the IBM z196, but I would imagine a reasonable mark-up could still be applied.
No. Intel and AMD already have three tiers each (counting PhII w/ ECC as equivalent to a 1155 Xeon). But it's not by any kind of IPC-type metric, because everyone benefits from higher IPC, since higher speeds mean too much heat. It's by number of CPUs, number of memory channels, and RAS features.
 
Last edited:

denev2004

Member
Dec 3, 2011
105
1
0
I can't argue with you on CPU architecture. I just don't have the knowledge.

I'd like to point out, though, that according to this source, Itanium does not operate strictly "in-order".
Poulson hasn't been released...the newest version is still Tukwila.
Poulson is kind of a total change of Itanium's arch.
I think they once said they'll widen the instruction issue unit as well as provide a better RAS level.

The x86 CPUs for the laptops would be different...think products like Haswell.
They won't wait for the release of Haswell, as that's such a long time away.
And I think a new arch is not usually where Intel looks to reduce power consumption. BTW, the TDP of the desktop version of Haswell is said to reach 95W again.
Also, Intel has already said they will use Ivy Bridge as part of its Ultrabook idea.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
And yes, I do agree, the situation for Intel's top 22nm CPU SKUs doesn't look so great from a raw performance standpoint. Yet another reason a high-performance desktop CPU based on a very profitable high-IPC x86 server CPU core design would be welcomed.

You seriously misunderstand things. All they need to do to make a higher-performing CPU is increase the TDP limit. Rather than 77W, take it to 95W, or even 130W. That gives you more cores, more clock speed, or a combination of both. It's not like Haswell is using a low-IPC core.

And I'm willing to almost bet that one of the goals for the 22nm Ivy Bridge processors is to bring top-end mobile performance on par with the fastest LGA1155 parts.

Prediction 2012: Fastest i7 Ivy Bridge mobile CPU = Fastest LGA1155 Ivy Bridge desktop CPU(like 3770K)

denev2004 said:
BTW, the TDP of the desktop version of Haswell is said to reach 95W again.

I have a different idea. Something that may not seem as cool.

They messed up their almost perfect lineup because Sandy Bridge E/EP was significantly delayed. Therefore, the desktop (which according to the media is "dying" anyway) will be "downscaled", per se, to make Sandy Bridge E look better. Consider further that stock speeds are irrelevant to quite a portion of desktop users due to overclocking. Therefore: reduced clock speeds, and the lower TDP that comes with them.

Maybe with Ivy Bridge E, they won't have that problem. They don't need to "downscale" to 77W; the E chips will be fast(er) enough compared to Haswell anyway. Back to 95W.
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
No, because the market won't pay more. I guarantee you there are more CentOS 5.x LAMP or LAPP installations running on Xeons or Opterons than the entire set of all servers in the world running software purchased on a per-processor-core basis.

Don't look at it from the perspective of CPU value. Look at it from the perspective of an organization where spending millions of dollars to upgrade software, using talented certified professionals, is a bargain compared to risking potentially cheaper options. That the pricing is per-core is just how MS can get away with charging that kind of money, because 99% of the time small/lean outfits won't have a need for many-core MS SQL servers, and huge entrenched businesses have been used to that kind of crap, or worse, since the old super-proprietary mainframe and minicomputer eras.

Before we say the market won't pay more, I'd like to see some analysis done on the potential for adding an additional tier of AMD x86 server products.

One thing I noticed in my brief reading about "per core" server licensing is that additional threads from SMT do not count as additional cores. For example, a large/wide high-IPC AMD CPU core with four-way SMT would still count as only one core.
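A trivial sketch of why that matters under per-core pricing (reusing the $6,874-per-core figure from earlier in the thread; the 8-core, 4-way-SMT chip is hypothetical):

```python
# Per-core licensing counts physical cores, not hardware threads.
PER_CORE = 6874  # $ per core, from the figures quoted earlier

def license_cost(physical_cores, threads_per_core=1):
    # threads_per_core is deliberately ignored: SMT threads are free.
    return physical_cores * PER_CORE

print(license_cost(8))     # plain 8-core:                   54992
print(license_cost(8, 4))  # 8-core, 4-way SMT (32 threads): still 54992
```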
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
The Fortune 1000/5000 companies dictate licensing to some degree. If they take a hit from these many-core chips, then they will either negotiate site licenses or the software vendors will tweak their points systems. I lean towards the opinion that the current flavor of database list pricing has more to do with selling to governments.

Edit: Oh, and gouging small to medium-size businesses that want 'Enterprise' features.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
You seriously misunderstand things. All they need to do to make a higher-performing CPU is increase the TDP limit. Rather than 77W, take it to 95W, or even 130W. That gives you more cores, more clock speed, or a combination of both. It's not like Haswell is using a low-IPC core.

More cores? Why do we need more cores?

One thing that concerns me about Intel and AMD is that they promote and/or use additional non-CPU technologies to do tasks that were originally performed on the CPU. In the case of AMD it is the GPU that will replace CPU cores for transcoding.....with Intel it is "Quick Sync".

Yet they want to sell us on the idea of more cpu cores?

This makes me wonder what is going to happen with future Intel enthusiast sockets (i.e., LGA 2011 successors). Are we going to see those turn into "many core" arrangements without IGP? <----Just wondering how many people are going to want "many core" ATX towers if other (non-CPU) techs are trying to obsolete this concept?

Surely Intel has a much higher IPC "non mobile" CPU core on the horizon (specifically for the enthusiast socket)? With these small nodes they certainly have the silicon die space to pull this off if the processor design does not include an IGP.
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Surely Intel has a higher IPC CPU core on the horizon? It's not like they don't have the silicon die space to pull off a processor design with no IGP.

Yes. That's called Haswell. Remember Sandy Bridge? It emphasized power efficiency, but it was faster at doing things too. The combination of the two factors massively improved laptops over previous-generation chips.* Haswell will continue to do so. What you won't see are chips increasing performance per clock while damning everything else.

The "core" point I made was to prove that they can do it, if they see the need to do it. After all, what's the difference between Sandy Bridge and Sandy Bridge E again? What about people wanting 6 cores on LGA1155 platforms?

Again, with either Ivy Bridge or Haswell, top-end laptop chips will equal desktop LGA115x SKUs. Desktop gains are decreasing partly due to Intel de-focusing on it.

*(Sandy Bridge on mobile: http://www.anandtech.com/show/4084/intels-sandy-bridge-upheaval-in-the-mobile-landscape/3)
 
Last edited:

cbn

Lifer
Mar 27, 2009
12,968
221
106
Yes. That's called Haswell. Remember Sandy Bridge? It emphasized power efficiency, but it was faster at doing things too. The combination of the two factors massively improved laptops over previous-generation chips.

See my edit.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
See my edit.

Pursuing power efficiency does not make it a mobile chip. I'm gonna try to put this as gently as possible, but that's a fundamental mistake in analyzing modern CPUs.

Before you say anything else: making an entirely separate core for nothing but better per-core performance, just for enthusiast Sandy Bridge E-class desktops, is a strategic decision even a company as small as AMD would not make.

Imagine if a Sandy Bridge E core tried to improve per-core performance by 10%. If they did this without caring about power efficiency, you'd probably get a 130W chip like Sandy Bridge E but with only 4 cores. The high TDP would keep the clocks from going higher, and you wouldn't even get the few % gained by the higher Turbo Mode on the 3960X. That's like buying a 3770/3770K chip for $999 and 130 watts!

Are we going to see those turn into "many core" arrangements without IGP?

Who knows what they are doing? BTW, you are talking about 2-3 years or more into the future, which is an eternity in a Moore's Law world.
 
Last edited:

denev2004

Member
Dec 3, 2011
105
1
0
I have a different idea. Something that may not seem as cool.

They messed up their almost perfect lineup because Sandy Bridge E/EP was significantly delayed. Therefore, the desktop (which according to the media is "dying" anyway) will be "downscaled", per se, to make Sandy Bridge E look better. Consider further that stock speeds are irrelevant to quite a portion of desktop users due to overclocking. Therefore: reduced clock speeds, and the lower TDP that comes with them.

Maybe with Ivy Bridge E, they won't have that problem. They don't need to "downscale" to 77W; the E chips will be fast(er) enough compared to Haswell anyway. Back to 95W.
Actually, the one who told me that the 4-core version of Haswell will go back to 95W claims that the 18W difference from Ivy Bridge is caused by the new IGP in Haswell...
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Actually, the one who told me that the 4-core version of Haswell will go back to 95W claims that the 18W difference from Ivy Bridge is caused by the new IGP in Haswell...

That could be true. But if you don't use the integrated graphics, there would be more potential for better CPU performance.

Case in point (refer to Starcraft 2 performance): http://www.anandtech.com/show/4083/...core-i7-2600k-i5-2500k-core-i3-2100-tested/11

The HD 3000 is on par with or beats the HD 5450 in most games, but falls far behind in Starcraft 2. See also how the higher-clocked HD 3000 in the 2600K fails to beat the 2500K's HD 3000, while it can maintain a ~20% advantage in other games.

That's likely because sharing with integrated graphics hampers CPU performance. Why do I mention Starcraft 2 as an example? Maybe the drivers aren't optimized. While that might be true, Blizzard worked very closely with Intel to optimize Starcraft 2 on the HD 2000/3000.

95W CPU in CPU intensive applications = 95W all to CPU
95W CPU in CPU and GPU intensive applications = xW for CPU + yW for GPU = 95W

The x in the second scenario has to be less than 95W. In other words, if Haswell's GPU uses that much power, the CPU would be running that much slower to fit in the same TDP.
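The same arithmetic as a sketch (the 18W figure is the rumored Haswell iGPU delta mentioned above; everything here is illustrative):

```python
# Shared-TDP budget: whatever the iGPU burns comes out of the CPU's share.
TDP = 95  # W, total package budget

def cpu_budget(gpu_draw_w):
    # x = TDP - y, per the equations above
    return TDP - gpu_draw_w

print(cpu_budget(0))   # 95 W -- CPU-only workload, whole budget to the cores
print(cpu_budget(18))  # 77 W -- iGPU active (the rumored 18W delta)
```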
 
Last edited:

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
That could be true. But if you don't use the integrated graphics, there would be more potential for better CPU performance.

Case in point (refer to Starcraft 2 performance): http://www.anandtech.com/show/4083/...core-i7-2600k-i5-2500k-core-i3-2100-tested/11

The HD 3000 is on par with or beats the HD 5450 in most games, but falls far behind in Starcraft 2. See also how the higher-clocked HD 3000 in the 2600K fails to beat the 2500K's HD 3000, while it can maintain a ~20% advantage in other games.

That's likely because sharing with integrated graphics hampers CPU performance. Why do I mention Starcraft 2 as an example? Maybe the drivers aren't optimized. While that might be true, Blizzard worked very closely with Intel to optimize Starcraft 2 on the HD 2000/3000.

95W CPU in CPU intensive applications = 95W all to CPU
95W CPU in CPU and GPU intensive applications = xW for CPU + yW for GPU = 95W

The x in the second scenario has to be less than 95W. In other words, if Haswell's GPU uses that much power, the CPU would be running that much slower to fit in the same TDP.

I like the fact that with power-gating I can sacrifice my iGPU to be the dark-silicon in my 2600K, boosting the power consumption budget for the CPU cores during computations.

Provided Intel keeps giving away the iGPU for free and basically only charging for the CPU performance/capability then I'll be a happy camper.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Pursuing power efficiency does not make it a mobile chip. I'm gonna try to put this as gently as possible, but that's a fundamental mistake in analyzing modern CPUs.

AFAIK, the general concept I am thinking about shouldn't make for a less efficient desktop CPU core.

What I feel is needed is a higher-IPC design with greater width (with the appropriate compensations engineered into the CPU to make the design work). With Poulson Itanium increasing to a whopping 12-wide, maybe Intel will permit this change to happen?

Maybe a 6-wide to 8-wide x86 CPU core with clocks starting at 2.0 GHz to 2.8 GHz for "non mobile" use, rather than trying to press the current 4-wide x86 CPUs to ~4 GHz (or beyond) for the desktop.
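Back-of-the-envelope peak throughput for that tradeoff (upper bounds only; sustained IPC on real code is well below issue width, so this is just a sketch):

```python
# Peak instruction throughput = issue width x clock (billions of instr/s).
# Upper bound only; real code never sustains full issue width.
def peak_ginstr_per_s(width, ghz):
    return width * ghz

print(peak_ginstr_per_s(4, 4.0))  # 16.0 -- current 4-wide pushed to 4 GHz
print(peak_ginstr_per_s(8, 2.5))  # 20.0 -- hypothetical 8-wide at 2.5 GHz
```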

Before you say anything else: making an entirely separate core for nothing but better per-core performance, just for enthusiast Sandy Bridge E-class desktops, is a strategic decision even a company as small as AMD would not make.

The main goal would be a higher-end x86 quad-core server part. The E-class desktops are just a sub-market.
 
Last edited: