AMD: It Won't Be About 'AMD vs. Intel' Anymore


cbn

Lifer
Mar 27, 2009
12,968
221
106
Apparently Oracle, IBM and Microsoft are the top three for server licensing:

Oracle licenses on a per-core basis, with different scores assigned to CPU cores to rate them (they call this the "Core factor"; you can find the table here). In the current system all Intel and AMD cores are equivalent with respect to calculating licensing costs.

IBM also licenses by core, but uses a slightly more precise system to differentiate CPU cores. They call this system "Processor Value Unit", or "PVU". You can see how they score the processors here.

Microsoft SQL Server 2012 licenses by core, but it appears every CPU core is treated the same when calculating fees.
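To make the cost mechanics concrete, here is a rough sketch of the two per-core schemes in Python. The 0.5 core factor and the ~$6,874-per-core SQL Server 2012 figure come up later in this thread; the Oracle per-processor price is a made-up placeholder, not a quoted figure.

```python
# Rough sketch of the two per-core schemes described above.
ORACLE_CORE_FACTOR_X86 = 0.5        # current factor for both Intel and AMD
ORACLE_PRICE_PER_PROC = 47500       # placeholder per-processor license price
SQL2012_PRICE_PER_CORE = 6874       # per-core figure cited later in the thread

def oracle_cost(cores):
    """Oracle: cores x core factor = processor licenses to buy."""
    return cores * ORACLE_CORE_FACTOR_X86 * ORACLE_PRICE_PER_PROC

def sql2012_cost(cores):
    """SQL Server 2012: flat per-core fee, every core counted the same."""
    return cores * SQL2012_PRICE_PER_CORE

for cores in (6, 12, 16):
    print(cores, oracle_cost(cores), sql2012_cost(cores))
```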
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
Heh, you are right, my mistake: the math is wrong and I did it twice.

I apologize for the error; I'll correct it.


IntelUser2000's numbers are correct.

What do you make of the server numbers? It just seems so bleak. I mean, with only 3-6% market share (unit or revenue), how much money is there really for the mobo makers to extract from such a minimal market?

Can they justify spending development dollars to build next-gen AMD server boards when so few units can be expected to sell? (Especially considering that AMD intends to make their CPUs socket-compatible, so customers have even less reason to buy new mobos.)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Just to add more information to the server licensing debate.

VMware sells licenses in packs of six cores.

http://www.vmware.com/download/eula/multicore.html

From the FAQ:

How does this policy affect my licensing costs on servers between 7 and 12 cores per processor?

A. Software licenses with a six (6) cores per processor restriction: customers may combine two (2) 6-core licenses for these processors.

B. Software licenses with a twelve (12) cores per processor restriction: customers may use one (1) 12-core license for these processors.

Although the licensing is not "per core", it does appear lower core counts are generally favored. This, of course, puts Interlagos ("Bulldozer" server) at a disadvantage against Xeon.
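A minimal sketch of the pack math, assuming licenses simply round up to whole six-core packs per processor as the FAQ describes:

```python
import math

# How many six-core licenses a given configuration needs, rounding up
# per processor as in the quoted FAQ (a 12-core CPU combines two packs).
def six_core_licenses(cores_per_cpu, sockets=1):
    return math.ceil(cores_per_cpu / 6) * sockets

print(six_core_licenses(6, 2))    # 2P hexcore Xeon: 2 licenses
print(six_core_licenses(12, 2))   # 2P 12-core: 4 licenses
print(six_core_licenses(16, 2))   # 2P 16-core Interlagos: 6 licenses
```

So per socket, a 16-core Interlagos needs three packs where a hexcore Xeon needs one.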
 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
What do you make of the server numbers? It just seems so bleak. I mean, with only 3-6% market share (unit or revenue), how much money is there really for the mobo makers to extract from such a minimal market?

Can they justify spending development dollars to build next-gen AMD server boards when so few units can be expected to sell? (Especially considering that AMD intends to make their CPUs socket-compatible, so customers have even less reason to buy new mobos.)

That 3-6% of the market is mind-boggling. They had 3-4x that market share only ~7 years ago! Considering that server market share is also tough to grow quickly, that's a steep hill to climb.

Market share is really, really key for servers because it locks in substantial long-term business. You will sell replacement parts, service, and support for multiple years on a sale. That's good business not just for AMD and their MB partners, but also for the companies selling the hardware, like Dell, HP, or IBM.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
That 3-6% of the market is mind-boggling. They had 3-4x that market share only ~7 years ago! Considering that server market share is also tough to grow quickly, that's a steep hill to climb.

Market share is really, really key for servers because it locks in substantial long-term business. You will sell replacement parts, service, and support for multiple years on a sale. That's good business not just for AMD and their MB partners, but also for the companies selling the hardware, like Dell, HP, or IBM.

This makes me wonder: who is AMD going to compete against?

Bear in mind I am still trying to learn this, but could we see...

ARM (with larger cores) vs. Bulldozer for Linux servers, where "per core" licensing isn't applied?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
What?? I thought the AMD fans were excusing Bulldozer's low IPC and high power usage
on the desktop by saying it is really a server chip. Now you are saying it is a mobile chip?

Yes, that is what appears to have happened.

For server, it looks like AMD simply took the Bulldozer mobile design and multiplied cores (lowering TDP on each core) to make the server SKUs. The resulting design incurs a penalty from Amdahl's law, but increases performance per watt on each core.
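As a rough illustration of that Amdahl's law penalty (all numbers hypothetical, not measured Interlagos/Xeon data):

```python
# Amdahl's law: with parallel fraction p of the work and n cores,
# speedup = 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.8                                  # hypothetical parallel fraction
print(amdahl_speedup(p, 16))             # ~4.0x with 16 slow cores
print(1.4 * amdahl_speedup(p, 8))        # ~4.7x with 8 cores, each 1.4x faster
```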

Single-threaded performance (from the decreased-TDP cores) was regained by the use of Turbo (but remember this comes with a performance-per-watt penalty, due to the higher frequency/voltage needed on the core(s) involved).
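A sketch of why turbo costs performance per watt, assuming (as a simplification) that dynamic power scales with V² × f and that the higher clock needs extra voltage; the +10%/+20% figures are purely illustrative:

```python
# Dynamic power scales roughly as V^2 * f. If turbo needs extra voltage to
# reach the higher clock, power grows faster than performance.
def relative_dynamic_power(v_scale, f_scale):
    return v_scale ** 2 * f_scale

turbo_power = relative_dynamic_power(1.10, 1.20)   # ~1.45x power
turbo_perf = 1.20                                  # ~1.2x performance at best
print(turbo_power, turbo_perf / turbo_power)       # perf/W drops to ~0.83x
```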

My big question: How did virtualization and per core server licensing factor into that strategy?

The way I see things now, the "per core" server software licensing issue wasn't an issue with MS SQL at the time. But could virtualization have helped a "High IPC CPU core" strategy regain any losses suffered from Pollack's rule?

For anyone not familiar with Pollack's rule, here is a chart (originally posted by Idontcare in another thread):

[Chart: Pollack's rule, showing single-thread performance scaling roughly as the square root of core complexity]
As you can see from the chart, a much higher-IPC AMD core design would have suffered a performance-per-watt penalty, accompanied by a decrease in frequency due to the increased width. However, it would not have suffered an Amdahl's law penalty, and probably not much of a turbo penalty (due to a greater amount of TDP already being concentrated in each CPU core).
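Here is the rule itself as a toy calculation, using the commonly quoted square-root form of Pollack's rule:

```python
import math

# Pollack's rule: single-thread performance grows roughly with the square
# root of core complexity (area), while power grows roughly linearly with
# it. A hypothetical core made 4x bigger illustrates the trade-off.
complexity = 4.0
perf = math.sqrt(complexity)       # ~2x single-thread performance
perf_per_watt = perf / complexity  # ~0.5x: half the efficiency per core
print(perf, perf_per_watt)
```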

That brings us to the effect of virtualization on wider, higher-IPC cores. Could running more OS instances on each of these hypothetical AMD jumbo cores have improved CPU utilization, helping to regain the performance-per-watt decrease brought about by Pollack's rule?

For example, I have read that 10 OS instances can be run on a quad-core server CPU.

What would happen if those CPU cores became even wider, with even greater IPC? Could a greater number of OS instances, say 16 to 20, be run on that hypothetical high-IPC quad core? (Increasing "CPU core saturation", a term I just made up for lack of a better one.)
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
In reference to the information in post #125, I find it shocking that replacing a 16-core server CPU with a quad core would result in savings of ~$81,000 for MS SQL 2012... in each socket!

Am I understanding this correctly? It almost seems absurd.

If true... does anyone else think this gives Intel enough incentive to change its strategy for the enthusiast mainboards (e.g., LGA 1366 and LGA 2011)?

Larger, higher-IPC cores for those mainboards with no IGP, while the mainstream and mobile designs (e.g., LGA 1155, etc.) continue evolving the current CPU design towards low power?
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
I'm not shocked at all - but then I've been doing this for 25 years.

That's why it's so hilarious when JFAMD comes in here with his "but we cost a thousand dollars less".

Big deal. I'm not going to let a 1% price difference make my decisions for me. And now that equivalent software performance on BD actually costs a great deal more than on an Intel system, if there actually was any value proposition to a BD server it's been absolutely wiped out.

It's about TCO, not the price of the CPU.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
ARM (with larger cores) vs. Bulldozer for Linux servers, where "per core" licensing isn't applied?
Per core licensing works as well on Linux as it does any other OS that a large company wants to support. If you mean the license going to ARM, I have a hard time believing that will significantly affect ASP, since licensing CPUs from ARM reduces other R&D costs.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Does that answer, which included the wording "10% of the total market," refer to the entire server market, or just the x86 server market?

P.S. Here is the link to Microsoft SQL 2012---> http://www.microsoft.com/sqlserver/en/us/future-editions/sql2012-licensing.aspx

(It looks like "per core" and "per client" are listed as licensing options<----Hmmm, I just wonder how this will affect the strategy of building server hardware.)

More information: http://redmondmag.com/articles/2011...ls-sql-server-2012-licensing-and-pricing.aspx

Based on this article, high-core-count processors like Bulldozer will be at a disadvantage.
I would think that would have to be the x86 server market. I was recently informed that ARM has a larger presence in the server market than AMD does. AMD has been passed by 32-bit ARM CPUs in servers.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
More proof AMD is sticking with x86:

http://vr-zone.com/articles/amd-starts-hiring-world-class-soc-engineers/14113.html

"Thus, it comes as no surprise that AMD's LinkedIn page is now filled with job posts and there is active recruiting taking place behind the scenes. Bear in mind that some of these ads are searching not just for one job position, rather several talented engineers at the same time (depending on interview results). Engineers are even looked for on Twitter.

So, who is AMD looking for? The positions are mostly located in Austin (TX), Sunnyvale (CA), Boxborough (MA) and Fort Collins (CO) and we have to say that job descriptions are intriguing. This is just a small sample of jobs available:

Video Architect, Sunnyvale CA - "Responsible for the design and development of x86/SSE and GPU-based Software video codecs, such as MPEG-2, H.264 and SVC"

SOC Integration Engineer, Austin TX - "Own floor planning (estimates & actual), C4 planning, packaging requirements, top level interconnects, implement overlays, full chip assembly along with associated physical verification of assembled chip for next generation AMD Server and Fusion microprocessors."

SOC Design Verification Engineer, Austin TX "This position is for an experienced verification engineer in AMD's SoC verification group working on next generation high performance x86 client and server microprocessors. The location is the Lone Star Design Center in southwest Austin.

........"
 

iCyborg

Golden Member
Aug 8, 2008
1,330
56
91
I would think that would have to be the x86 server market. I was recently informed that ARM has a larger presence in the server market than AMD does. AMD has been passed by 32-bit ARM CPUs in servers.
I thought it was only recently announced that HP would have the first ARM servers in 2012. How can they already have a larger share?

And if people think BD should be killed by per-core licensing, what about ARM - it's basically a shitload of small cores?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
59
91
For anyone not familiar with Pollack's rule here is a chart (originally posted by Idontcare in another thread):


I appreciate the honorable mention, it's rare to come across an individual with such respectable netiquette as yourself, but you really don't have to cite me. Use my links freely, and with a clear conscience; I'm not offended at all if you adopt them for use as your own :thumbsup:
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
In reference to the second part of post #133, I just wonder if Itanium is part of the reason Intel doesn't want to provide a large increase in IPC or single threaded performance to x86? (Too much overlap between x86 and Itanium CPU performance could lead to an even stronger reduction in Itanium sales?)
 
Last edited:

wlee15

Senior member
Jan 7, 2009
313
31
91
IBM and Oracle both offer different values for cores of different processor architectures for their software (for example, IBM rates all Opteron processors at 50 PVUs per core, while all post-Nehalem processors are rated at 70, 100, or 120 depending on the number of sockets the processor supports). I would imagine Microsoft would have something similar, since I doubt Microsoft will casually let the CPU vendor define what a core is.
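As a sketch of that PVU math (the ratings are the ones above; the price per PVU is a made-up placeholder):

```python
# Sketch of the PVU scheme: total PVUs = cores x per-core rating, and
# fees scale with total PVUs. Ratings come from the post above; the
# price per PVU is a placeholder, not a quoted figure.
PRICE_PER_PVU = 100                       # placeholder dollars per PVU

def license_cost(cores, pvu_per_core):
    return cores * pvu_per_core * PRICE_PER_PVU

print(license_cost(16, 50))    # 16-core Opteron @ 50 PVU: $80,000
print(license_cost(6, 70))     # hexcore 2S Xeon @ 70 PVU: $42,000
```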
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
And if people think BD should be killed by per-core licensing, what about ARM - it's basically a shitload of small cores?

Well, that is my point.

ARM and Bulldozer might end up competing on servers where the license fees are structured differently from "per core".

Maybe this is another reason AMD should rethink their CPU strategy?

With higher-IPC x86 cores (if that is possible), they get away from ARM servers, and the resulting product lines don't overlap in any way.
 

wlee15

Senior member
Jan 7, 2009
313
31
91
In reference to the second part of post #133, I just wonder if Itanium is part of the reason Intel doesn't want to provide a large increase in IPC or single threaded performance to x86? (Too much overlap between x86 and Itanium CPU performance could lead to an even stronger reduction in Itanium sales?)

Itanium hasn't really been marketed as a high-performance part for a long time (some say it never was a high-performance processor), with its frequent and long delays killing it on the performance front. It's now mainly sold for mission-critical servers.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
IBM and Oracle both offer different values for cores of different processor architectures for their software (for example, IBM rates all Opteron processors at 50 PVUs per core, while all post-Nehalem processors are rated at 70, 100, or 120 depending on the number of sockets the processor supports). I would imagine Microsoft would have something similar, since I doubt Microsoft will casually let the CPU vendor define what a core is.

Yes, I saw IBM has more levels in their "CPU core" rating scheme.

However, with Oracle, Intel and AMD cores are counted the same (0.5 units per core).

So (using Oracle) the old 12-core MC Opterons were twice as expensive to license as a hexcore Intel Xeon.

This leaves the question in my mind: what server software licenses are used most frequently with Opteron?
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
My big question: How did virtualization and per core server licensing factor into that strategy?
Not all that much, at least not together. More cores for virtualization without oversubscription is not a bad way to go. However, I doubt licensing by processor has, does, or will affect how either AMD or Intel decides to make their CPUs. The market has choices, so what will happen is that companies that balk at the cost will begin to move away from getting their software from companies that use extortive licensing. Such licensing is a managerial decision, an attempt to get higher margins from companies with deeper pockets, who have more to lose by rocking the boat than they'd gain by not paying higher software costs.
So (using Oracle) the old 12 core MC Opterons were twice as expensive to license as a hexcore Intel Xeon.
Meanwhile, if you were even mildly worried about that sort of thing, you would never have done more than politely accept the HPOC's business card. To want Oracle in the first place, software licensing costs need to be pretty low on your priority list. Microsoft wants to extract high margins from those same types of customers, yet they also need to not piss off the smaller customers who find coupling hardware performance to software cost a strange idea.

All in all, software licensed per processor core is a medium/big business, vertical-market thing, not the common case. Where it is common, software will tend not to account for a large portion of the TCO of the servers, or at least the software in question won't. What's $100k/yr for software when the employees supporting it cost you a few million, when the customers using it make you a few million, or when moving to something new would cost you a few million?

The way I see things now, the "per core" server software licensing issue wasn't an issue with MS SQL at the time.
MS said they wouldn't go per-core. Hence, it wasn't a problem until they announced licensing changes that would include per-core options. Some businesses will be corner cases for the new licensing, after expecting no worse licensing options than per-socket plus CAL. Per-socket is generally accepted, because more than 2 sockets per server is quite uncommon for a smaller business to bother with.




But could virtualization have helped a "High IPC CPU core" strategy regain any losses suffered from Pollack's rule?
No. Virtualization lets you gain from any performance improvements when you have servers that, on average, don't need all the hardware you can give them, or for which the manageability is useful. Why buy 10 servers when you can have a single server do all the same work? In the process, you also get the ability to do live system backups, and the ability to bring any of said servers back online on another physical computer in a very short time frame, either as part of load balancing or trying to get back online after TSHTF.

As you can see from the chart, a much higher IPC AMD core design would have suffered a performance per watt penalty.
Compared to an ideal narrower core, an extremely wide core designed to extract very high ILP, or code that perfectly scales out to many cores. Reality keeps those from being common, so the moderately bigger core, with moderately better IPC, will end up with better performance per watt across a whole task's execution (which may, especially in servers, encompass many programs, hence not limiting the statement to WPP), with modest increases in core counts over time. It's a compromise, and there is some point where a beefier core will be more detrimental than beneficial. Scaling is not perfect, so single threads matter, and will continue to matter.
This accompanied by a decrease in frequency due to the increased width.
Except that frequency has, of late, been dominated by power consumption, so there's an effective speed ceiling. Or, more accurately, points on a speed vs. power curve that they just can't cross if they want people to buy it. BD, for instance, has already been OCed very high, but it is too hot for normal users at those speeds, especially in areas with high electricity costs.

However, it would not have suffered an Amdahl's law penalty, and probably not much of a turbo penalty (due to a greater amount of TDP already being concentrated in each CPU core).

That brings us to the effect of virtualization on wider, higher-IPC cores. Could running more OS instances on each of these hypothetical AMD jumbo cores have improved CPU utilization, helping to regain the performance-per-watt decrease brought about by Pollack's rule?
Each virtual instance gets its own processes, which then get their own memory spaces and their own threads. So, depending on workload and hypervisor OS, you'll get a minor hit in performance (or not), and otherwise get the effect of several separate servers running in one box, which uses less power than many boxes and adds useful features for your sysadmins. Virtualization is about consolidating and managing computer resources. It adds its own set of performance quirks, which are worth worrying about, but a faster CPU is a faster CPU is a faster CPU. You can't get more out of a processor's functional units with virtualization.

For example, I have read that 10 OS instances can be run on a quad-core server CPU.
You can run about as many as you have RAM for. It's just that each server you add beyond the cores they can have to themselves effectively reduces the peak performance of each virtual server, should they all be highly active at the same time.
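A minimal sizing sketch of that point, with hypothetical numbers: RAM caps the instance count, while core oversubscription mostly costs peak performance.

```python
# "About as many as you have RAM for," roughly. All numbers here are
# hypothetical, including the hypervisor overhead.
def max_instances(host_ram_gb, per_vm_ram_gb, hypervisor_overhead_gb=4):
    return (host_ram_gb - hypervisor_overhead_gb) // per_vm_ram_gb

print(max_instances(64, 4))    # 15 small VMs on one quad-core box
print(max_instances(64, 8))    # 7 larger VMs on the same box
```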





What would happen if those CPU cores became even wider, with even greater IPC? Could a greater number of OS instances, say 16 to 20, be run on that hypothetical high-IPC quad core? (Increasing "CPU core saturation", a term I just made up for lack of a better one.)
Scheduling many instructions effectively is hard (i.e., more R&D costs, more die space, more power consumption, and then it may reach a point where a miracle is needed on top of that). As the core is widened, it will just get harder. As the core has to run many more cycles out from memory, it will need deeper instruction windows, making it again harder; then it will need deeper structures to handle instruction completion and writing data out to cache, then the same for main memory; then more cache is needed, which needs to be made faster to be effective, or the core needs to be beefed up even further to handle the longer latencies of a larger/farther cache. And such features tend to induce penalties when things don't go perfectly, or if your program performs data accesses that don't conform to the CPU designers' targeted common cases.

That whole time, nothing gets done about the times when the core is sitting around waiting for something to do, which happens pretty much all of the time. At those times, all the wider and deeper structures get you nothing but more leakage power to deal with, and they add more cycles of latency all over the place, potentially increasing the chances of the CPU waiting to do something, which is what you were trying to prevent with all of this! The answer is to add complexity to other parts of the CPU to reduce those unhidable latencies (IMC, fast on-die networking, cache coherency protocol improvements, larger/better prefetcher and branch predictor histories, etc.), which, well, still makes the thing bigger, hotter, and costlier all around.

After all that, IPC returns are diminishing for scalar processing, such that even idealistic simulations with extremely large instruction windows and perfect caches have trouble getting higher than ~6-8 IPC, even with what would seem like ideal high-ILP program loops. Much code simply does not have high ILP to begin with, and when that is the case, all you can do is try to reduce latencies. You can't increase IPC when the ILP is not there. Realistically, 2 has historically been a good spot to be at, and 4 a good brick wall to try to reach.

If you used SMT to try to fill all of those execution units with more threads, you'd still have all the complexity (power consumption, R&D costs) and latencies to deal with, and would still be fighting the negatives of SMT. If you implement SMT ideally to prevent that situation (i.e., Niagara), then each thread suffers too much compared to faster CPUs made primarily to run single threads.

All put together, TANSTAAFL.

Big, wide RISC cores with many GPRs and perfect compilers, that only ran loopy code, were going to rule the world. In the end, it just doesn't work for general-purpose computing. Which flows right into...
In reference to post #133, I just wonder if Itanium is part of the reason Intel doesn't want to provide a large increase in IPC or single threaded performance to x86? (Too much overlap between x86 and Itanium CPU performance could lead to an even stronger reduction in Itanium sales)
No. Itanium was destined to be a mass-market failure. It succeeded in helping to kill off PA-RISC and Alpha, due in part to management being convinced that Itanium was going to be good, but it is not. Everything bad about RISC went in, and perfect compilers did not come out. Intel has, over the last several years, been implementing useful RAS features from IA64 in high-end x86 parts. If you were building your software systems today from nothing, x86 offers all the RAS you'll need (what it doesn't have, you can make up for in software).

Intel doesn't want to provide a large increase in IPC or single-threaded performance because even they, the mighty giant Intel, lack infinite funds, manpower, and creativity with which to attempt it (unsolved engineering and math problems often need more than money and time to be solved), and as a business, they like making money. They have to choose a point within the good/fast/cheap triangle, just like everybody else, and try to make what customers want without bankrupting themselves, just like everybody else (well, maybe not AMD). As it stands now, the fastest general-purpose computer processors in the world are x86 CPUs designed and manufactured by Intel.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
What do you make of the server numbers? It just seems so bleak. I mean, with only 3-6% market share (unit or revenue), how much money is there really for the mobo makers to extract from such a minimal market?

Can they justify spending development dollars to build next-gen AMD server boards when so few units can be expected to sell? (Especially considering that AMD intends to make their CPUs socket-compatible, so customers have even less reason to buy new mobos.)

I believe the server mobo makers like Tyan and SuperMicro (perhaps more) can sustain spending money and resources on low-volume server mobos because they have very high ASPs on the dual-socket and 4-socket parts.
On top of that, they also sell high-ASP 1U/2U/4U rackmount server barebones (rackmount case, dual- or 4-socket server motherboard, power supply, etc.).
Even single-socket G34 ATX mobos carry very high ASPs, like the $250 SUPERMICRO MBD-H8SGL-F-O ATX Server Motherboard (Socket G34, AMD SR5650, DDR3 1333).

Dell and HP (and more) also sell servers (blades, etc.) where the mobo as a part is a fraction of the total cost of the entire machine. I'm sure they don't care about the motherboard cost that much.

Also, from AMD's Q3 2011 earnings call:
http://seekingalpha.com/article/303...sses-q3-2011-results-earnings-call-transcript

Rory P. Read

Now let's take a deeper look at the mobile segment. In the third quarter, we saw record AMD processor shipments and revenue in the mobile space. Over 90% of our mobile units were Fusion APUs, which are uniquely suited to meet the advanced processing needs of today's mainstream and entry-level mobile PC buyers. For example, we believe AMD's worldwide share of the $200 to $600 retail price band is now over 28%. These bands make up 45% of the retail notebook volume across the planet, and we believe AMD gained consumer notebook share in the key emerging markets year-over-year, particularly in China, where our microprocessor revenue grew 23% quarter-on-quarter.

We also set another record for Brazos shipment in the quarter, up 36% quarter-to-quarter, and our mobile AMD processor unit sales in total are now up over 50% in the last 2 quarters alone. We clearly outgrew the market in the mobile segment and we believe we took share in the quarter.

Seems Bobcat and Llano (both APUs) are really starting to raise both shipments and revenue. They really have to push harder for that market from now on, with a new product every year, both at the entry/low-power level and up through the midrange and entry-level gaming mobile market.

Now for our server business. The third quarter was the beginning of a move in the right direction, with server revenue up 27% sequentially. Initial production shipments of our new AMD Opteron products gained traction in the high-performance computing space, where with the help of our key partner, Cray, and some of our most notable customers which included the National SuperComputer Centers in Stuttgart, Germany, in the U.K. and in Switzerland, as well as the United States Department of Energy Titan project at the Oak Ridge National Laboratory. This project, in fact, is likely to be one of the world's fastest supercomputers. We are excited about our key partners, including HP and Dell, who are launching new products based on our new Opteron platform this fourth quarter, and we are confident they will do well against competition, particularly in key workload areas like the cloud and virtualization.

Bulldozer has the potential to raise server shipments and revenue if AMD will spend money on the software. They really need to attack the mid-level (DP Xeon) server segment, where Intel makes most of its server revenue.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
Single-threaded performance (from the decreased-TDP cores) was regained by the use of Turbo (but remember this comes with a performance-per-watt penalty, due to the higher frequency/voltage needed on the core(s) involved).

That's only true if raising voltage is required to raise frequency. Since Bulldozer is designed for higher clock speeds, it wouldn't need additional voltage.

As you can see from the chart, a much higher-IPC AMD core design would have suffered a performance-per-watt penalty, accompanied by a decrease in frequency due to the increased width.

You don't increase performance per clock "just like that". There's a reason they moved away from pursuing that alone. The best approach to increasing performance has always been a balanced one.

A good chip works well across most, if not all, segments of computing. And right now, that's not what Bulldozer is. Saying that it's a "server chip" or a "mobile chip" is just an excuse.
 
Last edited:

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Just to add more information to the server licensing debate.

VMware sells licenses in packs of six cores.

http://www.vmware.com/download/eula/multicore.html

From the FAQ:



Although the licensing is not "per core", it does appear lower core counts are generally favored. This, of course, puts Interlagos ("Bulldozer" server) at a disadvantage against Xeon.

VMware vSphere 5

http://www.vmware.com/files/pdf/vsphere_pricing.pdf

VMware vSphere 5 Licensing: Per-Processor with vRAM Entitlements

VMware vSphere 5 is licensed on a per-processor basis with a vRAM entitlement. Each VMware vSphere 5 processor license comes with an entitlement to a certain amount of vRAM capacity, or memory configured to virtual machines. Unlike in vSphere 4.x, where core and physical RAM entitlements are tied to a server and cannot be shared among multiple hosts, the vRAM entitlements of vSphere 5 licenses are pooled, i.e. aggregated, across all vSphere servers managed by a vCenter Server instance or multiple vCenter Server instances in Linked Mode.

Licensing Unit: Per Processor (CPU)
Like VMware vSphere 4.x, VMware vSphere 5 is licensed on a per-processor basis. Each physical processor (CPU) in a server needs to have at least one vSphere 5 processor license key assigned to be able to run vSphere. vSphere 5 license keys can be purchased, deployed and managed in the same way they were with vSphere 4.x.

No Limits on Physical Resources
VMware vSphere 5 licensing removes all restrictions on physical cores and physical RAM. This change eliminates barriers to deploying VMware vSphere on new multicore server configurations, improving customers' ability to choose server hardware that best meets their requirements.
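A sketch of how that scheme counts licenses, assuming a hypothetical 48 GB vRAM entitlement per license (the real entitlement varies by edition):

```python
import math

# vSphere 5 as quoted above: at least one per-processor license per socket,
# with vRAM entitlements pooled across all hosts under vCenter. The 48 GB
# entitlement per license is an assumed figure for illustration only.
VRAM_PER_LICENSE_GB = 48

def vsphere5_licenses(total_sockets, total_vram_gb):
    by_socket = total_sockets
    by_vram = math.ceil(total_vram_gb / VRAM_PER_LICENSE_GB)
    return max(by_socket, by_vram)      # whichever constraint binds

print(vsphere5_licenses(4, 128))   # sockets bind: 4 licenses
print(vsphere5_licenses(4, 512))   # pooled vRAM binds: 11 licenses
```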

 

cbn

Lifer
Mar 27, 2009
12,968
221
106
You don't increase performance per clock "just like that". There's a reason they moved away from pursuing that alone. The best approach to increasing performance has always been a balanced one.

Sure, building high IPC would be harder, but I feel like MS is sending Intel the appropriate message in the form of SQL 2012 "per core" licensing with no core performance rating adjustment.

A quote from the article in post #125:

So now there will also be a huge incentive on software licensing to step down from a 4-way 10-core system with 40 cores total to a 2-way system with perhaps 8-12 cores total (going forward, this cost structure essentially kills the new AMD Bulldozer 16-core processor, which had just recently achieved price performance competitiveness with the Intel 6-core Westmere-EP in 2-way systems).

It appears MS wants Intel to build those much higher-IPC quad or hex cores. A nice side benefit would be strengthening the desirability of the MS desktop OS (which I'm sure would help sales or ASPs in that department).
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
In reference to posts #125 and #149:

40 CPU cores @ $6,874 licensing per core = $274,960 per server.
8 CPU cores @ $6,874 licensing per core = $54,992 per server.

So going with two quad-core server CPUs yields a savings of $219,968 per server.
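The same arithmetic as a small function, using the $6,874-per-core figure cited earlier:

```python
# Per-server SQL Server 2012 cost at a flat per-core price.
PRICE_PER_CORE = 6874

def sql2012_server_cost(sockets, cores_per_socket):
    return sockets * cores_per_socket * PRICE_PER_CORE

big = sql2012_server_cost(4, 10)     # 4-way 10-core: $274,960
small = sql2012_server_cost(2, 4)    # 2-way quad-core: $54,992
print(big - small)                   # $219,968 saved per server
```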

Of course, that doesn't take into account any possible CPU core rating changes by Oracle or IBM... but surely MS SQL market share is robust enough for such a "money saving" x86 server quad core to be designed.

Maybe a nice side benefit would be really good ASPs for the desktop versions of these "money saving" server quad cores.

Combine the two markets (ultra-high-performance desktop x86 + a money-saving, high-IPC server quad core for MS SQL 2012) and maybe... just maybe... the appropriate metrics would be there to assign the appropriate engineering budget to the project.
 