alkemyst
No Lifer
- Feb 13, 2001
12 cores overclockable? Ridiculous. I mean, how big is the market really for users who would need 12 cores anyway? Ehh, I guess I can't be mad, because there are folks out there in the world who do, but I'd much rather have seen a more marketable product. A badass 8-core would do just fine.
The market for an unlocked enthusiast 12-core is so incredibly tiny that Intel won't release it. And Francois is trying to create a Twitter PR case for one.
We get 8 cores next year with Haswell-E. Something that can actually match performance expectations, unlike a 12-core.
Oh those 12 core parts can definitely meet performance expectations... just not for gaming.
Plenty of other things besides gaming too, since not all apps can scale to 12 cores and 24 threads. But again, gaming is the main driver for the enthusiast class.
I can't think of many apps which demand high performance (other than gaming) which don't scale well with core count. Rendering, image editing, video editing, video transcoding, scientific computing, software compilation (on any well laid out project)... Are there any good counter-examples? (Genuine question, I'm curious.)
As soon as you move away from the imaging/rendering segment, the number of highly multithreaded applications drops like a stone. Other applications that can scale might still be slower with 12 cores at, say, 3.5GHz than with 8 cores at 4GHz.
Huge Excel sheets, for example, demand high performance and don't scale. Client-server applications don't tend to scale either, unless concurrency can offset it. While it's an atrocity, one could simply mention Access. Compiling in VS2012 is also faster on a quad-core than on a slower 8-core, for example, due to the scaling issue.
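It's easy to see where that cores-versus-clocks trade-off lands with a toy Amdahl-style model (this assumes ideal scaling of the parallel fraction and that run time scales inversely with clock — a sketch, not a benchmark):

```python
def time_units(p: float, cores: int, ghz: float) -> float:
    """Normalized run time under Amdahl's law: the serial part runs on one
    core, the parallel fraction p is split across all cores, and everything
    speeds up linearly with clock frequency."""
    return ((1.0 - p) + p / cores) / ghz

# Compare a 12-core at 3.5GHz against an 8-core at 4GHz
# for workloads with different parallel fractions p.
for p in (0.5, 0.8, 0.9, 0.99):
    twelve = time_units(p, 12, 3.5)
    eight = time_units(p, 8, 4.0)
    print(f"p={p}: 12c@3.5GHz {'wins' if twelve < eight else 'loses'}")
```

In this model the 12-core at 3.5GHz only pulls ahead once roughly 83% or more of the work is parallel — below that, the faster 8-core wins, which matches the point above.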
Ah, massive spreadsheets- that eternal bane of the office worker. D:
But compilation in VS2012 can actually be incredibly parallel, if you structure your solution file (and code) correctly. The more recent Visual Studios will not only compile multiple projects in parallel, but also multiple .cpp files within an individual project in parallel. Break things up into multiple smaller .cpp files instead of individual monolithic ones, and you will start hitting those cores hard. I know firsthand that compiling a decent-sized app will easily eat up 16 cores!
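For reference, those are two separate knobs in MSBuild/MSVC: `/m` (a.k.a. `/maxcpucount`) on the msbuild command line builds independent projects in parallel, while the compiler's `/MP` switch (exposed in the project file as `MultiProcessorCompilation`) compiles the .cpp files within a single project in parallel. A minimal .vcxproj fragment for the latter (the solution name below is a placeholder):

```xml
<!-- In the .vcxproj: compile this project's .cpp files in parallel (/MP) -->
<ItemDefinitionGroup>
  <ClCompile>
    <MultiProcessorCompilation>true</MultiProcessorCompilation>
  </ClCompile>
</ItemDefinitionGroup>
```

Then build with `msbuild MySolution.sln /m` to get project-level parallelism on top. You need both enabled to saturate a 16-core box.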
I definitely agree. There also is a point at which even multi-threaded apps will be slowed down by some shared resource. The main examples that come to mind are the memory controllers and memory itself. You'll also have more threads competing for the L3 cache.
Do you have a link to such a study? I'm very interested; I wonder where the sweet spot for number of cores vs. frequency is.
A Firefox compile in VS2012 is an example of a fast quad-core easily beating a slower 8-core Xeon, even though the Xeon has more raw computational power, so to speak. So while you see 100% usage, it might simply not scale 100%.
It's different for different solutions, depending on how well they've been structured. Certainly for the stuff we compile, a 16-core machine finishes _much_ faster than an 8-core machine. Can't comment on how common that is, though. *shrug*
You would be surprised how badly image-processing software scales with the number of cores. I'm talking Photoshop and Lightroom included, not to mention other garbage like proprietary raw-processing software, etc. This kind of software actually scales better with frequency, although there are tasks where multiple cores can be used. I can link to data on this topic if you're interested.
Even if Intel released a theoretical 256-core 4GHz Haswell tomorrow at $200, you would still suffer the same issues of serial code and Amdahl's law.
Even with 90% parallel code, a 1024-core CPU is only about 10 times faster than a single core, and only about 3 times faster than a quad.
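Those numbers follow directly from Amdahl's law, S(n) = 1 / ((1 - p) + p/n), with parallel fraction p = 0.9:

```python
def speedup(p: float, n: int) -> float:
    """Amdahl's law: overall speedup on n cores when a fraction p
    of the work can run in parallel (the rest stays serial)."""
    return 1.0 / ((1.0 - p) + p / n)

print(speedup(0.9, 1024))                    # ~9.91x over a single core
print(speedup(0.9, 4))                       # ~3.08x over a single core
print(speedup(0.9, 1024) / speedup(0.9, 4))  # ~3.2x: 1024 cores vs. a quad
```

The serial 10% puts a hard ceiling of 10x on the speedup no matter how many cores you throw at it, which is why the 1024-core monster barely triples the quad.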
It can be, and in some cases already is, done efficiently.
A good example (except that it is somewhat naturally parallelizable, which weakens my argument) is gaming, and graphics cards.
That is, if you can afford the (currently very high) price, you can take many PC games, and with high-end enough graphics cards, multiple of them, enough output screens, and high enough resolution, you can go really wild with parallelism in many games today.
I don't think I understand your argument, so could you please elaborate?
Because gaming is one of those things that scales poorly. BF4, for example, only somewhat scales due to the multiplayer part, a.k.a. concurrency. But in singleplayer it's back to the old-fashioned way.
What on earth even makes much use of 12 cores? I wouldn't want it unless a good amount of software and games can use all 12.
Sorry, I was mixing together two different concepts at the same time, which caused confusion, and I was partly wrong in what I said as well (due to my mix-up in the explanation).
Let's split a typical modern game into two sections. One section PURELY does the raw screen graphics.
The other section does everything else, including physics, player interactions, monsters, logic, AI, non-visual aspects of the game, etc.
The pure/raw graphics processing — i.e. transformations on roughly 10,000,000 pixels (changing pixels, drawing triangles/polygons, anti-aliasing, etc.) — can (but NOT always) be highly parallelized with modern graphics hardware.
In other words, a good-quality, well-written game will (typically, on a PC) allow going from 1080p up to 1440p, and maybe more, as long as you have powerful enough graphics card(s) and the particular game allows this.
BUT the other aspects of the game (which I think you were thinking of) MAY NOT allow such multi-core/thread/GPU multitasking to take place. This depends on which game it is (but there is a basic similarity).
---------------------------------------------------
I.e. some aspects of the graphics (of e.g. games) CAN be sped up with multiple, very powerful graphics processors, plus the necessary CPU power to keep them fed.
But other aspects of the graphics and/or game are CPU-bound, and can mainly use only extra frequency/IPC of the CPU, and "maybe" some extra cores (in a limited way, in "some" cases).
--------------------------------------------------
I really meant "graphics" as in updating pixels/triangles/polygons on the screen, NOT the other aspects of the graphics and CPU processing of games.
Sorry for my faulty, partly incorrect initial explanation.
I should have said raw graphics pixel updates (which are usually fully scalable, within limits), rather than saying "games", which was way too broad a term.
But CPU load barely changes, if at all, between running at 480p and 2160p. All the quality settings are just GPU load.
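That observation fits a simple toy model (all numbers below are invented for illustration, not measurements): per-frame CPU work (game logic, AI, draw-call submission) is roughly resolution-independent, while GPU work grows with pixel count, so the frame time is set by whichever side finishes last:

```python
def frame_time_ms(resolution_pixels: int,
                  cpu_ms: float = 8.0,
                  gpu_ms_per_mpixel: float = 4.0) -> float:
    """Toy model: CPU cost per frame is fixed; GPU cost scales with
    resolution. The slower of the two limits the frame."""
    gpu_ms = gpu_ms_per_mpixel * resolution_pixels / 1e6
    return max(cpu_ms, gpu_ms)

print(frame_time_ms(854 * 480))    # 480p: CPU-bound, 8.0 ms per frame
print(frame_time_ms(3840 * 2160))  # 2160p: GPU-bound, ~33.2 ms per frame
```

At 480p the GPU finishes its ~1.6 ms of work long before the CPU's 8 ms, so cranking the resolution down doesn't help frame rate at all — exactly the "CPU load barely changes" effect.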
I just want an affordable 8 core mainstream K processor. Too much to ask?