a good example of CPU limitations

Page 4 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.

BFG10K

Lifer
Aug 14, 2000
22,709
2,980
126
Originally posted by: MrK6

That's because anything you've stated is a fairy tale.
Nope, try harder.

I live in reality, where the easiest way to overcome an architecture limitation is to go multi-core (every major hardware producer, AMD, Intel, and NVIDIA, has realized this; for some reason you still can't grasp it). But I guess if we're going to keep dreaming, I'll take a flying car.
But none of this changes any of my points.

That's because no one's coded for it yet,
I see. You live in "reality", but it hasn't happened yet?

And why do you keep repeating the same stupid arguments every single reply,
Because your responses don't get it.

If you want to argue the merits of dual-core based on the fact that Solitaire is not coded to support multi-core, my words are lost on such stupidity.
This is a strawman argument on your part.

Perfect example of why I say you've never used a quad-core system. Kind of tough to find merit in something you've never used, I'd imagine.
The perfect example of how because you have no answer, you resort to rubbish like this.

Evidently you don't, which, again, is ironically humorous.
Thanks for confirming you know nothing about SLI/CF AA modes.

If it's not available in the driver by default, what's the point? Besides, are you really trying to argue a benefit of using more than 16x SSAA? Please.
Thanks for confirming you know nothing about SLI/CF AA modes.

Then say what you mean and not some example that has no bearing to the discussion; learn to formulate an argument.
Actually it has every bearing, because it benefits old games, unlike your claims to the contrary.

But I thought CPUs all flatline in "real" gaming situations, so there's no point?
Right, so why pay more for a quad?

Which, again, because architectures show similar limitations, makes no significant difference, so you wasted your money. Thank you again for defeating your own argument.
How exactly have I wasted my money given I have a higher performing part (overall) for less money?

And again, your fundamental lack of understanding of gaming performance shows. You constantly make that same statement, but don't even take the time to figure just exactly how wrong you are. Like I said, child covering his ears and yelling.
But I didn't make the statement, you did. You did it when linking to the 1024x768 benchmarks, and you do it every time you continue to refer to them.

So you're wrong 99% then? Let me revise my comment - the quad core is almost always faster than a higher clocked dual core.
Let me revise my comment - the quad-core is almost never faster than a higher clocked dual core for 3D gaming.

So you're changing the subject because you agree that I'm right and you have no idea how to retort because of your fundamental lack of knowledge? Cool.
Good answer, except it doesn't address your fundamental flaws in relying on overclocked hardware. So actually it's quite a worthless response.

Kind of funny because I did, but you conveniently ignore every point I nail home and either change the subject or make some restriction that doesn't pertain to the scope of the argument.
Again, it's a simple yes/no question, and yet you still fail to answer it.

It's fine, I'm going to see how long this keeps going until you're backed into a corner and either stop replying or (try) to close an "off-topic" thread. Do continue, this is great.
I'm not going to close anything because I can't moderate threads I post in. Sorry to shatter your dreams like that.

So then you're admitting that you'd like to take up residence at the Dell forum and have no place discussing taking hardware to the next level in an enthusiast's community. Glad we got that settled.
Again you type responses which totally dodge the issue of your defeated arguments in the hopes that somehow people won't notice.

And as I already stated, you lack a fundamental understanding of hardware performance in games, why do you even think you can continue to participate in this discussion?
Because your arguments are wrong.

I think the better part is the next time you post and "testing" results, I can just link to this thread to show people to take your findings with a grain of salt. Whatever, it's your hole you're digging.
You do whatever makes you happy, chief.

Overclocking is guaranteed.
No it isn?t.

An overclocked speed is not.
Right, which means even stock + 1 MHz isn't guaranteed, according to your own definition. Thanks for agreeing with me.

That's a fundamental difference, one you've missed several times now.
There's no difference except in your semantic and rhetorical games.

Because you incorrectly interpret results and draw some inane argument that no one else is seeing, doesn't mean my examples aren't robust.
Again, overclocking isn't guaranteed, even by your definition, so I'm not misinterpreting anything.

It means you lack the knowledge to follow the logic or ignore the logic because it defeats your argument. Which is it?
False dilemma logical fallacy on your part. Try option 3: your arguments are wrong.

Except that it follows a general statistic and that a decent overclock is guaranteed for 99%+ CPUs.
Whose statistics? The enthusiast forums which represent a drop in the bucket of all processors sold? Likewise, where are these statistics that show us those two particular processors you linked to overwhelmingly only attain those specific speeds?

The examples I make are fine and completely back up my arguments. The extraneous information and tangents you take from them are not.
No they do not. You use overclocked tests using "statistics" pulled from who knows where, while completely ignoring the stock guaranteed figures for every such processor ever made. And you wonder why I can't take your arguments seriously?

In any case, the statistics for stock speeds are inherently flawless, given a processor that can't run said speeds is faulty, so my examples are more robust no matter which way you try to spin things.

Again, completely demonstrating you lack of knowledge regarding hardware and gaming. That hole's like what, 10 feet down by now?
Again, your arguments completely demonstrate that you're oblivious to the fact that nobody games at 1024x768, yet you still attempt to draw real-world inferences from unrealistic scenarios.

Nope, the majority of CPU limited situations will show a greater benefit from more cores if supported. Like I said before, you're trading 5% improvement (at the absolute best) for a remarkable 60-80% improvement.
The key word being if. And that, my friend, is my entire point; the if doesn't come up often enough to justify the clock loss and/or higher price of quad-core. It may in the future, but not now.

Because any idiot realizes that most people don't game at 1024x768, I don't.
Excellent, so we finally have a definite answer. More on this to follow.

And yet we're getting into double digits the number of times you have faithfully demonstrated your lack of understanding of computer hardware and how it pertains to gaming.
So again, another total dodge of the issue on your part. You've admitted that you don't game at 1024x768 and neither do most people, yet you still attempt to make inferences into the real world from scenarios you yourself agree largely don't exist.

At this point I don't think anything else needs to be said about that.

Those are simple tests to isolate factors of CPU performance in gaming, yet you can't even understand something so simple.
Again, if all you're trying to do is isolate CPU performance then why even bother benchmarking games? 1024x768 gaming results are no more relevant than CPUMark because people don't run such scenarios in the real world.

Like I said, this thread is going to be great to post in reply to any of your "findings" to completely remove any ounce of credibility or authority you think you have on the subject of hardware performance in gaming.
Here's a quarter; go and call someone who gives a shit.

I'm an amateur enthusiast and make no claims to even having a professional understanding in such an area, but wow, I'm not this thick.
Careful there, chief, that looks like a personal attack; the first obvious sign of a failing argument on your part, I might add.

Like I said, complete lack of understanding. I'm going to save the thread as well, just so when you realize how it makes you look, you can't magically delete it.
Oh gosh, I didn't realize you liked me so much. :roll:

In your dreamworld, I bet that's exactly how it works. Too bad this is reality and A) quad cores and dual cores are very close in price, and B) the slightly faster/architecturally superior (more cache, w/e) dual core available at the same price does not show enough performance gain in CPU-limited situations to make a difference; more cores do.
Nope, and actually the tests I linked to backed my claims. That's because they, unlike you, actually tested the CPU load on the cores and found in most cases there was one loaded more than the others, which was the primary bottleneck. Here, let me quote a portion of their conclusion again:

Triple- and quad-core processors still make little sense for games, it's fast dual-core processors that are the optimal choice for gaming PCs. Most games use one of the cores more actively than the others, and performance is often limited by the speed of that core. In other words, for games you'd better buy a dual-core processor operating at 3.0GHz than a quad-core processor at 2.4GHz (within the same CPU family, of course).
So no, it isn't a dreamworld, it's an objectively proven reality.

Again, the vast majority of games are architecturally-limited, so why are you paying more money for the same architecture? Extra cores will show tremendous improvements, 5% difference (that's being generous) doesn't.
They aren't architecturally limited at any reasonable settings; that's kind of my point, and kind of why I've shown multiple examples of flat-lining to back these claims.

And I see the Dell forums over at http://en.community.dell.com/forums/ . The funny thing is the "newbie" title they'll give you will be accurate.
That's great and all, but I don't think the Dell forums will help you read those benchmarks any better.

Right here: "an overclocked speed is not." That's a direct quote from you.

That in itself is fallacy because people don't play games with FRAPS on (by introducing a measuring mechanism that affects the outcome you are thereby disqualifying the outcome, basic scientific principle).
I see, so now using Fraps "disqualifies" the outcome? Tell me then, how do you think those precious benchmarks were attained, the 1024x768 ones that you constantly trumpet around as evidence that you're "isolating the CPU", hmmm?

That's right, likely from Fraps. And if not from Fraps, then from game benchmark runs, which according to you also don't count because they're "canned". Those are about the only two ways you can objectively benchmark a game.

Oh dear, it looks like you've dug yourself into quite a hole here. On the one hand you trumpet "evidence", but on the other it might've come from Fraps "disqualifying the outcome", or it might be "canned".

Keep going as this is really quite entertaining. LMAO.

FRAPS runs are a different kind of test if that's what you're referring to (see above). You'd have to make the comparison based on the same operating environment in order to gain any insight due to the variable of FRAPS being on.
I'm sure you think that claptrap looks good, but it's really not related to what I was talking about in the slightest.

Again, double digit representation of a lack of knowledge.
"Because any idiot realizes that most people don't game at 1024x768, I don't."

Then why are you paying the same money for two less cores?
I'm not paying the same money. I'm paying less to get fewer cores, but at a faster clock speed and possibly more cache.

You need to learn how your fundamental lack of understanding of computer hardware as a whole makes you think your understanding of bottlenecking makes you correct.
I know what a bottleneck is. I also know when it applies in the real world, and when it doesn't. You appear to know the former, but your arguments don't seem to grasp the latter.

Double digit instance of a representation of a lack of understanding (still).
"Because any idiot realizes that most people don't game at 1024x768, I don't."

No, that's in non-quadcore games. It's convenient how you've completely "forgotten" to address the fantastic series of benchmarks I've posted that demonstrates this. Amazing.
The "fantastic" series of benchmarks? You mean the ones you referred to as "because any idiot realizes that most people don't game at 1024x768, I don't"?

There goes that hope, like talking to a (very thick) wall.
Again with the personal attacks, the sign of the impending collapse of your arguments.

*sings* douuuubbbbllleeee diigiitt*
*sings* "because any idiot realizes that most people don't game at 1024x768, I don't."

I bet you thought that by agreeing with me and then changing the subject you could get away without it shown just how incorrect you are/were.
Nope, because that's not what I'm doing. Again, I never argued a quad at the same clock and same cache sizes is slower than a dual, because that's theoretically impossible (although they could certainly be the same speed due to flat-lining). What I argued was (1) flat-lining and (2) the cost of quad relative to dual and how this relates to clock speed and cache sizes.

So now it's not that "quad core is not faster" it's just that "quad core is generally faster but there's a GPU bottleneck."
Nope, it's not generally faster. Again, you said it yourself: "that's because no one's coded for it yet". I wouldn't actually say "no one", more like "not enough to currently make quad-core worthwhile in general terms".

So how come you aren't pointing out how completely wrong you are about getting a sick SLI/CF with dual core instead of a better quad core CPU leads to better performance?
I wasn't wrong. Right from the start I've repeatedly said that if you're that concerned about gaming performance, you should be buying more GPUs before getting more than two cores.

I use the terms interchangeably (I dunno, Valve does, if that's not the correct way to do it, correct me).
You want me to correct you? Why? So that when you save this thread you can see that you were wrong?

It's funny you should mention Valve too; at the time they held the technology conference about starting work on multi-core support for the Source engine, Half-Life 2 was already running 17 threads. Think about the implications of this for a moment.

Anyway, that specific quote said "multi-CPU" support, so none of those comments address the actual quote (and how correct it is), but more so change the subject because you don't have a decent answer (that's a common tactic of yours it seems).
I'm reading a pile of irrelevant rhetoric which again appears to totally dodge the issue.

Here's the sequence of events for this particular example:

You first stated: "In games that are multithreaded, matching or exceeding the maximum threads with cores generates the highest performance."

This inference is absolutely wrong and cannot be done, otherwise I can infer that X-Wing runs best on six or more cores.

I responded with words to the same effect: Nope, you absolutely cannot infer that because the number of threads does not guarantee any kind of performance gain. Even games ten years old often spawned around half a dozen threads yet show no benefit from multi-core. Again, simply having multiple threads provides no guarantee of performance gains on multi-core.

You responded: "That's the exception, not the norm. Programmers don't waste time writing in multi-CPU support for the hell of it."

Again, totally wrong. It is not the exception given practically every commercial Win32 game ever made is multi-threaded, yet almost none show a benefit from more than one core.

As examples, Fear runs eight threads, yet doesn't even benefit from dual-core. Quake 4 runs 14 threads, yet we know it doesn't benefit from more than two cores.

Again, you simply cannot infer any correlation between thread count, core count, and multi-core performance gains like you were.
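BFG10K's point here is easy to demonstrate in a few lines: a process can carry a high thread count while only one thread is ever CPU-bound, so extra cores add nothing. The sketch below is purely illustrative (a hypothetical "game" with seven blocked helper threads standing in for audio, I/O, and timer threads), not any actual game's code.

```python
import threading

def idle_worker(stop):
    # Helper threads in real games (audio, I/O, timers) spend nearly
    # all their time blocked like this, not computing.
    stop.wait()

def main_worker(n):
    # The one CPU-bound thread; additional cores can't speed it up.
    total = 0
    for i in range(n):
        total += i * i
    return total

stop = threading.Event()
helpers = [threading.Thread(target=idle_worker, args=(stop,)) for _ in range(7)]
for t in helpers:
    t.start()

# The process now runs 8+ threads, yet only this one does real work.
result = main_worker(1_000_000)

stop.set()
for t in helpers:
    t.join()
```

Counting this process's threads would report eight, exactly the way Fear "runs eight threads", yet the workload is serial and would see no gain from more than one core.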

EDIT: If this still doesn't sink in, in my next post I'm just going to summarize my argument in bullet points, as this posting format is tedious.
Fine. You summarize your points if you want and I'll summarize mine, and we can be done with this, because I don't think we'll ever agree. But if you want to continue then be my guest.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
A dual-core 3 GHz processor can at best match a 6 GHz single core (assuming everything else is the same),

I beg to differ. A quad core processor has more cache (and having more registers might also help).
In a multi-threaded environment, the currently running thread has to be pre-empted in order for the next to run on a single core processor. When this happens, the registers are evicted to cache. If there's not enough cache, the cache is evicted to main ram. There could be a performance hit in a highly threaded environment to not having enough cores, possibly a very large one if cache starved. And the Athlon X2's, Celeron and Pentium Dual Cores, and some of the lower end Core 2 Duos could very well end up cache starved in a multithreaded environment.
Each thread needs its own working space, so if 512KB of cache is sufficient for a single thread, 1MB is needed for two threads, and 2MB for four, and that's only if 512KB per thread is sufficient (any less is noticeably not enough in many apps). I don't think we're far off from needing 1MB of cache per thread, which the i7s and Phenom IIs handle quite well, but the affordable Core 2 Duos won't.
Once software becomes heavily threaded, the lower end dual cores are going to get reamed, and not just from a lack of cores. A 6 GHz single core would still need the same size cache (and possibly the same number of registers) as a 3 GHz dual core in order to run the same threaded load as fast.
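Fox5's cache arithmetic can be written out as a back-of-envelope calculation; note the 512KB per-thread working set and the 2MB Core 2 cache figure are the post's own illustrative numbers, not measurements.

```python
# Back-of-envelope version of the cache-per-thread argument above.
# Illustrative figures only: 512KB per-thread working set and a 2MB
# shared L2 (typical of a low-end Core 2 Duo) come from the post.
KB = 1024
MB = 1024 * KB

def cache_needed(threads, working_set=512 * KB):
    # Each runnable thread wants its working set resident in cache;
    # once the total exceeds cache size, context switches start
    # spilling register/cache state out to main RAM.
    return threads * working_set

CORE2_CACHE = 2 * MB

fits_4 = cache_needed(4) <= CORE2_CACHE  # four threads just fill the 2MB L2
fits_8 = cache_needed(8) <= CORE2_CACHE  # eight threads would thrash it
```

The same arithmetic is why the post argues a 6 GHz single core needs the same cache as a 3 GHz dual core for a given threaded load: the number of live working sets, not the clock speed, sets the cache requirement.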

It wouldn't be such a big deal if current quad cores weren't superior to dual cores in almost all respects save clock speed. With i5 looming and i7 and Phenom II already out, I personally wouldn't consider anything in the Core 2 series for the future. The affordable ones won't survive the many-threaded world, and the ones with more cache are too expensive to be worth considering. The point is, the heavily multi-threaded world is coming, it REQUIRES tons of cache, and in order to get that you basically have to buy a quad core anyway, which will then give you additional compute resources.

BTW, I've got a Core 2 Duo (2MB cache, overclocked to 3.3 GHz) and a 2.6 GHz Phenom (original) X3. The Core 2 Duo benches way faster, but the Phenom system is much more enjoyable to use: less stuttering, no hanging up when running multiple intensive tasks, and smoother games (but a much lower average framerate). And I can run GTA4 on the Phenom system, but I can't on the Core 2 Duo system. The Phenom is in my media center, and it's nice to be able to simultaneously burn a DVD, encode video, and watch a movie without any pause or stuttering.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Well said Fox5. My old Pentium M running at 2.70GHz scored higher in WinRAR than my brother-in-law's Pentium E2180 at stock speeds; in games the Pentium E2180 wasn't any faster, and the system actually stuttered more often than my old one.

Also, I have to add that the low end Core 2 Duo and Pentium Exxx series have a shared cache design, which means there's a greater chance of running out of cache in a multi-threaded scenario where the threads fight for cache space; that doesn't happen with AMD CPUs, where each core's L2 cache is exclusive (the Phenom I and II have a shared L3 cache, but it's big enough to be efficient).

I can also attest that there are other performance issues: when you compress big files with WinRAR for a while, even though hard drive usage is quite low, the system doesn't feel snappy at all; that's something that doesn't happen with the AMD Phenom. It seems the northbridge is the culprit; thank God Intel got rid of it.
 