Again, what is wrong with synthetic benches? In most cases, they are designed to simulate real world applications.
Ok, let's look at the benches you linked, starting with the Agisoft PhotoScan benchmark, which is used six times, with four subscores averaged into a total score; that's a way to extract five favourable benches out of a single one.
Then let's look at the multithreaded apps that are also tested single-threaded, like Cinebench. Do you use Cinema 4D or any other rendering engine in single-threaded mode?
No, but that's yet another way to set a favourable, although irrelevant, bar.
Then we can finish with my favourite, 3D Particle Movement.
Is that a real-world bench?
I don't know, but what is sure is that the Intel CPUs run code up to SSE3 while the AMD ones are stuck on x87; the proof is that my Athlon XP, which has no SSE2, runs this bench with better single-thread IPC than the current AMD CPUs.
What is left of this glorious series of benches?
Cinebench MT, WinRAR 5.01, H.264, Agisoft for the total score, and so on: all benches that mirror the ones I linked at hardware.fr...
Again, what is wrong with synthetic benches? In most cases, they are designed to simulate real world applications.
I'm not talking about Sysmark. I would like to point out that one of the things computers are best at is simulating. It is asinine to write off synthetic benchmarks as a whole, considering half of what computers do is simulate.

So why not use real-world apps directly? There are plenty. That said, you should know that Sysmark is not credible at all; a Google search will tell you why. I won't digress further, out of respect for the thread's intended topic.
On the contrary, I think it points out multi-GPU performance quite nicely -- multi-GPU configurations are inherently buggy, and lack widespread support. Multiple GPUs are only applicable to a relatively small number of applications.

Look at CFX/CF synthetic results and try to type that again. They absolutely do not replicate real-world performance.
They can be useful, but they will always be inferior (IMHO) to real-world tests. They are a lot easier to perform, though, so they are an easy comparison to jump to.
3DPM was written by Ian Cutress himself for his computational chemistry work.
http://www.anandtech.com/show/6533/...essor-motherboard-through-a-scientists-eyes/9
I have no idea about compiler options and the like, but this is industry-standard code (at the pseudocode level).
This code has been used to write scientific research papers that have been peer reviewed.
http://www.sciencedirect.com/science/article/pii/S1572665711001068
http://scholar.google.ca/scholar?q=ian+cutress+particle+movement&btnG=&hl=en&as_sdt=0,5
(not sure if you can access it)
It describes the Cosine Method.
This is real-world code. It's just not code that is useful to the average person, but it is a very real indication of general performance for self-written, small-scale scientific calculations.
I'm not talking about Sysmark. I would like to point out that one of the things computers are best at is simulating. It is asinine to write off synthetic benchmarks as a whole, considering half of what computers do is simulate.
It uses up to SSE3 and Intel's MKL; the AMD chips are not running the same code path as the Intels, they fall back to x87. That's why Piledriver has much less IPC in this test than Phenom. The final hint is that Bay Trail has better IPC than both the FX and Kabini in this test, yet Kabini has 30% better FP IPC than Bay Trail. That's really too many discrepancies.
The code provided detects whether the processor is SSE2 or SSE4 capable, and implements the relative code. We run a simulation of 10240 particles of equal mass - the output for this code is in terms of GFLOPs, and the result recorded was the peak GFLOPs value.
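For what it's worth, the kind of kernel being described is easy to sketch. The following toy Python version only illustrates the structure (random polar/azimuthal steps on a sphere, a rough per-step flop count); the 10240-particle figure comes from the quote above, everything else is assumed:

```python
import math
import random

def particle_movement(n_particles=10240, steps=100, seed=42):
    """Toy sketch of a 3DPM-style kernel: each particle takes random
    steps on the unit sphere. The flop count is a rough illustration,
    not the real benchmark's accounting."""
    rng = random.Random(seed)
    flops = 0
    positions = []
    for _ in range(n_particles):
        x = y = z = 0.0
        for _ in range(steps):
            theta = rng.uniform(0.0, math.pi)      # polar angle
            phi = rng.uniform(0.0, 2.0 * math.pi)  # azimuthal angle
            x += math.sin(theta) * math.cos(phi)
            y += math.sin(theta) * math.sin(phi)
            z += math.cos(theta)
        flops += steps * 12  # assumed ~12 FP ops per step
        positions.append((x, y, z))
    return positions, flops

positions, flops = particle_movement(n_particles=64, steps=10)
print(len(positions), flops)  # 64 7680
```

A real GFLOPs figure would divide a flop count like this by wall-clock time; the quoted benchmark records the peak of that value.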
I'm not talking about Sysmark. I would like to point out that one of the things computers are best at is simulating. It is asinine to write off synthetic benchmarks as a whole, considering half of what computers do is simulate.
On the contrary, I think it points out multi-GPU performance quite nicely -- multi-GPU configurations are inherently buggy, and lack widespread support. Multiple GPUs are only applicable to a relatively small number of applications.
I'm assuming that's what you're talking about.
No
http://www.anandtech.com/show/6808/westmereep-to-sandy-bridgeep-the-scientist-potential-upgrade/4
No memory access either (or very little).
This code is pure FP. What did you expect from Piledriver, with its one FPU per two cores?
You went from six FPUs in the Phenom II X6 to four in Bulldozer.
No, it's not competitive. Stop cherry picking already.
http://anandtech.com/bench/product/551?vs=697
Welcome to the real world.
You cannot be serious here...
All it does is paint an inaccurate real-world picture of what your hardware can do. When your 3DMark score is 90% higher with a second card but you only get 40% more FPS, it is absolutely NOT representative of actual performance.
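Putting numbers on that gap is trivial; the 90%/40% figures are the ones from the post above, and the helper function is purely illustrative:

```python
def scaling_gap(synthetic_gain, real_gain):
    """Fraction of the synthetic multi-GPU gain that actually shows up
    in real-world FPS (1.0 = the synthetic score told the truth)."""
    return real_gain / synthetic_gain

# 3DMark: +90% with a second card; actual games: +40% FPS.
print(scaling_gap(0.90, 0.40))  # ~0.44: less than half the promised gain materialises
```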
Next, you are going to say throughput is the be-all and end-all of performance too? LOL
If it were pure FP, or the same FP code for everybody, then Bay Trail wouldn't have better IPC in this test than Kabini or Piledriver.
In particular, find me an FP bench where Kabini has lower IPC than Bay Trail. In Cinebench the IPC difference is 25-30%, in POV-Ray it is 35%. Another hint: the FX-8350 performs better in FP than an X6, except with x87 code. See below:
But that's incredible: this J1900 is literally a rocket in 3D Particle Movement. It's better than a Piledriver 5800K, with close to twice the IPC, and of course much better than the Athlon 5350... Seriously??
This benchmark is wholly memory independent – by generating random numbers on the fly, each thread can keep the position of the particle and the random number values in local cache.
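In other words, all per-particle state lives in thread-local variables, so a compiled version never has to leave registers/L1 cache. A minimal Python sketch of that structure (the real benchmark is compiled code; the thread and step counts here are arbitrary):

```python
import math
import random
import threading

def worker(seed, steps, results, idx):
    # RNG state and particle position are locals: the only shared-memory
    # write is the single result store at the end.
    rng = random.Random(seed)
    x = y = z = 0.0
    for _ in range(steps):
        theta = rng.uniform(0.0, math.pi)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += math.sin(theta) * math.cos(phi)
        y += math.sin(theta) * math.sin(phi)
        z += math.cos(theta)
    results[idx] = (x, y, z)

results = [None] * 4
threads = [threading.Thread(target=worker, args=(i, 1000, results, i))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(all(r is not None for r in results))  # True
```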
If you go back to the P4 vs. A64 days, you will see the products were pretty competitive from a synthetics standpoint. When you looked at real-world performance, however, the A64 was the clear winner. This is a great example of where synthetics really paint an inaccurate picture.
I can link to a TON of reviews that support exactly this.
Since everyone loves car analogies: relying solely on synthetic benchmarks is like comparing two cars strictly on dyno results, weight, and 0-60 times to gauge which will be faster around a track. You can make reasonable guesses, but you need actual lap times and constant variables to get to the truth.
No competition = no innovation, no price wars, no real comparisons, etc.
Thats not true, see my signature.
Thats not true, see my signature.
And we haven't had cheaper CPUs since Core 2.
Competition only works up to a point before it becomes a problem. In other sectors, competition has ended in a race to the bottom. That's why, for example, you eat mechanically separated meat; if you saw it being made, you would puke.
In the case of AMD and Intel, it's all about R&D. GPUs are going the same way with AMD and NVIDIA: R&D-wise, there is just no room for two players due to ever-increasing costs. Intel is going into mobile/tablets for the same reason. Either you increase volume or you cut back on R&D.
Then Intel has a failed CPU design-team structure, according to your flawed point of view, as both CPU design teams compete with each other and constantly discard the innovations the other team made (the drop of the FIVR in Skylake after it was adopted in Haswell/Broadwell; separate core and L3 cache clocks in Nehalem, only to adopt a synchronous L3 cache clock with Sandy Bridge, only to adopt separate core and L3 clocks again in Haswell). That makes the R&D expenditure behind those innovations somewhat wasted, because they usually don't last more than one uarch redesign before the other team changes them again.
Pro tip: the statement above only proves that competition is good, even inside a company, because not only does it drive Intel's CPU design teams to innovate with each uarch redesign, it also makes the R&D expenditure useful, since those innovations can be reused within the company. I know you worship IDC as if he were a prophet of the semiconductor industry, but his comment completely disregarded the competition a company can have within itself, and that very competition between Intel's two best design teams is what drove the company forward in x86 CPU uarch design.
I agree with you, but what happens when the R&D you have already done is sufficient to make money?
I mean, if AMD were at Intel's heels, both companies would need to invest in R&D to gain an advantage over the other. The downside is the "race to the bottom", as you said. But do you think Intel invests all it can in R&D, or just what is necessary to stay on top?
I have a bridge that I really want you to look at it...
Sorry, couldn't resist, that just became my new sig.
Who cares? Why would anyone bother to optimize for AMD over Intel? It'd be illogical to do so.
Gee, I don't know -- maybe because every current game console is running AMD hardware.... That was a priceless quote, though.
Even then, it would be optimized for Jaguar and not the big cores.
Also, I think we have already seen enough console ports to know there is no magic waiting. Even an old, outdated i3 beats most AMD products.