Fudzilla: Bulldozer performance figures are in


LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
How terrible would it be if the two CPUs in the content creation benchmark above the hypothesized 2500K were the 2600K and... the 1100T?

There are a few instances in multi-threaded applications where the 1100T is 5% faster or so than the Core i5-2500K.
 

mosox

Senior member
Oct 22, 2010
434
0
0
In full HD and with a good video card the difference between the 2500K and Ph II X4 is rather small.

Anand's benches are misleading for many users because they're made to "emphasize" the CPU as they stated more than once. Smaller resolutions, no quality settings. Never use their CPU reviews for benches.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
In full HD and with a good video card the difference between the 2500K and Ph II X4 is rather small.

Anand's benches are misleading for many users because they're made to "emphasize" the CPU as they stated more than once. Smaller resolutions, no quality settings. Never use their CPU reviews for benches.

You're right.

In some games the differences will be small, in others huge. Most are in the middle, especially if you look at min. framerate. Give a Phenom II X4 a high-end GPU and it'll quickly become a bottleneck.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Simulators and RTS games always need higher-performance CPUs; FPS games need GPUs most of the time.

It depends on the game: a Phenom II X4/X6 could produce almost the same FPS at 1080p and up as Core i3/5/7 CPUs, but the Phenom II will be a memory of the past very soon.

Edit: Turn AA/AF filters on and the difference in FPS in F1 2010 will be smaller; both CPUs (Phenom & Core i) will be closer.
 
Last edited:

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Simulators and RTS games always need higher-performance CPUs; FPS games need GPUs most of the time.

It depends on the game: a Phenom II X4/X6 could produce almost the same FPS at 1080p and up as Core i3/5/7 CPUs, but the Phenom II will be a memory of the past very soon.

Edit: Turn AA/AF filters on and the difference in FPS in F1 2010 will be smaller; both CPUs (Phenom & Core i) will be closer.

At this point you may as well ask for everything to be at 2560x1600. Yes, the Core i5 is noticeably better in gaming than the Phenom II X6, even with a graphics card that's at the beginning of the high end: a Radeon HD 6870.

Adding AA/AF only pushes higher VRAM usage for the most part. It doesn't change the fact that there's a CPU limitation.

Anyone claiming that a Phenom II X4 is sufficient to keep a Radeon HD 6950 or higher fed is in a bit of denial, not to mention the Core i3 is better than it in gaming.
 

Vesku

Diamond Member
Aug 25, 2005
3,743
28
86
Let's try to keep these vaporous posts at least focused on Bulldozer, please.
 

jpiniero

Lifer
Oct 1, 2010
14,845
5,457
136
How terrible would it be if the two CPUs in the content creation benchmark above the hypothesized 2500K were the 2600K and... the 1100T?

My guess for the Content Creation:

112 = 2600K
102.3 = FX 8150
100 = 2500K
98.5 = FX 8120
95.9 = 2400
94.8 = FX 8100
90.2 = X6 1100T
87.4 = FX 6100
77.3 = X4 980
77.1 = FX 4100
71.2 = A8-3850
64.5 = i3 2100

I compared the benchmarks mentioned with Anand's Bench to help me shape the order. Keep in mind that I think the 8150 is going to be a paper launch, and maybe the 8120 as well.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Last edited:

inf64

Diamond Member
Mar 11, 2011
3,765
4,223
136
There are a few instances in multi-threaded applications where the 1100T is 5% faster or so than the Core i5-2500K.
Few instances?? How about a few facts. I posted this a while ago, on XS. I will highlight the instances where the 1100T is faster than the 2500K (in bold).

informal said:
Let's see what real numbers show us:
http://www.hardwarecanucks.com/foru...i5-2500k-core-i7-2600k-processors-review.html

2600K/2500K vs 1100T
AIDA64 Extreme Edition: Mixed bag, some tests are won by the 2600K, some by the 1100T, and sometimes the 2500K is faster than the 1100T. Synthetic benchmark.

SuperPi Mod v1.5: We all know the results in this one, the 2600K/2500K is faster. Obsolete FP benchmark.

wPRIME 2.03: The 2600K is a hair faster than the 1100T. The 2500K is much slower than the 1100T (40%). Synthetic benchmark, good MT support.

Cinebench R10: The 2600K is solidly faster than the 1100T, the 2500K is slightly faster than the 1100T (6%). 3D rendering benchmark, uses an obsolete engine now.

Cinebench R11.5: The 2600K is solidly faster than the 1100T (16%), the 2500K is slower than the 1100T by 8%. 3D rendering benchmark, uses a newer engine with better MT support.

Deep Fritz 12 - Chess Benchmark: The 2600K is faster than the 1100T (12%), the 2500K is slower than the 1100T (14%). Benchmark uses a chess engine, good MT support.

POV-Ray 3.73 beta 40: The 2600K is faster than the 1100T (9%), the 2500K is slower than the 1100T (14%). A ray-tracing program, good MT support.

Intel High Definition Experience and Performance Ratings Test 2010: The 2600K and 2500K are much faster than the 1100T (40%). This is Intel's own benchmarking software, so the difference is not surprising (they probably use ICC and generate suboptimal code paths).

PCMark Vantage x64: The 2600K and 2500K are faster than the 1100T by 14% (and they are faster than the 980X Gulftown for that matter!). Semi-synthetic benchmark suite. Limited MT support.

Lame Front End: The 2600K and 2500K are faster than the 1100T by 25-28% (and anything else from Intel too). Single-threaded application.

Photoshop CS4: The 2600K and 2500K are faster than the 1100T by 40+% (and anything else from Intel too, including the 980X). Not a well-threaded real-world application.

x264 HD Benchmark: The 2600K is solidly faster than the 1100T (17%), the 2500K is slower than the 1100T (11%). Well-threaded real-world application.

WinRAR: Both the 2600K and 2500K are much faster than the 1100T. The compression test uses 1-2 cores, so not well MT-aware.

7-Zip: Both the 2600K and 2500K are faster than the 1100T (14%), but the 1100T is placed 3rd, well ahead of the 980X Gulftown. The test uses AES-256 encryption with AES-NI hardware support in the latest Intel CPUs (the 1100T has no such support yet performs well).

Futuremark 3DMark06: Both the 2600K and 2500K are faster than the 1100T (the 2500K just slightly). Semi-synthetic test, has some support for multicore.

Futuremark 3DMark Vantage: The 2600K is noticeably faster than the 1100T, the 2500K is practically on par with the 1100T. Semi-synthetic test, has some support for multicore.

Games: Generally in games, the 2600K and 2500K are somewhat faster, but the advantage varies a lot from very low to somewhat high (percentage-wise). Most games are poorly threaded, with thread support ranging from 2 to 4 max.

As can be seen from the hard numbers, whenever you have good MT support, the 1100T is faster than the 2500K. In single- and poorly-threaded workloads the 2500K is faster, but not always and not by a big margin. The 2600K is usually noticeably faster, by more than 15% in many workloads, but it costs a lot more.

As you can see, whenever you have a real-world workload with good (>4 threads) MT support, the 2500K is slower than the 1100T, both at stock. This is fact.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,422
1,759
136
The decoder is now 4-wide an increase from the 3-wide front end that AMD has had since the K7 all the way up to Phenom II. AMD can now fuse x86 branch instructions, similar to Intel’s macro-ops fusion to increase the effective width of the machine as well. At a high level, AMD’s front end has finally caught up to Intel, but here’s where AMD moves into the passing lane.
The 4-wide decode engine feeds three independent schedulers: two for the integer cores and one for the shared floating point hardware.

There should be a loss compared to having it dedicated with another ALU, and many design elements are now integrated. However, I doubt this would be anywhere near the same performance penalty as from the shared FPU.

The big remaining question is whether the frontend is split statically (every even cycle it decodes for core 0, every odd cycle for core 1) or dynamically (when core 0 has stalled, or simply has more insns already in its buffer, decode for core 1 instead, and vice versa).

The distinction is huge -- a statically split decoder would be much cheaper and simpler to make, but would essentially be like a 2-wide decoder for each core, and that would suck. But because the decoder both can (and should!) decode much work in advance, and stalls often, a dynamically split 4-wide is effectively much wider than 2-wide, and can often (and exactly in the situations where it matters) be effectively wider than the 3-wide that the Phenoms have.
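To put a number on that, here's a minimal Python sketch (nothing in it is based on disclosed Bulldozer internals; the stall rate and both allocation rules are invented) comparing a static, alternating split of a shared 4-wide decoder against a dynamic, demand-driven one:

```python
# Toy model of a shared 4-wide decoder feeding two cores. Purely illustrative:
# the stall probability, the 50/50 split when both cores are ready, and the
# allocation rules are invented assumptions, not AMD's actual design.
import random

WIDTH = 4          # decode slots per cycle
CYCLES = 100_000
STALL_PROB = 0.3   # chance a core has nothing to decode this cycle (fetch miss, etc.)

def run(policy):
    random.seed(42)
    decoded = [0, 0]
    for cycle in range(CYCLES):
        ready = [random.random() > STALL_PROB for _ in range(2)]
        if policy == "static":
            # Core 0 owns even cycles, core 1 owns odd cycles; a stalled owner wastes the slot.
            owner = cycle % 2
            if ready[owner]:
                decoded[owner] += WIDTH
        else:
            # Dynamic: the whole width goes to a ready core; split it if both are ready.
            if ready[0] and ready[1]:
                decoded[0] += WIDTH // 2
                decoded[1] += WIDTH // 2
            elif ready[0]:
                decoded[0] += WIDTH
            elif ready[1]:
                decoded[1] += WIDTH
    return sum(decoded) / CYCLES

for policy in ("static", "dynamic"):
    print(f"{policy:8s}: {run(policy):.2f} instructions decoded per cycle")
```

With these made-up parameters the dynamic policy decodes roughly 30% more per cycle, simply because slots are never spent on a core that has nothing to decode.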
 

inf64

Diamond Member
Mar 11, 2011
3,765
4,223
136
The distinction is huge -- a statically split decoder would be much cheaper and simpler to make, but would essentially be like a 2-wide decoder for each core, and that would suck. But because the decoder both can (and should!) decode much work in advance, and stalls often, a dynamically split 4-wide is effectively much wider than 2-wide, and can often (and exactly in the situations where it matters) be effectively wider than the 3-wide that the Phenoms have.
The decode stage is organized as vertical multithreading, just like the FP scheduler, which receives a single thread from the front end but executes it via SMT on the 2 FMACs (across 2 threads). Every odd cycle the threads in the FP scheduler can switch (core 0, core 1). Effectively, for FP in an MT workload, each core gets 1 FMAC per cycle (in reality each core gets 2 FMACs every other cycle, which is practically the same).
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
As you can see, whenever you have a real-world workload with good (>4 threads) MT support, the 2500K is slower than the 1100T, both at stock. This is fact.

That's funny because the benches you highlighted such as x264, Cinebench, Pov-Ray, Deep Fritz 12, wPrime, SuperPi, PCMark Vantage, 3d Mark are not "real world" to me. I don't use or play any of these on a daily or even weekly basis. I use SuperPi to measure IPC improvements from my previous processors but I do not use SuperPi as an actual program! Sure I might run 3D/PCMark, Cinebench, Deep Fritz for kicks and giggles but again I don't actually use these in real life.

From the programs you listed, I use LAME MP3 encoding for my FLAC files, play games, and use WinRAR over 7-Zip. So for me personally, the 2500K wins every single time. I also run at least 40-50 tabs while browsing the internet, so I prioritized $100 towards an SSD over the HT that the 2600K offers, because that fit my personal needs for that task better at the time.

Moral of the story: people should buy the processor that performs best in the programs they personally use.

For example, if you render 8-10 hours a day or do a bunch of home video encoding, then FX-8150 is going to be a great CPU.

Strategy and role-playing games accounted for >50% of all PC game sales in 2010, while FPS games accounted for about 10%. I expect the share of the FPS/action genre to rise in 2011 due to the release of some AAA titles such as Rage and BF3. However, in the next 2-4 years, two more expansions of SC2 will come out, plus Diablo 3. I am sorry, but those are HUGE games for the gaming community. Add WoW into the mix too (extremely CPU limited). A CPU that can't perform well in these dual-threaded game engines that require extremely fast per-core IPC is a deal breaker out of the gate for many people.

For gamers, there have been plenty of games that eat slower CPUs for breakfast even at 1920x1080.

The point is, AMD already tried selling 955/965 BE CPUs against Intel's core i3s and X6 vs. i5s. They have not been able to gain market share using the strategy of selling more slower cores. It will be interesting to see if this changes once AMD throws 8 cores vs. Intel's 4. The greatest threat to Bulldozer is that IVB is going to launch in Spring of 2012.

Another point: sure, in the short-term, AMD may be competitive by putting 8 cores with 3.9ghz Turbo vs. Intel's 4/4+HT. But what in the world is AMD going to do in 2013 when Haswell brings 6 and 8 core CPUs? I can actually see Haswell finally bringing us a 6-core CPU at 4.0ghz+ for $300 by 2013. Is AMD going to announce a 4.0ghz 12-16 core CPU to stay in the game? They have just 2 years to counter this. AMD's strategy is extremely risky since it focuses almost entirely on the idea that increasing performance is going to come from more cores. Eventually, I believe they will have to address their IPC issue, because I really don't see programs using 12-16 threads by 2013-2014.
 
Last edited:

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Few instances?? How about a few facts. I posted this a while ago, on XS. I will highlight the instances where the 1100T is faster than the 2500K (in bold).



As you can see, whenever you have a real-world workload with good (>4 threads) MT support, the 2500K is slower than the 1100T, both at stock. This is fact.

Sure...

The truth is, they're overall tied in multi-threaded workloads. In anything that's not, the Phenom II X6 is left in the dust.
 

Tuna-Fish

Golden Member
Mar 4, 2011
1,422
1,759
136
The decode stage is organized as vertical multithreading, just like the FP scheduler, which receives a single thread from the front end but executes it via SMT on the 2 FMACs (across 2 threads). Every odd cycle the threads in the FP scheduler can switch (core 0, core 1).

Can, or have to?
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
FP can issue 4 ops from a single Thread per cycle but can execute and retire 4 ops of any thread.

Cycle 1 = FP Scheduler issues 4 ops from Thread 1
Cycle 2 = FP Scheduler issues 4 ops from Thread 2

Cycle 3 = FP can execute 4 ops from Thread 1, or 4 ops from Thread 2, or a mix of 4 ops from Threads 1 and 2.

I believe the Front End can handle both threads in the same cycle.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
The point is, AMD already tried selling 955/965 BE CPUs against Intel's core i3s and X6 vs. i5s. They have not been able to gain market share using the strategy of selling more slower cores. It will be interesting to see if this changes once AMD throws 8 cores vs. Intel's 4. The greatest threat to Bulldozer is that IVB is going to launch in Spring of 2012.

Another point: sure, in the short-term, AMD may be competitive by putting 8 cores with 3.9ghz Turbo vs. Intel's 4/4+HT. But what in the world is AMD going to do in 2013 when Haswell brings 6 and 8 core CPUs? I can actually see Haswell finally bringing us a 6-core CPU at 4.0ghz+ for $300 by 2013. Is AMD going to announce a 4.0ghz 12-16 core CPU to stay in the game? They have just 2 years to counter this. AMD's strategy is extremely risky since it focuses almost entirely on the idea that increasing performance is going to come from more cores. Eventually, I believe they will have to address their IPC issue, because I really don't see programs using 12-16 threads by 2013-2014.


Glofo 22nm SOI HKMG

 

Tuna-Fish

Golden Member
Mar 4, 2011
1,422
1,759
136
FP can issue 4 ops from a single Thread per cycle but can execute and retire 4 ops of any thread.

Cycle 1 = FP Scheduler issues 4 ops from Thread 1
Cycle 2 = FP Scheduler issues 4 ops from Thread 2

Cycle 3 = FP can execute 4 ops from Thread 1, or 4 ops from Thread 2, or a mix of 4 ops from Threads 1 and 2.

I believe the Front End can handle both threads in the same cycle.

You are still not answering the question.

Cycle 1 = FP Scheduler issues 4 ops from Thread 1
Cycle 2 = FP Scheduler issues 4 ops from Thread 2.

The big question is, if the queue for Thread 1 is now empty, can cycle 3 be:
Cycle 3 = FP Scheduler issues 4 ops from Thread 2.

And the same for the front end. Static partition would be much cheaper, and dynamic would require much more logic, but would make the effective throughput of the frontend much higher. Think of the case of a branch miss -- on cycle 1, core 0 executes the test+branch instruction and figures out that it has missed a few cycles ago. For the next few cycles, it doesn't have any instructions to decode -- think one cycle for the address, and 3-4 for fetch. During that time, the decode logic can freely decode 4 insns per cycle for core 1. Once the fetch for core 0 arrives, the buffers for core 1 should have plenty of headroom. This means that core 0 can now hog all the decoding resources for a few cycles -- and this is awesome because OOE needs instructions in buffers to keep the units fed, so 4(+1) wide dedicated decode would give the core 0 a running start after the miss.

And this is only for a branch miss. Missing L1 would give the other thread enough decode resources to almost certainly fill any buffers it has. That would give the recovering thread potentially 6 or so cycles of running on super-decode.

As far as I know, AMD has not spoken of how the decoder is shared anywhere. We'll see soon enough.
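Purely for illustration, here is a rough cycle-by-cycle trace of that branch-miss case, assuming a dynamically shared decoder; the fetch latency, buffer depth, drain rate and "feed the emptier buffer" rule are all guesses rather than anything AMD has published:

```python
# Toy trace of the branch-miss case described above, assuming a dynamically
# shared 4-wide decoder. FETCH_LATENCY, BUFFER_CAP, the 2-ops/cycle drain rate
# and the "feed the emptier ready buffer" rule are illustrative guesses only.
WIDTH = 4
BUFFER_CAP = 16            # hypothetical per-core decoded-op buffer
FETCH_LATENCY = 5          # ~1 cycle for the address + 3-4 cycles for fetch

buf = [8, 8]               # decoded ops already buffered per core
core0_ready_again = 1 + FETCH_LATENCY   # core 0 mispredicts on cycle 1

for cycle in range(1, 12):
    ready = [cycle >= core0_ready_again, True]   # core 1 always has bytes to decode here
    # Dynamic rule: give the whole width to the ready core whose buffer is emptier.
    candidates = [c for c in (0, 1) if ready[c] and buf[c] < BUFFER_CAP]
    target = min(candidates, key=lambda c: buf[c]) if candidates else None
    if target is not None:
        buf[target] = min(BUFFER_CAP, buf[target] + WIDTH)
    buf = [max(0, b - 2) for b in buf]            # each core's backend drains ~2 ops/cycle
    who = f"core {target}" if target is not None else "nobody"
    print(f"cycle {cycle:2d}: decode -> {who:7s} buffers={buf}")
```

The trace only shows the shape of the argument: while core 0 waits on the refetch, core 1 banks decoded ops, and once core 0's bytes arrive it gets the full 4-wide decoder to itself for several cycles.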
 

inf64

Diamond Member
Mar 11, 2011
3,765
4,223
136
That's funny because the benches you highlighted such as x264, Cinebench, Pov-Ray, Deep Fritz 12, wPrime, SuperPi, PCMark Vantage, 3d Mark are not "real world" to me. I don't use or play any of these on a daily or even weekly basis. I use SuperPi to measure IPC improvements from my previous processors but I do not use SuperPi as an actual program! Sure I might run 3D/PCMark, Cinebench, Deep Fritz for kicks and giggles but again I don't actually use these in real life.

From the programs you listed, I use Lame MP3 encoding for FLAC files, play games, and use WinRAR over 7-zip. So for me personally, the 2500k wins every single time. I also run at least 40-50 tabs while browsing the internet, so I prioritized $100 towards an SSD over HT that 2600k offers because that fit my personal needs better for that task at the time.

Moral of the story: people should buy the processor that performs best in the programs they personally use.
I didn't highlight SuperPi. x264 is a free, MT-aware software codec for encoding video into the H.264/MPEG-4 format. Cinebench uses a real-world 3D engine. wPrime is representative of a threaded FP SIMD workload. POV-Ray is a real-world ray-tracing engine. Deep Fritz is a real-world chess program that is optimized for MT.

For your usage pattern, the 2500K is clearly the better chip. You need a few very fast threads. A 2600K would also be overkill.

I agree, people should choose according to their own needs.
Can, or have to?

From what I understand it can, but it doesn't have to. It all depends on the instruction stream. If there is nothing coming from core 1, then core 0 can execute across the whole FlexFP unit all day long.
You are still not answering the question.

Cycle 1 = FP Scheduler issues 4 ops from Thread 1
Cycle 2 = FP Scheduler issues 4 ops from Thread 2.

The big question is, if the queue for Thread 1 is now empty, can cycle 3 be:
Cycle 3 = FP Scheduler issues 4 ops from Thread 2.

And the same for the front end. Static partition would be much cheaper, and dynamic would require much more logic, but would make the effective throughput of the frontend much higher. Think of the case of a branch miss -- on cycle 1, core 0 executes the test+branch instruction and figures out that it has missed a few cycles ago. For the next few cycles, it doesn't have any instructions to decode -- think one cycle for the address, and 3-4 for fetch. During that time, the decode logic can freely decode 4 insns per cycle for core 1. Once the fetch for core 0 arrives, the buffers for core 1 should have plenty of headroom. This means that core 0 can now hog all the decoding resources for a few cycles -- and this is awesome because OOE needs instructions in buffers to keep the units fed, so 4(+1) wide dedicated decode would give the core 0 a running start after the miss.

And this is only for a branch miss. Missing L1 would give the other thread enough decode resources to almost certainly fill any buffers it has. That would give the recovering thread potentially 6 or so cycles of running on super-decode.

As far as I know, AMD has not spoken of how the decoder is shared anywhere. We'll see soon enough.

AFAIK the decode stage works exactly how you described it. This way AMD manages to optimize both throughput and power consumption. For more details you can check the HC22 video in which chief architect Mike Butler describes how Bulldozer works. The video can be found here:
http://www.hotchips.org/conference-archives/hot-chips-22
Choose session 7 and fast-forward to the Bulldozer presentation. There is a Q&A session at the end, don't miss it.
 
Last edited:

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
You are still not answering the question.

Cycle 1 = FP Scheduler issues 4 ops from Thread 1
Cycle 2 = FP Scheduler issues 4 ops from Thread 2.

The big question is, if the queue for Thread 1 is now empty, can cycle 3 be:
Cycle 3 = FP Scheduler issues 4 ops from Thread 2.

This has been answered by AMD. One core can make use of the full FPU, i.e. one thread can access the FP execution resources every cycle if needed.

Of course, if there are two threads, they will share the FPU, each one being allocated every other cycle.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
You are still not answering the question.

Cycle 1 = FP Scheduler issues 4 ops from Thread 1
Cycle 2 = FP Scheduler issues 4 ops from Thread 2.

The big question is, if the queue for Thread 1 is now empty, can cycle 3 be:
Cycle 3 = FP Scheduler issues 4 ops from Thread 2.

And the same for the front end. Static partition would be much cheaper, and dynamic would require much more logic, but would make the effective throughput of the frontend much higher. Think of the case of a branch miss -- on cycle 1, core 0 executes the test+branch instruction and figures out that it has missed a few cycles ago. For the next few cycles, it doesn't have any instructions to decode -- think one cycle for the address, and 3-4 for fetch. During that time, the decode logic can freely decode 4 insns per cycle for core 1. Once the fetch for core 0 arrives, the buffers for core 1 should have plenty of headroom. This means that core 0 can now hog all the decoding resources for a few cycles -- and this is awesome because OOE needs instructions in buffers to keep the units fed, so 4(+1) wide dedicated decode would give the core 0 a running start after the miss.

And this is only for a branch miss. Missing L1 would give the other thread enough decode resources to almost certainly fill any buffers it has. That would give the recovering thread potentially 6 or so cycles of running on super-decode.

As far as I know, AMD has not spoken of how the decoder is shared anywhere. We'll see soon enough.

2.7 Instruction Fetch and Decode
AMD Family 15h processors can theoretically fetch 32B of instructions per cycle and send these instructions to the Decode Unit (DE) in 16B windows through the 16-entry (per-thread) Instruction Byte Buffer (IBB). The Decode Unit can only scan two of these 16B windows in a given cycle for up to four instructions. If four instructions partially or wholly exist in more than two of these windows, only those instructions within the first and second windows will be decoded. Aligning to 16B boundaries is important to achieve full decode performance.

FPU Features Summary and Specifications:
• The FPU can receive up to four ops per cycle. These ops can only be from one thread, but the thread may change every cycle. Likewise the FPU is four wide, capable of issue, execution and completion of four ops each cycle. Once received by the FPU, ops from multiple threads can be executed.
• Within the FPU, up to two loads per cycle can be accepted, possibly from different threads.
• There are four logical pipes: two FMAC and two packed integer. For example, two 128-bit FMAC and two 128-bit integer ALU ops can be issued and executed per cycle.
• Two 128-bit FMAC units. Each FMAC supports four single-precision or two double-precision ops.
• FADDs and FMULs are implemented within the FMACs.

It says the thread MAY change every cycle, it doesn't have to change,

so it could very well be,

Cycle 3 = FP Scheduler issues 4 ops from Thread 2.
Cycle 4 = FP Scheduler issues 4 ops from Thread 2.
.
.
.
.
.
.
Cycle 5684 = FP Scheduler issues 4 ops from Thread 2.

until we have Thread 1 again.
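A small Python sketch of that reading: each cycle the FPU accepts up to 4 ops from exactly one thread (alternating only while both have work), and execution/retire can then mix threads. The alternation heuristic and the op counts are my own assumptions, not a documented policy:

```python
# Toy model of the FlexFP dispatch/execute behaviour as described in the quoted
# doc text: up to 4 ops per cycle enter the FPU from ONE thread (the thread may
# change every cycle), while execution can mix ops from both threads. The
# "alternate while both have work" heuristic is an assumption for illustration.
from collections import deque

def simulate(thread_ops):
    # Pending FP ops per thread, e.g. thread_ops=(12, 4) -> thread 0 has 12, thread 1 has 4.
    queues = [deque(range(n)) for n in thread_ops]
    fpu = []                                         # ops already handed to the FPU
    last = 1
    cycle = 0
    while any(queues) or fpu:
        cycle += 1
        # Dispatch: up to 4 ops, from exactly ONE thread this cycle.
        if queues[0] and queues[1]:
            pick = 1 - last                          # alternate while both threads have work
        elif queues[0] or queues[1]:
            pick = 0 if queues[0] else 1             # otherwise keep taking the non-empty one
        else:
            pick = None
        if pick is not None:
            last = pick
            for _ in range(min(4, len(queues[pick]))):
                fpu.append((pick, queues[pick].popleft()))
        # Execute/retire: up to 4 ops per cycle, from any mix of threads.
        retired, fpu = fpu[:4], fpu[4:]
        print(f"cycle {cycle}: dispatched from thread {pick}, retired {retired}")

simulate((12, 4))   # thread 0 has plenty of FP work, thread 1 runs dry quickly
```

With these inputs the dispatcher alternates for the first two cycles and then keeps feeding thread 0 every cycle once thread 1's queue is empty, which is exactly the "may change, doesn't have to" behaviour described above.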
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
This has been answered by AMD. One core can make use of the full FPU, i.e. one thread can access the FP execution resources every cycle if needed.

Of course, if there are two threads, they will share the FPU, each one being allocated every other cycle.
Noooo!!!11!!1one1!

Marketing language doesn't suit explaining an OOO scheduler

Dispatch (AMD meaning: give decoded uops to the FPU) is one thread per cycle, as currently available docs and compiler source code suggest. Dispatch doesn't have to look at dependencies, so enough instructions can usually be found to fill a 4-mop dispatch packet.

Execution is out of order, so the scheduler might issue instructions (uops) #1, #4, #5, #9 in one cycle because they are independent from each other input-operand-wise. But what if only 2 independent ops of one thread could be issued? Wouldn't it be smart to issue some per-definition independent ops from the second thread then?
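For illustration, a tiny sketch of that idea: an issue stage with four slots that fills them with whatever ops have their inputs ready, regardless of which thread they belong to. The window contents and dependency chain are invented:

```python
# Toy out-of-order issue stage with 4 slots shared by two threads. The window
# contents and dependencies are invented; the point is only that ready ops from
# either thread can fill the slots in the same cycle.

# Each entry: (thread, op_id, set of op_ids it depends on)
window = [
    (0, 1, set()), (0, 2, {1}), (0, 3, {2}),         # thread 0: a dependency chain
    (1, 10, set()), (1, 11, set()), (1, 12, set()),  # thread 1: independent ops
]
completed = set()
cycle = 0
while window:
    cycle += 1
    ready = [e for e in window if e[2] <= completed][:4]   # up to 4 ops whose inputs are done
    for e in ready:
        window.remove(e)
        completed.add(e[1])
    print(f"cycle {cycle}: issued {[(t, op) for t, op, _ in ready]}")
```

In cycle 1 only one op from thread 0 has its inputs ready, so the remaining three slots are filled from thread 1 -- the kind of cross-thread issue described above.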
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
Noooo!!!11!!1one1!

Marketing language doesn't suit explaining an OOO scheduler

Dispatch (AMD meaning: give decoded uops to the FPU) is one thread per cycle, as currently available docs and compiler source code suggest. Dispatch doesn't have to look at dependencies, so enough instructions can usually be found to fill a 4-mop dispatch packet.

Execution is out of order, so the scheduler might issue instructions (uops) #1, #4, #5, #9 in one cycle because they are independent from each other input-operand-wise. But what if only 2 independent ops of one thread could be issued? Wouldn't it be smart to issue some per-definition independent ops from the second thread then?

Right, but I was pointing to the case where both threads need the full FP resources.

If there is only one thread, the scheduler can pick and schedule up to four uops for that thread to be executed by the FPUs.

If there are two threads, the scheduler will pick whatever is at hand in the fetch window and send it as it comes to the execution units in an OoO scheme; thus, statistically, each thread will be allocated half of the execution bandwidth.

The case you're talking about is when the two threads do not require the same execution bandwidth, so the heavier thread can benefit from the cycles not used by the lighter thread.
 

Abwx

Lifer
Apr 2, 2011
11,172
3,869
136
Found this at hardware.fr, yet another slide leaked by donanimhaber...

Some interesting hints on it, though...

 