Cell: Future of Gaming?


clarkey01

Diamond Member
Feb 4, 2004
3,419
1
0
Originally posted by: Vee
Originally posted by: mwmorph
and by 2k6, AMD and Intel will have vastly faster processors. Every 12-24 months CPUs double in speed, remember?

Err, no they won't. Intel have basically been standing still (for sustained-load performance) on the desktop for 2 years now, and they will continue to stand still for 2 more years. Whether there's any light at the end of the tunnel by 2007, I don't know. I do fear that the x86-64, multicore, slow-clocked "Conroe" has been cancelled by the Itanium, Netburst and marketing crowd at Intel. The reason is Intel's strong push with BTX, Prescott-derived CPUs ("Shitfield" and "Pissler") and desktop "Itanic". It may be that they are attempting to use brute market-share force to push x86-64 and AMD out with IA64 - featuring an on-chip x86-32 core for legacy apps - around 2008, on the desktop. Or they're expecting AMD to go belly up before then.

Intel's game would be that the main 64-bit software push, when it comes, would go IA64. They already have Windows64 for IA64, and are already working on software developers.

(The best way for us to contribute to avoiding this terrible scenario is to keep buying AMD, guys.)
(And for those who don't understand why this is terrible: IA64 has conclusively proved, after years and billions, that it has no advantages at all over x86-64. And with AMD gone, the future of processors and computing looks very bleak indeed.)


I think after Smithfield, Intel will hit back with 65nm Preslers, which are already running; you'll have to take my word on this. As for x86, it does have its problems, but a lot of PPC and IA64 supporters honestly couldn't tell you what those problems are, only that they exist. The PPC supporters are especially guilty of this... that's not to say the latest PowerPC processors are nothing special (they're really quite nice), just that Apple fanboyism often leaks over into architecture.

If you ask me, we're past most of what made x86, well, suck. Modern x86 CPUs are really just RISC in sheep's clothing (hey, who said I couldn't randomly mix analogies?) for the most part, and the only reason x86 is said to drag us down is that CPUs basically have to translate x86 instructions into an easier-to-digest form (basically/semi-incorrectly: CISC to RISC, stupid to smart). Do we lose some speed doing this? Sure! Do we lose enough that we need to go through making an entirely new architecture for PCs? Well.. maybe not.
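To give a feel for what that translation amounts to, here's a toy sketch (entirely my own; real decoders are far more involved and every name below is invented) of one memory-destination add being cracked into RISC-like load/add/store micro-ops:

```cpp
// Toy sketch only: models the *idea* of cracking one CISC-style
// "add [addr], reg" instruction into simpler RISC-like micro-ops.
// Nothing here reflects a real decoder; all names are invented.
#include <iostream>
#include <vector>

struct MicroOp {
    const char* op;   // "load", "add", or "store"
    const char* dst;
    const char* src;
};

// Crack a memory-destination add into load / add / store micro-ops.
std::vector<MicroOp> crack_add_mem_reg(const char* addr, const char* reg) {
    return {
        {"load",  "tmp", addr},   // tmp    <- [addr]
        {"add",   "tmp", reg},    // tmp    <- tmp + reg
        {"store", addr,  "tmp"},  // [addr] <- tmp
    };
}

int main() {
    for (const MicroOp& u : crack_add_mem_reg("[0x1000]", "eax"))
        std::cout << u.op << ' ' << u.dst << ", " << u.src << '\n';
}
```

The point being: the x86 front end pays for that cracking step, but once past it the back end looks much like any RISC core's.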

The trouble isn't so much designing the new architecture, really. The engineers doing this probably find it fun. The trouble is that you have to tell the market "hey, we're going to break compatibility with everything out right now, but look at it this way-- if you buy all this expensive new hardware then run the expensive software coded for this new architecture, you'll get a moderate speed boost over the old hardware!" Who out there is going to say "ooh! Me first!"?

There are more.. delicate ways to handle this situation, of course. Ace's Hardware went over it in that Kill x86 article of theirs. It may not be the easiest thing in the world, but it would be relatively painless for the market. The point is that these light-handed (on the "market treatment" side, nevermind the poor engineers who're told that they have to make a CPU that's effectively two architectures in one) ways of introducing a new instruction set / arch were not the ways that Intel chose.

But I digress, a lot. Let's assume that Intel somehow make the Itanium 2 emulate x86 code at a reasonable pace. Now you just have the issue of getting it to the market, right? Surely that's all? No, sadly, it isn't. The 1GHz LV Deerfield puts out 62 watts of heat over a 180mm^2 die. Does that sound familiar? A die the size of a farm animal, power consumption in the low sixties? Why, that's what the 130nm Opteron 246s look like. Except they're a lot faster than 1GHz Deerfields, even with this horrible "maintaining backwards compatibility" deal, and even running in 32-bit mode only.

But that's not really fair, is it? The 1GHz Deerfield is awfully slow. It's an LV part, after all. But that brings me to my other point: the LOW-VOLTAGE part puts out 62W of heat. Even someone a few crayons short of a box (someone such as myself, I guess) can see that you might just have a few heat output issues with the non-LV parts. Huge die or no, that's a lot of heat to dump into a PC. And what do you get out of it? Something maybe as fast as an Opteron that, if market adoption suddenly grew by an ENORMOUS amount, might not cost TOO much more.

The Itanium may have been promising at its debut, but the fact is that it's ill-suited to anything except massively parallel supercomputers... and even in those, there aren't really many reasons to use them over other, better processors (did someone say POWER5?).

(lifted from a post I did in highly tech)
 

DrMrLordX

Lifer
Apr 27, 2000
22,000
11,560
136
If the Cell works out the way it is described in the article, it's gonna require a hell of a lot of silicon, even on the 65 nm process. One PU and 8 APUs per Cell? 8 "bank controls", each with a separate memory bus to control an 8 meg bank of memory?

4 Cells in a single PS3?

Holy crap. Even if it *does* function as expected, it's going to be absurdly expensive. That's a lot of processor there.
 

mwmorph

Diamond Member
Dec 27, 2004
8,877
1
81
Originally posted by: TekDemon
Now, while Sony does have a "creative" history with CPU specs, the cell seems at least...somewhat legit.
And apparently there are screenshots supposedly showing what Cell *will* look like - which is sketchy, since some of them were just done in Rhino - but supposedly there are very impressive real-time demos running on Sony's GSCube (multiple-PS2-processor-based) development systems, although not quite as impressive as what Cell is supposedly going to be.

Or...something like that.

Of course I guess we'll see when it comes out =)

P.S. Apparently there are two generations of the GSCube, so it's not running on the older-gen one, which was just like 16 PS2 chips... the new one is apparently about 125x as powerful as a PS2. Knowing Sony's "1000x" claim for the PS3 and their penchant for exaggeration, I'm guessing the PS3 will only be 2-5x as powerful as the 2nd-gen GSCube and not 8x... but we'll see =)

1. The screenshots are faked. No way they have tech demos for something a year from release.
2. Not as impressive as the screenshots? Isn't the GSCube SUPPOSED to be more powerful? Shouldn't they look BETTER?

It's obvious Sony is lying its ass off.

Originally posted by: DrMrLordX
If the Cell works out the way it is described in the article, it's gonna require a hell of a lot of silicon, even on the 65 nm process. One PU and 8 APUs per Cell? 8 "bank controls", each with a separate memory bus to control an 8 meg bank of memory?

4 Cells in a single PS3?


Holy crap. Even if it *does* function as expected, it's going to be absurdly expensive. That's a lot of processor there.

Exactly. Even if it does come out, it's going to flop like Itanium. They are forcing us to buy entirely new hardware they will probably overcharge for, forcing us to buy software that works only on their architecture, as well as raising dev costs for new software. It does not make economic sense.
 

xbdestroya

Member
Jan 12, 2005
122
0
0
I'm not sure I understand why people feel the Cell will be all kinds of expensive. It won't be. Let's go over some facts. The fabs for these things are basically already built. The area required by the APUs attached to each chip is tiny. The process will start at 90nm and go to 65nm. The economies of scale will be enormous - well beyond what most other chip firms (including Intel) are used to. We're talking three or more dedicated lines at 90nm and below churning out chips that will go in everything from gaming consoles to TVs to cell phones, etc...

It's going to be scale, scale, scale. There are going to be more fabs producing Cell chips than AMD presently has capacity, period. The PS3 alone is expected to account for 100 million units over a five-year period; and if it really is four Cells per system, which I doubt, that's 400 million Cell chips right there. To say nothing of the other things it will be going into.

You think that Microsoft will be able to get three multi-core PowerPC chips from IBM for cheaper than Sony can fab these chips for themselves? I don't think so. Plus Sony has such a broad license for NVidia's GPU tech, which they will also be fabbing for the PS3 in their own fabs, that I think they'll be paying less for their graphics engine than Microsoft is for theirs from ATI as well.

I think there's frequently a misconception that just because something is new and awesome, it's expensive. That's not the case. It hardly costs Intel any more to make a Prescott than a Celeron, but they price them differently because that's the market and that's the industry they are in.

Sony is not in this to sell these processors to others, so they will use these chips as a means to an end, not the end itself.

Sure, I think the PS3 will be expensive, just for what it is - but in terms of which console will be more costly to whom, I think that a year or two down the line Xbox 2 will end up costing Microsoft more to produce than the PS3 will cost Sony.

Microsoft WILL have some help this time around, though, by being able to outsource and proactively scale the chipset manufacturing, where before they had to deal with the contract they'd locked themselves into with NVidia.
 

xbdestroya

Member
Jan 12, 2005
122
0
0
The GSCube was basically the real-world implementation of a concept that led to the development of the Cell chip. I think the GSCube debuted in 2001? I could be wrong though - maybe 2000, maybe 2002. It was a number of PS2 chips slaved together for rendering/graphics work. The GSCube is supposed to be a good deal weaker than Cell. If one is thinking of a Sony/IBM-developed workstation MORE powerful than the Cell chip, they are thinking of the Cell workstation for graphics design, CG rendering, and game development.
 

Ackbar

Senior member
Dec 18, 2004
391
0
0
Not to downplay the discussion at all, but you guys should take a look at this guy's other "thoughts". Such as .... http://www.blachford.info/quantum/gravity.html and http://www.blachford.info/quantum/dimeng.html

He has some writings laying out his thoughts about "gravity" that are pretty much meaningless (but what do I know, I'm only a graduate student in physics). I would take anything this guy writes with a grain of salt. It may be true, but it's mostly just his thoughts on the matter, and as he says, it's mainly drawn from things he's read in other sources. So it's not like he went out and interviewed the computer engineers who designed this.

Whether or not the Cell is going to be good, I can't say. Whether or not the source of the information is good... well, I think I already put my $0.02 into that.
 

mwmorph

Diamond Member
Dec 27, 2004
8,877
1
81
Yes! Ackbar finally gets what I'm saying. This guy is a CRACKPOT! You can't trust what he says. You can't read what he wrote and accept it as fact!
You have to draw your own conclusions from testing it when it comes out.
 

UzairH

Senior member
Dec 12, 2004
315
0
0
The article makes no mention of how many transistors each APU and the associated memory logic is going to have. Even taken conservatively, 8 APUs plus 1 PU plus all the other switches/logic seems like it would come to over 500 million transistors. Now how the hell will the power consumption be kept low with so many transistors working at 4.6GHz?
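For a rough sense of why that question bites, here's a back-of-the-envelope using the standard dynamic-power relation P = alpha * C * V^2 * f, summed over transistors. Every per-transistor constant below is my own guess; only the 4.6GHz figure comes from the article:

```cpp
// Back-of-the-envelope dynamic power: P = alpha * C * V^2 * f per node,
// scaled by transistor count. All constants are assumptions for illustration.
#include <cstdio>

int main() {
    const double transistors = 500e6;  // guessed count from this thread
    const double alpha       = 0.1;    // assumed fraction switching per cycle
    const double cap_per_t   = 1e-15;  // ~1 fF effective capacitance (assumed)
    const double vdd         = 1.0;    // supply voltage in volts (assumed)
    const double freq        = 4.6e9;  // 4.6 GHz, per the article

    const double watts = transistors * alpha * cap_per_t * vdd * vdd * freq;
    std::printf("rough dynamic power: %.0f W\n", watts);  // prints ~230 W
}
```

Even if the real constants are several times smaller than these guesses, there isn't much headroom at that clock.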

The proof of the pudding will be in the eating. Surely AMD and Intel are looking at the situation and will come out with suitable answers to Cell by 2006. Maybe AMD (more likely than Intel) will have 65nm 4 to 5 GHz dual- or quad-core processors with massive cache by 2007. Anyway, one very good point the article does make is that Cell should accelerate the CPU industry - it has been pretty stagnant for the past two years.
 

xbdestroya

Member
Jan 12, 2005
122
0
0
Originally posted by: UzairH
The article makes no mention of how many transistors each APU and the associated memory logic is going to have. Even taken conservatively, 8 APUs plus 1 PU plus all the other switches/logic seems like it would come to over 500 million transistors. Now how the hell will the power consumption be kept low with so many transistors working at 4.6GHz?

The proof of the pudding will be in the eating. Surely AMD and Intel are looking at the situation and will come out with suitable answers to Cell by 2006. Maybe AMD (more likely than Intel) will have 65nm 4 to 5 GHz dual- or quad-core processors with massive cache by 2007. Anyway, one very good point the article does make is that Cell should accelerate the CPU industry - it has been pretty stagnant for the past two years.



My figures on transistor counts don't come from that article - in fact, I haven't even read the whole thing. I've been following the Cell chip for years now, and I can tell you that though that article is a good summary, it's by no means the definitive guide. I will try to find a source quoting estimated APU transistor counts; since they are just estimates, though, I don't know that I will find anything terribly relevant, but I'll do the search. The truth is that things such as transistor counts probably won't be known until February 6th at the earliest.

I agree that the news surrounding Cell might be over-hyped; but if the hype DID turn out to be true, I would doubt Intel and AMD's ability to counter it.

x86 is a dead man walking, that's just the reality. Power-based solutions are going to start slowly taking over.

And before you flame me, realize I am a huge AMD fan who builds his own systems and would dread a day where I had to buy a computer already made.

But such is life; I can't deny the realities I am seeing.
 

DrMrLordX

Lifer
Apr 27, 2000
22,000
11,560
136
xbdestroya, if they manage to produce chips like the Cell cheaply, it'll be an accomplishment of production rather than design. Sure, they may have already built the fabs, but they're going to have to make that money back somehow. Scaling up the mass-production of the Cell has already cost them a good bit of cash. Unless they intend to eat losses when selling Cell-equipped products, they'll have to make the money back off the buyers.

Also, I don't know why you state that the APUs themselves will be small, or that each Cell will be a small chip requiring relatively little silicon. The PU is, in the author's opinion, likely to be a G5 core, and that's just one part of the chip! 65nm be damned, that's a lot of transistors, and that takes up space on the wafer.

But, that's just my guess. We'll have to see what pricing and availability are like in the future.

I can say one thing for sure: If Sony keeps delaying the release of Cell-based products, they will keep losing money on their investment. Cell's already been pushed back before.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
There are still signs that point to hype in the article.

Each core has its own cache in a revolutionary new design! (so does every dual core product on the way)

Each core has OFF-DIE SRAM for cache, a la Pentium 3 Katmai and earlier. Off-die cache is slower and has much higher latency.

The core is extremely optimised for parallel tasks; this means not only do you have to have SMP-aware software... you have to code for a lot more than 2 threads and load balance it properly for the software to not run like total garbage.
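To put that in concrete terms, here's a minimal sketch (my own, nothing Cell-specific) of the kind of many-thread work splitting and static load balancing you'd have to do:

```cpp
// Minimal sketch of splitting one job evenly across many hardware threads.
// Nothing Cell-specific; it just shows the "a lot more than 2 threads,
// load-balanced" requirement in its very simplest form.
#include <cstddef>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = std::size_t(1) << 24;
    std::vector<float> data(n, 1.0f);

    unsigned workers = std::thread::hardware_concurrency();
    if (workers == 0) workers = 8;            // fall back to a guess
    std::vector<double> partial(workers, 0.0);
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        // Equal-sized slices: the most naive form of static load balancing.
        const std::size_t begin = n * w / workers;
        const std::size_t end   = n * (w + 1) / workers;
        pool.emplace_back([&partial, &data, w, begin, end] {
            partial[w] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (std::thread& t : pool) t.join();

    const double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    (void)total;   // equals n if every slice ran; the point is the structure
}
```

And that's the easy case: equal-cost chunks of an embarrassingly parallel sum. Game code with uneven, interdependent tasks is far harder to balance across 8+ threads.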

And as DrMrLordX said, these chips will not be small, or cheap, or high yield.

And as I said before, if this were such an insanely powerful CPU, they wouldn't need to sign on NVIDIA for an NV50-derivative GPU.
 

clarkey01

Diamond Member
Feb 4, 2004
3,419
1
0
Originally posted by: Acanthus
There are still signs that point to hype in the article.

Each core has its own cache in a revolutionary new design! (so does every dual core product on the way)

Each core has OFF-DIE SRAM for cache, a la Pentium 3 Katmai and earlier. Off-die cache is slower and has much higher latency.

The core is extremely optimised for parallel tasks; this means not only do you have to have SMP-aware software... you have to code for a lot more than 2 threads and load balance it properly for the software to not run like total garbage.

And as DrMrLordX said, these chips will not be small, or cheap, or high yield.

And as I said before, if this were such an insanely powerful CPU, they wouldn't need to sign on NVIDIA for an NV50-derivative GPU.


Even still, no matter how powerful a CPU is (if it turns out to be), it still requires a GPU. The Emotion Engine in the PS2 is both. I think the Cell processor will have a PowerPC base and likely have eight vector processors, each with its own allocation of memory. I can't really think of a chip in any setup with integrated graphics/no standalone card that runs today's games at high frame rates with all the eye candy turned on. You pretty much always need a GPU to boot with.

For some reason I have faith in Sony this time: the sheer amount of money (an estimated half a billion dollars spent developing the Cell technology), not to mention the fabs they've built. If they didn't believe this thing was hugely important and powerful, they wouldn't spend all that money and build those fabs just for the hype. Not to mention Big Blue's on the scene too.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: clarkey01
Originally posted by: Acanthus
... Not to mention Big Blue's on the scene too.

Actually, I think the Cell processor is really IBM's. Sony is just licensing it. (And the EE was MIPS, CMIIW).

But it's still hype. How much hype, is another question.
 

imported_kouch

Senior member
Sep 24, 2004
220
0
0
After the news of the 100th processor that is going to take over the world, excuse me if I am not excited. There is no way Cell could be that powerful. If it were, the PS3 wouldn't need 4 of them, since graphics would be by far GPU-limited. IMO, Cell is just going to be a niche product used in very specific applications, just like the Itanium, etc. And btw, for the people flaming x86: neither Intel's nor AMD's chips are technically the CISC x86 machines people say they are. They are basically RISC machines that break down x86 instructions in microcode and support (are not based on) x86 for legacy reasons.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: kouch
After the news of the 100th processor that is going to take over the world, excuse me if I am not excited.

- Definitely agree!

There is no way Cell could be that powerful. If it were, the PS3 wouldn't need 4 of them, since graphics would be by far GPU-limited.

Since the Cell's cores are just very lean Power cores, there's some possibility of estimating performance, and you're right, the article has to be way over the top.

And btw, for the people flaming x86: neither Intel's nor AMD's chips are technically the CISC x86 machines people say they are. They are basically RISC machines that break down x86 instructions in microcode and support (are not based on) x86 for legacy reasons.

It's just that people don't understand what CISC and RISC really are. x86 is CISC, but that's not bad; that's just RISC propaganda from the late '80s and early '90s that has stuck in people's heads.

Breaking down instructions doesn't make anything RISC, and CPUs have always done this in some way.
Microcode, code fission, micro-ops, whatever, is NOT RISC.

What is true in this context is that there is no longer any great difference between the technologies used by later CISC cores and what is (so far understood as) used by RISC. Since the advent of the MC68040 and Intel's lesser but contemporary '486, CISCs have started to use the same hardware technologies as RISC.

But RISC is NOT these technologies, nor vice versa. These technologies mostly come from the various generations of supercomputers.

RISC was/is an approach to the design of the ISA (the instruction set architecture). A 'flavor' of ISA, aiming at certain perceived opportunities.
One of these was that it would be possible to build a more advanced, more complex CPU - featuring some of the previously hinted-at high-performance technologies - if one reduced the number of supported instructions and selected them carefully, for instance by reducing complex addressing. Hence "RISC" = reduced instruction set computing. Another opportunity was compiler optimization with a large number of visible registers.

(There are today reasons to believe that too-intimate CPU-compiler reliance is a dead end for the evolution of core performance.)

Edit: I think maybe a good way to put this is that in RISC, you start with the hardware architecture you think you want, then design the ISA from that. CISC is the other way round.
Obviously, the benefits of RISC are vulnerable to changes in the environment, like the evolution of technology.


The benefit of RISC was thus that more registers and a more advanced, complex hardware architecture could be afforded on a LIMITED AMOUNT OF TRANSISTORS.

But the world moves on. And just as a hedgehog's survival strategies are no longer viable when the sun rises, RISC doesn't compare so well anymore when very large numbers of transistors are available. And the cost for being CISC is just a small part of all those transistors.

This is maybe a good place to note that the central push of RISC was to simplify decoding and increase instruction execution speed. This is not a problem today. What holds back today's CPUs are branch handling, false dependencies (many visible registers maybe weren't such a grand idea after all) and moving all the stuff into and out of the CPU. (Again, RISC's larger code and data and less flexible addressing are no help here.)
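A quick way to see the branch-handling point for yourself: the same loop over the same data, timed with the branch unpredictable versus predictable. (My own toy benchmark, nothing to do with Cell; build without aggressive optimization, since a clever compiler may turn the branch into a conditional move and hide the gap.)

```cpp
// Toy benchmark for the branch-handling point: summing values over a
// threshold is much faster when the data-dependent branch is predictable
// (sorted input) than when it's essentially random (unsorted input).
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <random>
#include <vector>

static long long sum_over_threshold(const std::vector<int>& v) {
    long long s = 0;
    for (int x : v)
        if (x >= 128)   // the data-dependent branch under test
            s += x;
    return s;
}

static double time_ms(const std::vector<int>& v) {
    auto t0 = std::chrono::steady_clock::now();
    volatile long long sink = sum_over_threshold(v);
    (void)sink;
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::vector<int> v(1 << 24);
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 255);
    for (int& x : v) x = dist(rng);

    const double unsorted_ms = time_ms(v);  // branch mispredicts ~50% of the time
    std::sort(v.begin(), v.end());
    const double sorted_ms = time_ms(v);    // branch is now almost always predicted

    std::printf("unsorted: %.1f ms   sorted: %.1f ms\n", unsorted_ms, sorted_ms);
}
```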

The re-emergence of RISC in IBM's Cell processor and Sun's Niagara is primarily motivated by the need to AGAIN go to a very limited amount of transistors. Another advantage is power consumption and heat.

By the time AMD and Intel dual/multi-core x86 CPUs have matured, and software has matured to truly take advantage of a large number of cores, it might very well be that Intel and AMD do not need to go 'lean' enough to give RISC an advantage. Multicore is the future, but I think there is a very good chance that future is x86-64 CISC multicore.
 

xbdestroya

Member
Jan 12, 2005
122
0
0
DrMrLordX: You state that Sony has pushed back the release of Cell-powered products; can you give an example? As far as their five-year development timeline goes, Sony is right on track with Cell's release (so far). If nothing else, THAT's the surprise.

Vee: Cell is Sony's brainchild, go look at the patent; it's issued to Sony. Then there are other patents issued to all three team members, Sony, IBM, and Toshiba. Sony went to IBM early on because they felt they needed IBM's expertise to make it happen. And indeed, a Power core was chosen as the baseline to work from. But Cell does not equal Power. What it does equal, I don't know. We'll know in February.

I do not believe the die area of the PU will be as large as that of a full Power5. I am on several forums that debate the nature of the Cell chip, and I have lifted this post from one of them. I'm not sure if some of you will consider it too technical, but read it and consider the implications for die size. This guy does a much better job of explaining it than I can:

Well, I get that idea to a certain extent from the figures and the patent info as well. If you notice, with the BBE figures, it's all based on a 4 GHz APU that can achieve at best 8 FLOPs per cycle on 4 operands giving 32 GFLOPs and having a total of 4 PUs with 8 APUs each to give you 1 TFLOP... In essence, the PUs aren't even being included in that figure, which gives me the impression that the sole purpose those PUs have is load balancing and code+data distribution... memory access and such. Everything computational would be done on APUs. Yes, it does sound like a bit of a waste, but I imagine that they'd have their hands full doing that much for 8 APUs. If not for that, I think engineers would have no qualms about putting more APUs to a PE.

I look at it this way. Say you have a transistor that can be switched at speeds up to 20 GHz. Not at all unlikely. There are already THz transistors in the labs of almost every manufacturer. Now if the most complex pipeline stage in your CPU has a critical path length that is, at most, 10 transistors deep, then your CPU will not be able to clock higher than 20 / 10 = 2 GHz. And it will probably be a little less due to the delays of routing signals through interconnects and what not.

Anyway, I figure the whole idea of CELL is that the APUs would be very simple in nature to the effect that the APU by itself is not a great performer, but it's cheap to make and very low transistor budget, and a whole collection of APUs makes for some high overall throughput. That suggests to me that these APUs would not be loaded with all nature of extra performance boosters like out-of-order-execution, branch prediction, SMT, multiple issue, etc. They're essentially very basic straight-forward single-issue pipelines. That in turn, makes the pipelines and each pipeline stage very simple. So with that kind of simplicity, high clocks are very easy to achieve. BTW, I don't know how much stock I put in the idea that the APUs would be VLIW in the effect of being able to issue SIMD floating point and integer instructions in parallel. I actually find it easier to believe that you can only issue instructions on separate clock cycles regardless of type.

Now the PUs on the other hand are probably a lot closer to ordinary PPC devices, albeit probably lacking in the SIMD portion since you have all these extra SIMD pipelines attached. My thinking is that since the APUs are essentially all independent anyway, they would also be independent of the PU, which is far more complex in nature. So since the PU is essentially almost a PPC in itself, I'd think that if the APUs run at 4.8 GHz or something, the PUs would run at 2.4. That's certainly a feasible speed for a PPC, and at 65nm, the effective power consumption would be low enough that you may not have to worry.
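Just to sanity-check the arithmetic in that quote, using only the figures it gives (a quick scratch calculation of my own):

```cpp
// Sanity check of the figures quoted above: per-APU throughput, the
// aggregate TFLOP claim, and the rough clock ceiling from switching
// speed versus critical-path depth. Numbers are the quote's, not mine.
#include <cstdio>

int main() {
    // 4 GHz APU doing at best 8 FLOPs per cycle -> 32 GFLOPs per APU.
    const double apu_gflops = 4.0 * 8.0;

    // 4 PEs x 8 APUs each = 32 APUs -> ~1 TFLOP aggregate.
    const double total_tflops = 4 * 8 * apu_gflops / 1000.0;

    // 20 GHz transistor switching, critical path ~10 transistors deep
    // -> at most 20 / 10 = 2 GHz, less routing/interconnect delays.
    const double clock_cap_ghz = 20.0 / 10.0;

    std::printf("per-APU: %.0f GFLOPs, total: %.2f TFLOPs, clock cap: %.1f GHz\n",
                apu_gflops, total_tflops, clock_cap_ghz);
}
```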
 

AbsolutDealage

Platinum Member
Dec 20, 2002
2,675
0
0
To reduce the power requirements the entire craft would have to be made from light but strong materials such as carbon fibre, titanium or composite materials. Some electrical components could use Super conductors, these are special materials which have no electrical resistance. It may also be possible to boost the output of the gravitational engine by using a lump of superconductor to produce the gravitational waves since it is thought that superconductors may have antigravity properties. This is thought to be explained be a force predicted by General Relativity called the gravito-magnetic interaction.

BAAAAAAhahahahahahh.... hahahahah... heh......................... BAAAAAAAAAAAAAAAAAAAAAAhahahahahah.

Oh Jesus.... this guy can't really be serious. Let's take an enterprise-shaped carbon fiber model, slap "a lump" of superconductor in there, throw some quantum transistors and a half-wave rectifier in there, and... blammo! Gravity is my B!tch.
 

DrMrLordX

Lifer
Apr 27, 2000
22,000
11,560
136
Originally posted by: xbdestroya
DrMrLordX: You state that Sony has pushed back the release of Cell-powered products; can you give an example? As far as their five-year development timeline goes, Sony is right on track with Cell's release (so far). If nothing else, THAT's the surprise.


Okay, this is the article of which I was thinking:

http://www.theinquirer.net/?article=9053

It's the Inq, but hey, take it for what it's worth.
 

clarkey01

Diamond Member
Feb 4, 2004
3,419
1
0
Originally posted by: DrMrLordX
Originally posted by: xbdestroya
DrMrLordX: You state that Sony has pushed back the release of Cell-powered products; can you give an example? As far as their five-year development timeline goes, Sony is right on track with Cell's release (so far). If nothing else, THAT's the surprise.


Okay, this is the article of which I was thinking:

http://www.theinquirer.net/?article=9053

It's the Inq, but hey, take it for what it's worth.

PlayStation 4 will have Cell and PS3 won't?!!

Wtf?
 

R3MF

Senior member
Oct 19, 2004
656
0
0
Originally posted by: UzairH
The article makes no mention of how many transistors each APU and the associated memory logic is going to have. Even taken conservatively, 8 APUs plus 1 PU plus all the other switches/logic seems like it would come to over 500 million transistors. Now how the hell will the power consumption be kept low with so many transistors working at 4.6GHz?
Nice assumption, but does it have any validity at all?

It has no cache to speak of, certainly not the 2MB that x86 processors come equipped with, which takes up about 75% of the die space.

How many transistors does an ARM CPU have? 25 million, 30 million? That is probably a reasonable top-end guess for what an APU might take up...

Here are my wild assumptions:
1x PPC core = 50 million
8x APU = 200 million
logic + I/O = 50 million

There you go, 300 million transistors all in, not much bigger than an NV40 GPU.
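Spelled out, in case anyone wants to poke at the numbers (again, these are only my guesses, nothing published):

```cpp
// R3MF's guessed transistor budget, just added up for reference.
// Every figure is an assumption from the post above, not a published number.
#include <cstdio>

int main() {
    const double ppc_core = 50e6;       // 1x PPC core (guess)
    const double apus     = 8 * 25e6;   // 8 APUs at ~25 million each (guess)
    const double logic_io = 50e6;       // interconnect + I/O (guess)

    std::printf("total: %.0f million transistors\n",
                (ppc_core + apus + logic_io) / 1e6);   // ~300 million
}
```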

Here's hoping Ubuntu makes Cell a target architecture.
 

R3MF

Senior member
Oct 19, 2004
656
0
0
I personally think that 300 million is an overestimate, with 320 million being absolutely the top whack and 250 million or less being more likely.
 