Speculation: Ryzen 3000 series


Ventanni

Golden Member
Jul 25, 2011
1,432
142
106
New Ryzen 3000 kinda looks like the Sandy Bridge of AMD: Single thread performance boost, major clock gain, and power reduction all at the same time. Gonna be great.
 

mattiasnyc

Senior member
Mar 30, 2017
356
337
136
There were plenty of people who felt trapped by Intel's HEDT lineup at the time and saved a bunch of money by going AM4 in 2017.



They EoL quickly and go bargain basement. Just like what happened with AMD's AM3+ and FM2+ products in 2017. AMD can't afford to be picky about what happens to yesterday's products when they have a lean and hungry monster chasing them from the back. They need to be positioned against their next competitor. Not just Intel. Hector Ruiz is the kind of guy who would say, "ehh let's hold back on consumer Zen2 so we can clear out the channel a bit and protect Threadripper". Look at what he did with the x2. Lisa Su cannot afford to make the same mistake, albeit for different reasons. The challenges that face AMD in 2021/2022 may be more difficult to overcome than what they saw in 2006. Now is the time to prepare, not only technologically but also in terms of market share and mindshare.

I don't disagree. I'll add however that with a 70+% share price jump AMD needs to be at least somewhat cautious about how it prices its products. My guess is that for everyone with a financial stake in AMD's business, the best option is a middle ground between significantly undercutting Intel on pricing because it can, and overpricing on the strength of a performance lead.
 

mattiasnyc

Senior member
Mar 30, 2017
356
337
136
All current CPU prices will fall, much like Ryzen 1/TR 1 CPU prices fell once Ryzen 2/TR2 CPUs launched. This is progression: old CPU prices fall and new CPUs take their place.

The TR2 16C 32T 2950X is currently selling for $900; by the time a 16C 32T Ryzen 3000 part launches it will have been on the market for at least 6 months, and its price will start to fall toward the $600 level. Also remember that the TR platform offers more features than AM4 (quad-channel memory, more cores, more PCI-e lanes etc), so even if you have a 16C 32T AM4 CPU at $499, a TR2 16C 32T CPU can easily be priced at $599. Also, users of the TR platform will be able to upgrade to 32C 64T or even 64C 128T TR3 parts some time in the future, and that is another selling point for the TR platform.

I don't disagree, and I don't think AMD has any problem at all justifying a 'decent' price for the TR4 platform.

I will say this though: If we look at HEDT my guess is that most people who buy into that aren't gamers or people doing general computing, but people doing either content creation or some other type of work that really requires throughput or core count. In content creation specifically there's actually only a part of the market that requires all that I/O, and for those that don't but still want more cores - in addition to those needing 'only' more cores - a 16-core Ryzen 3 would indeed be a preferred option if it has a price that's low enough. So I think that for the immediate future a 16-core Ryzen 3 at, say, $550 would indeed make a fair share of the people considering an HEDT platform pick AM4 instead. Not all, but a part of that market. So it would for 'now' take some market share away from TR4.

That's of course not necessarily a bad thing for AMD, and we also have to consider all those with gen 1 & 2 TR CPUs upgrading, so sales might be similar anyway.
 
Reactions: Kenmitch

mattiasnyc

Senior member
Mar 30, 2017
356
337
136
I think you're missing something big here in your analysis. AMD will beat the 9900K with its own 8c 3600X, and this will be the CPU that is priced at half the 9900K price. The 3600X will not be the best CPU that AMD will be offering on AM4, so it cannot then command the premium for being the best.
IMO, AMD will price the 12c 3700X below the 9900K, and the 16c 3800X at 9900K pricing.
We're not saying that the best CPU will be at half the price of a 9900K, only that AMD's most comparable CPU will be. That's what matters. That's what kills Intel completely outside of OEMs.

No, I know exactly what the proposition was. I'm just questioning whether or not that 3600X (or whatever CPU it will be in the stack) will indeed be that good for that little money once it's released. That's what I'm saying.
 

PotatoWithEarsOnSide

Senior member
Feb 23, 2017
664
701
106
I don't see how else they could price it if a) it is better than 9900K, and b) there are 12c and 16c variants.
As others have said, they can't make the 16c more expensive than a TR, so that immediately places it at current 9900K pricing.
How good does the 8c 3600X need to be to be on par with the 9900K?
By my reckoning, it'll be able to run lower clocks if the 9-13% IPC improvements are in the right ballpark, so any clockspeed advantage within the 95W TDP rating is a wash.
Sure, the 9900K has plenty of scope above 95w, but the rumoured clocks of the 3600X will be enough to bring parity. That then is where pricing kicks in, effectively rendering Intel's top 4 CPUs massively overpriced.
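As a rough worked illustration of that "wash" argument (my numbers, not the post's: taking the 9900K's 5.0 GHz peak boost as the reference point), the clock Zen 2 would need for single-thread parity is roughly

$$ f_{\text{Zen 2}} \approx \frac{5.0\ \text{GHz}}{1.09 \ldots 1.13} \approx 4.4\ \text{to}\ 4.6\ \text{GHz} $$

so with a 9-13% IPC uplift, boost clocks in the mid-4GHz range would already land in parity territory, which is why a clock deficit inside the 95W envelope could indeed be a wash.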
 

rbk123

Senior member
Aug 22, 2006
745
348
136
New Ryzen 3000 kinda looks like the Sandy Bridge of AMD: Single thread performance boost, major clock gain, and power reduction all at the same time. Gonna be great.
We don't know how much the increases will be in these areas. Nor do we know the price, regardless of what some are insisting on.

The expectations are through the roof in this thread. If the numbers don't end up being what AdoredTV has listed, the beatdown by the pro-Intel camp will be brutal.
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
We don't know how much the increases will be in these areas. Nor do we know the price, regardless of what some are insisting on.

The expectations are through the roof in this thread. If the numbers don't end up being what AdoredTV has listed, the beatdown by the pro-Intel camp will be brutal.

Exactly. The "leaks" are rumors and nothing has been confirmed. I hope that the leaks are true! I'd love to not have to make a huge compromise by going with AMD and there's a sizable amount of ground to catch up with in gaming.
 

mattiasnyc

Senior member
Mar 30, 2017
356
337
136
I don't see how else they could price it if a) it is better than 9900K, and b) there are 12c and 16c variants.
As others have said, they can't make the 16c more expensive than a TR, so that immediately places it at current 9900K pricing.

Well we're assuming there will be 12 / 16 core versions on AM4 of course, and we're sort of assuming the performance of those chips will be significantly better than the 8 core part (which is reasonable). So if we go with a completely linear progression of performance, which is dubious but not super-far from what we've seen so far, and the 8 core CPU that's slightly better than the 9900K is half the price, then for twice the performance we'd be looking at the same price. In other words it'd be about twice the performance for the price of a 9900K.

Again, to me that just looks like leaving money on the table, and yes I understand some are saying to grab more marketshare and "mindshare".

It'd look more reasonable to me to simply adjust products upwards somewhat to get a higher profit margin while still offering better value compared to Intel... for example (just out of my chakra, not an exact prediction):

$350 3600x (somewhat better than 9900K)… about $200 cheaper
$520 12-cores AM4
$640 16-cores AM4

Now as for TR4, AMD could maybe 'protect' that lineup by phasing out the current 8-core / 12-core parts and picking it up at 16-core parts;

$650 16-cores TR4
$900 24-cores TR4
$1,400 32-cores TR4
and so on...

So really no need to price the 16-core AM4 at $550, and no conflict with the new TR4. The old TR4 parts would be sold out at a discount of course, but we should also remember that they were introduced later in the year.

And on that note, I'd say that AMD could introduce AM4 CPUs in a staggered rollout this year. They could absolutely provide a slightly higher priced product to compete with / outperform the 9900K and introduce a killer product later in the year while also adjusting prices downwards. "Early adoption fee" as it's called.

Again - I'm NOT saying the above is what will happen, I'm just offering a scenario where you're not matching Intel's non-HEDT flagship at half the price while still offering a better deal.
 

scannall

Golden Member
Jan 1, 2012
1,948
1,640
136
Exactly. The "leaks" are rumors and nothing has been confirmed. I hope that the leaks are true! I'd love to not have to make a huge compromise by going with AMD and there's a sizable amount of ground to catch up with in gaming.
I do think Ryzen 2 will be an excellent product. But you're right. The speculation is running rampant. Temper your expectations folks. Leave some room to be pleasantly surprised. Allow yourself to be happy. ;-)
 

DrMrLordX

Lifer
Apr 27, 2000
21,805
11,161
136
I don't disagree. I'll add however that with a 70+% share price jump AMD needs to be at least somewhat cautious about how it prices its products.

Intel also has to be cautious about how they price their products. They have a huge margin built into their top-tier desktop CPUs. Intel's average margin across their entire lineup is something like 60%, and on stuff like the 9900K the margin at street prices of $534 or what have you is well above that percentage. It would be silly for AMD to price their products in relation to Intel's when Intel's pricing becomes out of touch with reality (which it will be after April or so, if it isn't already today).

There is really nothing wrong with a hypothetical R7-3600X chip costing $270-$310 (or whatever the 2700X's launch price was). AMD can make money selling their 8c chips at that level and expand market share, too. Then they can charge more for chips with 12c and 16c and all will be well for them. Intel will just have to change their pricing structure.

That chip is a dual core/quad core x86 at best.

Don't be blind. Cavium could license the core from Apple and build a 48c or 64c version out of it and smash records for performance. They could even get core size/cost down by stripping out the fixed-function hardware and GPU (unless they wanted it in there for compute purposes, which would be weird/interesting). They could also stick to the Vortex cores. And that's 2018's chip. If Cavium licensed their designs today, we wouldn't see product for another 2-3 years. We would be facing the next A-series iteration. Like I said, Apple isn't the one that's going to give anyone trouble. Someone is going to either chase Qualcomm into the notebook/desktop space or go after the server room. Apple has the best of the ARM world right now, and Apple would be insane not to license it to someone operating outside of their core market.

HSA is a must.

All I'm saying is, HSA as we know it is dead. If they work towards an SVM model, it will be in a different format. And it will probably be something less proprietary/better supported across the industry.

OpenCL is a joke and their driver/dev stack has a long way to go to even reach parity with Nvidia's CUDA.

CUDA isn't even a factor here. CUDA does have support for UVM (Unified Virtual Memory), but nVidia doesn't have any product on the market that would really benefit from existing SVM models (APUs) outside of their APUs that are . . . well let's just say that they're operating in different ecosystems than AMD and Intel and leave it at that.

As far as OpenCL being a joke goes . . . I don't even know why you would say that? The OpenCL2.0 ecosystem is functional. AMD's driver stack is just restricted, that's all. Maybe you just like CUDA better (I think a lot of developers do, actually). But anything you would want to do with SVM on Raven Ridge is doable under Linux right now, assuming your end-user has amdkfd loaded. Not sure if that's standard to distros yet or if you have to use a hacked kernel.
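For anyone curious what that looks like in practice, here's a minimal host-side sketch of the coarse-grained SVM path (hedged: a made-up "scale" kernel, no error checking or cleanup, and it assumes an OpenCL 2.0-capable GPU driver such as the amdkfd/ROCm stack mentioned above):

Code:
// Minimal coarse-grained SVM sketch (OpenCL 2.0). Build (assumption): g++ svm_sketch.cpp -lOpenCL
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <cstdio>

static const char* kSrc =
    "__kernel void scale(__global float* data, float f) {"
    "  size_t i = get_global_id(0);"
    "  data[i] *= f;"
    "}";

int main() {
    cl_platform_id plat; cl_device_id dev; cl_int err;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, nullptr, &err);

    // One allocation, visible to both CPU and GPU: no clCreateBuffer, no copies.
    const size_t n = 1024;
    float* data = (float*)clSVMAlloc(ctx, CL_MEM_READ_WRITE, n * sizeof(float), 0);

    // Coarse-grained SVM: map before the host touches it, unmap before the device does.
    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_WRITE, data, n * sizeof(float), 0, nullptr, nullptr);
    for (size_t i = 0; i < n; ++i) data[i] = (float)i;
    clEnqueueSVMUnmap(q, data, 0, nullptr, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &dev, "-cl-std=CL2.0", nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", &err);
    float factor = 2.0f;
    clSetKernelArgSVMPointer(k, 0, data);           // hand the SVM pointer straight to the kernel
    clSetKernelArg(k, 1, sizeof(float), &factor);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(q);

    clEnqueueSVMMap(q, CL_TRUE, CL_MAP_READ, data, n * sizeof(float), 0, nullptr, nullptr);
    printf("data[3] = %f\n", data[3]);              // expect 6.0
    clEnqueueSVMUnmap(q, data, 0, nullptr, nullptr);
    clSVMFree(ctx, data);
    return 0;
}

The point is the single clSVMAlloc allocation: the same pointer is touched by the CPU (inside map/unmap) and passed straight to the kernel via clSetKernelArgSVMPointer, with no explicit buffer copies.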

As far as Windows support goes, no one does any major development for such compute on Windows. Everything major happens on Linux. So, that's not surprising. The future is Linux-oriented. Even Microsoft gets that and is aligning accordingly.

If we consumers are ever to see SVM or UVM compute becoming "a thing" in our software, the first thing that will have to happen is that Windows will need to support it. At least theoretically, nVidia could already support UVM with their consumer dGPUs assuming someone would improve PCIe performance to the point that we would ever want to use it. But until there is a kfd under Windows, we won't see SVM or UVM support in Windows for any of AMD's products.

My commentary centers moreso around AMD opening up infinity fabric like they said they would to outside third party devices.

Now you are maybe talking about UVM. I think UVM is about treating the memory pool of all devices attached to a system as one large memory pool, and allowing reads and writes to every device in the system. This becomes tricky since you may be crossing clock domains and dealing with heavy latency. Even without heavy latency, you don't want CPU-critical data - stuff you might need to load into cache at some point - winding up in the memory pool of one of your PCIe devices. At the same time, you want to try to keep all your data in one exact memory block to reduce writes and writebacks. SVM solves this problem by forcing the GPU and CPU to share one memory pool. UVM makes things a lot more complicated. And don't expect future consumer dGPUs to ship without memory. That is not going to happen.

The U280 acceleration card includes CCIX support to leverage existing server interconnect infrastructure for high bandwidth, low latency cache coherent shared memory access with CCIX enabled processors including Arm and AMD. (Source: Xilinx Alveo U280 whitepaper WP50 (v1.0) accessed 16 November 2018)

There are proprietary high-end interconnects out there right now that essentially let CPUs do that already, which is what they use in supercomputers. Note that even using those interconnects - or stuff like nVidia's NVLink - you still can't put dGPUs into the system as "equal partners" since their memory controllers and cache structures are different. There's no cache coherence. There will have to be a major change in the way they build dGPUs in the future if they want them to have cache coherency with CPUs.

There's no reason for the CPU to be loaded down so much with cross traffic communication between PCIE devices and memory OPs to and from I/O devices.

Sure there is. GPUs and CPUs are still not built using the same memory model. Why do you think stuff like HSA was so hard to begin with? And that was using an iGPU sharing the same physical memory block as the CPU. Also, do you think anyone wants to build a NIC or SATA controller with a CPU-like cache structure that would be necessary for it to operate in a cache-coherent device ecosystem like CCIX? That would increase costs by quite a bit on otherwise-simple devices.
 
Last edited:

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,249
136
I do think Ryzen 2 will be an excellent product. But you're right. The speculation is running rampant. Temper your expectations folks. Leave some room to be pleasantly surprised. Allow yourself to be happy. ;-)

It is a speculation thread.

Not long before the cat's out of the bag anyway.
 

EXCellR8

Diamond Member
Sep 1, 2010
3,982
839
136
Honestly, a higher-clocked 8C/16T chip with lower TDP would be just fine by me. Don't really need more cores for what I use my system for. I really don't want to upgrade from X370 for another year or so, but who knows what AMD is cooking up. Could be great, could be meh, and could be something completely unexpected too.
 

JPB

Diamond Member
Jul 4, 2005
4,064
89
91


AMD is heading into CES 2019 with some pretty big rumors of the upcoming Zen 2-based processors, a family of CPUs that will be led by the flagship Ryzen 9 3800X as a huge 16C/32T beast of a processor.

Read more: https://www.tweaktown.com/news/64311/amd-ryzen-7-3700x-12c-24t-huge-5ghz-9900k-killer/index.html

The new rumors are coming from a leak by a Russian retailer that suggests the Ryzen 9 3800X rocks 16 cores and 32 threads of CPU performance with a base clock of 3.9GHz and turbo clock of 4.7GHz, all achieved at just 125W. There will be two 12C/24T processors in the new 3000 series, the Ryzen 7 3700X and Ryzen 7 3700, with the Ryzen 7 3700X rocking 5GHz turbo clocks at 105W.

I think the Ryzen 7 3700X is going to be one of the more important CPUs in the Zen 2 processor lineup: with 12C/24T rolling out at 5GHz we should see some good overclocks of up to 5.2GHz and 5.3GHz, or even more with better cooling. All of this for 105W is going to be great. The non-X Ryzen 7 3700 drags the base clock down 400MHz to 3.8GHz and the turbo clock down 400MHz from 5GHz to 4.6GHz, in exchange for a 10W TDP saving, down to 95W from the 3700X's 105W.

The leap that the rumored Ryzen 9 3800X makes over the current Ryzen 7 2700X flagship CPU is huge, as the Ryzen 7 2700X is 'just' an 8C/16T processor while the purported Ryzen 9 3800X is a huge 16C/32T beast. We're talking first-gen Ryzen Threadripper 1950X amounts of grunt, at higher clock speeds.

The Ryzen 7 3700X is better than any previous-gen Ryzen CPU, hitting 5GHz out of the gate with its huge 12C/24T. This offers next-level core counts, improved IPC from Zen 2, and the first Zen-based CPU to rock 5GHz out of the box.

From there we have the Ryzen 5 family of processors that has been upgraded to 8C/16T as the flagship Ryzen 5 3600X and Ryzen 5 3600, with 4.8GHz turbo clocks on the Ryzen 5 3600X and 4.4GHz on the 3600.

The overclocked Ryzen 5 3600X uses 30W more power to have its CPU clocked 400MHz higher with a 95W purported TDP, while the 4.4GHz Ryzen 5 3600 only consumes 65W. 65W is also what the new Ryzen 3 3300X will use, with it rolling out as a 6C/12T processor at 4.3GHz, while the Ryzen 3 3300 will reportedly roll out at 4GHz and use just 50W of power.

Read more: https://www.tweaktown.com/news/64311/amd-ryzen-7-3700x-12c-24t-huge-5ghz-9900k-killer/index.html
 
Last edited:

ub4ty

Senior member
Jun 21, 2017
749
898
96
Don't be blind. Cavium could license the core from Apple and build a 48c or 64c version out of it and smash records for performance.
Cavium already has product :
https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/

The barrier to adoption is that it's non-x86, is it not? I'm not sure what Apple's custom ARM IP has to do w/ a server chip as Apple's ARM complex is tuned mainly for specific application flows (mobile), which is why it's a screamer in that domain. Then comes the realization of the pricing Apple would apply to it. Apple is facing headwinds because they have ridiculous margins on mature products and have hit a price point no one wants to entertain. Then there's this: https://www.reuters.com/article/us-...e-apple-iphone-sales-in-germany-idUSKCN1OX12O

I stand by my statement that you eventually pay for milking customers.

They could even get core size/cost down by stripping out the fixed-function hardware and GPU (unless they wanted it in there for compute purposes, which would be weird/interesting). They could also stick to the Vortex cores. And that's 2018's chip. If Cavium licensed their designs today, we wouldn't see product for another 2-3 years. We would be facing the next A-series iteration. Like I said, Apple isn't the one that's going to give anyone trouble. Someone is going to either chase Qualcomm into the notebook/desktop space or go after the server room. Apple has the best of the ARM world right now, and Apple would be insane not to license it to someone operating outside of their core market.
My understanding is that companies other than Apple already have competent ARM complexes they can leverage. I'm not sure why someone would go out of their way to license a mobile first platform with tons of fixed accelerator functions that they'd have to strip out. Regarding Qualcomm, this just happened today: https://www.reuters.com/article/us-...e-apple-iphone-sales-in-germany-idUSKCN1OX12O
https://www.reuters.com/article/us-...e-apple-iphone-sales-in-germany-idUSKCN1OX12O

Apple would be better off proving the versatility of their ARM IP within their own ecosystem
by extending that chip into a range of MBPs. Again though, just like Intel, people are getting vocally tired of their milking. Popping an even cheaper arm processor in a mbp and charging $1,500... People are going to get tired of that.

All I'm saying is, HSA as we know it is dead. If they work towards an SVM model, it will be in a different format. And it will probably be something less proprietary/better supported across the industry.
Tomato tomatoe, essentially I/O devices need to be freed up from the CPU and have direct access to ranges of memory/storage and be able to directly work with each other on an open standard. Everything is going towards this. This paradigm is where computing goes next. Gate shrink beyond 7nm will be costly. We're up to 64 cores now. Storage and memory need to see some innovation and peripherals need to be freed from prison. That's where the slowdowns are. My point was that the core war is almost over. AMD won. Intel is MIA until 2020/2021. By then, the next stage of focus is on interconnects/new io/mem paradigms.

CUDA isn't even a factor here. CUDA does have support for UVM (Unified Virtual Memory), but nVidia doesn't have any product on the market that would really benefit from existing SVM models (APUs) outside of their APUs that are . . . well let's just say that they're operating in different ecosystems than AMD and Intel and leave it at that.
The problem is that OpenCL is a nightmare to develop on and AMD has problems when it comes to driver/dev support, likely due to minimal staff. This kills off any adoption. CUDA is a factor because it's what you can far more easily develop on now. Given this, there are a lot of things left on the table which Nvidia keeps segmented in pro hardware that need to be freed up.

As far as OpenCL being a joke goes . . . I don't even know why you would say that? The OpenCL2.0 ecosystem is functional. AMD's driver stack is just restricted, that's all. Maybe you just like CUDA better (I think a lot of developers do, actually). But anything you would want to do with SVM on Raven Ridge is doable under Linux right now, assuming your end-user has amdkfd loaded. Not sure if that's standard to distros yet or if you have to use a hacked kernel.
It's a mess and "restricted" is code word for that. No mention of how, when you get deep into development, things just aren't there. I considered it for a while and abandoned it. Maybe at some point it will mature and I will be more than happy to jump on board. Apple dropped it as well.

If we consumers are ever to see SVM or UVM compute becoming "a thing" in our software, the first thing that will have to happen is that Windows will need to support it. At least theoretically, nVidia could already support UVM with their consumer dGPUs assuming someone would improve PCIe performance to the point that we would ever want to use it. But until there is a kfd under Windows, we won't see SVM or UVM support in Windows for any of AMD's products.
https://www.zdnet.com/article/microsoft-releases-its-first-linux-product/
Microsoft knows what time it is. Their crappy NUMA implementation couldn't even properly schedule tasks on the high-core-count Threadripper SKUs. No one uses them as a benchmark for high-end computing support. It's all Linux.

Now you are maybe talking about UVM. I think UVM is about treating the memory pool of all devices attached to a system as one large memory pool, and allowing reads and writes to every device in the system. This becomes tricky since you may be crossing clock domains and dealing with heavy latency. Even without heavy latency, you don't want CPU-critical data - stuff you might need to load into cache at some point - winding up in the memory pool of one of your PCIe devices. At the same time, you want to try to keep all your data in one exact memory block to reduce writes and writebacks. SVM solves this problem by forcing the GPU and CPU to share one memory pool. UVM makes things a lot more complicated. And don't expect future consumer dGPUs to ship without memory. That is not going to happen.
I linked to a talk. It centered on CCIX, GenZ, and OpenCAPI.
CCIX is likely coming with Rome w/ peripheral support.
So, this year... more open access is becoming a thing.

Architecturally things are shifting towards this. I can currently run a software stack that exposes an NVMe drive on one computer to another over a NIC w/o involving the CPU: NVMe-oF. Everything is breaking away from the CPU-centric model and going towards a more distributed model. There are tons of names for movements towards this but there is no denying this shift.

There are proprietary high-end interconnects out there right now that essentially let CPUs do that already, which is what they use in supercomputers. Note that even using those interconnects - or stuff like nVidia's NVLink - you still can't put dGPUs into the system as "equal partners" since their memory controllers and cache structures are different. There's no cache coherence. There will have to be a major change in the way they build dGPUs in the future if they want them to have cache coherency with CPUs.

https://www.servethehome.com/xilinx-alveo-u280-launched-possibly-with-amd-epyc-ccix-support/
Can you comment on CCIX over PCIe Gen4 in upcoming EPYC processors?

Sure there is. GPUs and CPUs are still not built using the same memory model. Why do you think stuff like HSA was so hard to begin with? And that was using an iGPU sharing the same physical memory block as the CPU. Also, do you think anyone wants to build a NIC or SATA controller with a CPU-like cache structure that would be necessary for it to operate in a cache-coherent device ecosystem like CCIX? That would increase costs by quite a bit on otherwise-simple devices.

https://semiaccurate.com/2017/10/26/arm-talks-ccix-details/
AMD could graft on CCIX coherency ports on the updated Vega to give Radeon Instinct cards something that looks and smells like NVLink ports on the high-end Tesla cards; we expect that the updated Vega GPU will plug into PCI-Express 4.0 x16 slots, which means a lot more bandwidth between the CPU and the GPU.


https://www.enterprisetech.com/2018...ms-the-next-big-thing-in-data-center-servers/
I thought this is what all the excitement was about with EPYC?

Mellanox/ARM/AMD/Nvidia etc. are a part of all the consortiums centered on this and we might even see it being launched this year... Of course this is where things are going. Intel is hilariously not present. My point was that this is the future of computing. We're hitting the limit of core count and gate shrink. The next processors will focus on interconnect/memory/storage interface innovation and the overarching architecture will be less CPU-centric.

SmartNics are what everyone's working on in order to bypass and negate the need for a CPU.

No CPU involved... Just smart nics attached to nvme


So yes, that's exactly what's happening and planned.

My inquiry centered on literally what is about to fall out of the back of the van at AMD because it seems like they are shipping their new EPYC processor and high end video cards with a CCIX interface this year. HSA isn't dead. Their whole lineup and roadmap is an iterative progression to it.
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,805
11,161
136

Cavium's product is their core interconnect tech. They use standard ARM cores otherwise. But . . .

The barrier to adoption is that it's non-x86, is it not?

Yes and no. It's not like the 90s/aughties desktop world where everyone was holding on to old software. Stuff can be (and often is) recompiled, especially if you're going to run it on thousands of machines at once. The software ecosystem needs time to grow, but compelling hardware will accelerate the process. Cavium hasn't really brought that to the table. Yet.

I'm not sure what Apple's custom ARM IP has to do w/ a server chip as Apple's ARM complex is tuned mainly for specific application flows (mobile)

That is the curious thing about Apple's Vortex. It performs so well outside of its intended ecosystem. Take a look around at some of the A12X benchmarks that are not Geekbench 4. It is a bit surprising. Cavium could ditch their current vanilla ARM cores for Vortex cores, maybe utilize Apple's GPU core (if they needed/wanted it), and use their interconnect experience to produce a monstrosity.

Then comes the realization of the pricing Apple would apply to it.

Apple has no skin in the server market. None. They wouldn't devalue their brand by cutting a backroom deal with a company like Cavium for an n% licensing fee and letting Cavium charge what they will. Apple is a lifestyle company. They charge you for the experience and the brand. That doesn't come into play when they license something.

I stand by my statement that you eventually pay for milking customers.

Their reward for milking customers is that they now have possibly the best CPU design team in the world, using the best fab in the world (TSMC).


I'm not sure why someone would go out of their way to license a mobile first platform with tons of fixed accelerator functions that they'd have to strip out.

All of the mobile SoCs have "fixed accelerator functions" they'd have to strip out. If you want fast and you want ARM, you're looking at mobile-first designs. That's why you license the design and rebuild your core with that IP rather than trying to drag-drop the cores directly into other application environments.

Apple would be better off proving the versatility of their ARM IP within their own ecosystem

Why? They already have the fastest mobile SoC out there. Nobody really cares. That's not why the remaining phone/tablet buyers choose their devices anyway.

by extending that chip into a range of MBPs. Again though, just like Intel, people are getting vocally tired of their milking. Popping an even cheaper arm processor in a mbp and charging $1,500... People are going to get tired of that.

Apple is probably going to replace Intel's chips with A-series chips in the MacBook series soon enough. That is exactly where Qualcomm is going with their products, into Win10/Win10S machines. Qualcomm gave up on their server chip, though. Apple has a chance to hit Intel and AMD from both sides, if they play their cards right.

Tomato tomatoe

No, not "tomato tomatoe". If you want devices working as equal partners on the CPU interconnect, then you need at least a small CPU with full cache coherency and enough bus links/interconnect links to function properly. Just equipping a server-class CPU with extra interconnects typically jacks up the price by 10x (remember how expensive 8-series Opterons were in the x2 days versus the 1-series Opterons? It was all down to HT links). Even a barebones client system with a dGPU and one CPU would still need all the hardware that you see today in a 2P system, and that makes things expensive. You can argue that the expense is superficial (markups from the manufacturer) and I might partially agree. Now imagine a standard 3-slot PCIe system with one CPU socket in a system that provides low latency and cache coherency between the CPU and all PCIe devices. Now you're dealing with a 4P system. 4P CPUs often cost $5000 or more. Again, some of that is market segmentation, but not all of it is . . .

essentially I/O devices need to be freed up from the CPU and have direct access to ranges of memory/storage and be able to directly work with each other on an open standard.

Sounds great for the server room. For us desktop losers? Maybe not.

It's a mess and "restricted" is code word for that.

Well shoot I coulda told you that HSA is a mess. OpenCL2.0 actually works though. Nobody particularly wants to use the SVM function since it only supports AMD and Intel APUs, and mostly only under Linux. Plus Intel's OpenCL drivers are kind of a mess for their iGPUs. Lots of people use standard OpenCL1.1 functions (like a ton of mining software programmers).

CCIX is likely coming with Rome w/ peripheral support.

Like I said, server room. Where they already spend millions per year on interconnect technology alone. Wake me when someone tries bringing it to the desktop.

https://semiaccurate.com/2017/10/26/arm-talks-ccix-details/
AMD could graft on CCIX coherency ports on the updated Vega to give Radeon Instinct cards something that looks and smells like NVLink ports on the high-end Tesla cards; we expect that the updated Vega GPU will plug into PCI-Express 4.0 x16 slots, which means a lot more bandwidth between the CPU and the GPU.


More bandwidth is always welcome. AMD already has standards that permitted HT links through slots (HTX) so I see no reason why they couldn't update the standard and contribute it to the CCIX standard (IF is just an update of HT anyway). Previously the only thing people used HTX slots for were InfiniBand controllers. Gee, wonder why . . .

Also, we never saw HTX on the desktop. I was sorely disappointed by that. HTX video cards would have been the hotness.

No CPU involved... Just smart nics attached to nvme

In the case of storage devices, you're not actually performing any compute work, so coherency isn't an issue. In theory the controller chips already on NVMe devices could communicate directly with one another. If you want GPUs and similar being exposed as compute devices, things get a lot more complicated. It would also be potentially messy to try to have non-compute I/O devices sharing the same interconnect as the CPUs and GPUs, unless they could be flagged as non-compute devices so that they wouldn't be involved in any snoop or redundant write activity.

The problem I see here is that traditionally, on the desktop, you want your expansion slots to handle any kind of device you plug in there: dGPU, NIC, controller card, sound card, whatever. That's what ISA, PCI, and PCIe have all been about (AGP not so much). With CCIX it looks like we want some devices (notably dGPUs) to maintain cache coherency with other "partner" devices (CPUs). If I stay in the spirit of the existing PCIe paradigm, I at least have to wire all my full-length PCIe slots like I'm running an MP system on the off chance that I'm gonna use multiple dGPUs. That comes at a price, usually in the form of additional traces and socket pinouts. Maybe they can find ways around that, but I have my doubts. MP server boards are beasts. Now imagine trying to sell that tech to board OEMs that want to sell $90 single-socket systems with maybe 3 full-length PCIe slots. Not gonna happen.

The other option is to have maybe one full-length PCIe slot wired for a coherent bus/interconnect link and the rest not, making multiGPU basically as bad (if not worse) on client systems as it is today, and potentially breaking the whole point of the standard in the first place.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
The problem I see here is ..

Thank you for sticking with my commentary and providing detailed replies. It was very informative and provides me some pointers to look into. I do agree that ARM is going to play a much more prominent role in the future beyond their current ecosystem. On HSA, I had a rough idea and picture but I'm definitely technically out of my depth in terms of navigating a discussion. Thank you for clarifying and correcting things for me. I also seemed to have ignored the important thing you pointed out which is how much the board and board components would run to achieve this.

To loop back around, where do you think AMD is possibly going to go w/ this Rome launch (PCIE 4.0/CCIX)? What would you compare this to w.r.t existing tech? Is it like Nvlink over PCIE 4.0? Do you think they will expose this on AM4? Threadripper? Where could this go down the line? Have any thoughts on NVMe 1.4 and beyond and where that's headed and how that might integrate with AMD's roadmap? Any new sys memory paradigms on the horizon? When can we expect something beyond DDR4 sys mem?

I really look forward to the future of computing. So many exciting things happening in enterprise that I want to become cheaper and more widely available. I just feel like so much potential is locked out w/ these pro segment partitions.

Has been a wild ride learning about all of these interesting technologies that artificially get locked out of the consumer space. Definitely going to do some reading on Cavium's interconnect IP.
 
Last edited:

moonbogg

Lifer
Jan 8, 2011
10,637
3,095
136
Is AMD going to fix whatever made Ryzen slower in gaming despite all else being basically equal? Same clocks, same RAM, same number of cores and threads, yet Ryzen was often significantly slower than Broadwell-E despite all these things being equal. Sometimes it was a nasty difference and that hasn't changed enough even to this day, so I'm wondering if these new CPUs will have some changes that mitigate that odd disparity in gaming performance. I'm just being real here from a performance gaming perspective.
I see current benchmarks from the likes of gamersnexus doing yet another revisit of the 2600K, and guess what? A Ryzen 7 2700X still can't convincingly beat an overclocked 2600K. They are in the same general performance bracket down on the bottom half of the charts, consistently, while every Intel CPU starting at Haswell and newer occupies the top half of the performance chart. Can AMD fix whatever is causing THAT to happen? Because if they add 15% IPC but don't fix whatever the hell is causing that performance issue, then they still won't be able to reliably catch something like an overclocked Haswell or newer.
 

tamz_msc

Diamond Member
Jan 5, 2017
3,865
3,729
136
Is AMD going to fix whatever made Ryzen slower in gaming despite all else being basically equal? Same clocks, same RAM, same number of cores and threads, yet Ryzen was often significantly slower than Broadwell-E despite all these things being equal. Sometimes it was a nasty difference and that hasn't changed enough even to this day, so I'm wondering if these new CPUs will have some changes that mitigate that odd disparity in gaming performance. I'm just being real here from a performance gaming perspective.
I see current benchmarks from the likes of gamersnexus doing yet another revisit of the 2600K, and guess what? A Ryzen 7 2700X still can't convincingly beat an overclocked 2600K. They are in the same general performance bracket down on the bottom half of the charts, consistently, while every Intel CPU starting at Haswell and newer occupies the top half of the performance chart. Can AMD fix whatever is causing THAT to happen? Because if they add 15% IPC but don't fix whatever the hell is causing that performance issue, then they still won't be able to reliably catch something like an overclocked Haswell or newer.
That performance discrepancy happens due to higher memory latency, the cross-CCX latency penalty from threads being shuffled across the CCXs, and the L3$ being a victim cache. AMD has not provided any detailed architectural overview of Zen 2, so we don't know how these issues have been addressed, if they've been addressed at all.
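To make the "threads being shuffled across the CCXs" part concrete, here's a hedged illustration (Linux only, my example rather than anything from the post; the 0-3 mapping is an assumption you'd verify with lscpu or lstopo): pinning a latency-sensitive thread to the logical CPUs of a single CCX keeps its working set in that CCX's L3 and avoids paying the cross-CCX hop.

Code:
// Hedged sketch: pin the current thread to one CCX so the scheduler can't bounce it.
// Build: g++ pin_ccx.cpp -pthread
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif
#include <pthread.h>
#include <sched.h>
#include <cstdio>

int main() {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = 0; cpu < 4; ++cpu)       // assumption: logical CPUs 0-3 share a CCX (and its L3)
        CPU_SET(cpu, &set);
    if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set) != 0) {
        fprintf(stderr, "pthread_setaffinity_np failed\n");
        return 1;
    }
    printf("pinned to CPUs 0-3; cross-CCX migrations (and their latency hit) avoided\n");
    return 0;
}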
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136
Is AMD going to fix whatever made Ryzen slower in gaming despite all else being basically equal? Same clocks, same RAM, same number of cores and threads, yet Ryzen was often significantly slower than Broadwell-E despite all these things being equal. Sometimes it was a nasty difference and that hasn't changed enough even to this day, so I'm wondering if these new CPUs will have some changes that mitigate that odd disparity in gaming performance. I'm just being real here from a performance gaming perspective.
I see current benchmarks from the likes of gamersnexus doing yet another revisit of the 2600K, and guess what? A Ryzen 7 2700X still can't convincingly beat an overclocked 2600K. They are in the same general performance bracket down on the bottom half of the charts, consistently, while every Intel CPU starting at Haswell and newer occupies the top half of the performance chart. Can AMD fix whatever is causing THAT to happen? Because if they add 15% IPC but don't fix whatever the hell is causing that performance issue, then they still won't be able to reliably catch something like an overclocked Haswell or newer.

Nobody knows but my guess is nope. AMD has ALWAYS had problems with latency compared to Intel and that hurts gaming performance significantly. It should be better than Zen+, but I think halo gamers will be disappointed with the gaming performance from Zen2. The 9900k will retain the gaming crown but Zen2 should narrow the gap some.
 

DrMrLordX

Lifer
Apr 27, 2000
21,805
11,161
136
Thank you for sticking with my commentary and providing detailed replies. It was very informative and provides me some pointers to look into. I do agree that ARM is going to play a much more prominent role in the future beyond their current ecosystem. On HSA, I had a rough idea and picture but I'm definitely technically out of my depth in terms of navigating a discussion. Thank you for clarifying and correcting things for me. I also seemed to have ignored the important thing you pointed out which is how much the board and board components would run to achieve this.

Not a problem. HSA is still pretty obscure really, and I only know about some of the issues surrounding it from tinkering with my Kaveri back in the day and trying to launch HSA kernels using a Java toolchain. It was some wonky stuff let me tell you. Regardless, lots of the underlying tech opened up all the issues surrounding SVM and (sort of) UVM and why HSA could have been so important. When it worked, it was awesome. Sort of.

To loop back around, where do you think AMD is possibly going to go w/ this Rome launch (PCIE 4.0/CCIX)?

I can only offer speculation.

First they need to lay the groundwork for future tech: higher bandwidth, lower latency. The old dream AMD laid out back in the late x2/Opteron era - before they had even launched their first APU - was the Fusion concept. The Future is Fusion! Etc. And key to that strategy was HTX. The entire idea behind HTX was making as many of the computer's slots and sockets as possible equal partners in the Hypertransport hierarchy. So a 4P board with 4 HTX slots would be the same as an 8P board, effectively. There were all kinds of fun ideas there, like . . . having consumer boards in 2P configurations, allowing you to plug a "naked" GPU into the second socket and letting it use main memory like a CPU! And stuff like that. Or you could have an HTX CPU daughtercard and add a CPU or what have you.

It never materialized except on some server boards. What is interesting about HTX as it was implemented back in the day is that the slots themselves were nothing but PCIe slots, albeit keyed in reverse (so HTX cards and PCIe cards were not physically interchangeable). What I never did figure out is how much extra board infrastructure was required to service those slots at full speed. HT-over-PCIe was a thing, and given the slot, it's tempting to think that HTX slots were exactly that: HT over PCIe. I'm too ignorant to know if that was the case. If it was, then the amount of board infrastructure (traces, layers, etc) required to service the slot would be about the same as for a full-length PCIe card. Which really wouldn't be bad, even for us consumers. All you're doing is bypassing the PCIe controller (wherever it may be) and forcing the connected device to handle its own traffic over the Hypertransport. Any device that goes into such a slot would have to have enough processing power to handle all that, and of course have enough HT links (on old AMD systems, it was 1 link for 1P, 2 links for 2P, or 3 links for 4-8P, if I recall correctly).

So looking to the future with Rome, PCIe4, and CCIX, it may be HTX all over again. I think the idea would be to make PCIe slots IF-capable, so if the right device plugged into the slot, you can bypass the PCIe controller in the CPU/CPUs and work as an IF device instead. Or the slots may have to be dedicated to IF functionality, I don't know. In theory this would require no more board support than that required for standard PCIe4.0. Put a dGPU in there equipped with a tiny ARM core that has enough cache (notably L3 or L4, or whatever it is Rome systems will use for cache coherence) and now you can treat your GPU as a full member of the IF. Unfortunately, dGPUs still have their own memory banks, and they probably will for the foreseeable future due to memory performance concerns. You would not want a "naked" dGPU trying to use system memory, even if it would be cool to let the dGPU manipulate data in RAM without having to do a lot of redundant copying. It is possible that each dGPU's VRAM bank can be treated as a separate NUMA node, and that might bypass the problem . . . but now you have different NUMA nodes with RAM that sports non-uniform latency and bandwidth. Sure we have MP systems with multiple memory controllers trying to balance reads and writes to minimize link traffic and make sure that data gets exactly where it needs to go, but they do so with multiple memory controllers with uniform access latency and read/write speeds. Throw some dGPUs into the mix and you have something quite different. They might have to tag link members based on which one is a CPU, dGPU, or "other" and handle traffic accordingly. Done correctly, AMD should be able to port all their existing SVM tools to CCIX systems, giving us the same basic benefit in the future that you today can have running OpenCL2.0 applications on a Raven Ridge system.

Outside of possible NUMA madness, there's also the issue of whether those IF-capable PCIe slots would really be as cheap to implement as I think. It's possible that it might not be so damn cheap. And you are still talking about putting server-class hardware on every device that has to plug into the system. For Rome, there is no problem.

What would you compare this to w.r.t existing tech? Is it like Nvlink over PCIE 4.0?

I know less about NVLink. I'm interested in it (and profoundly disappointed that nVidia apparently hasn't chosen to open up NVLink to the PCIe consortium). My understanding is that NVLink is configured to facilitate UVM support under CUDA. The entire idea behind UVM is basically . . . treat all memory of all devices connected to each other as a memory pool, and let devices read and write to other devices. The bandwidth is high and the latency is low, so it's functional. nVidia does not control the underlying platform of any system where its dGPUs operate, nor does it control the underlying CPU tech. So they implement NVLink at the pleasure of whoever chooses to host it - usually IBM POWER systems, or custom Intel boards (maybe?). Interestingly enough, it competes with OpenCAPI, which is featured on OpenPOWER systems. Or if it isn't in use today, it may see use in the future. I haven't heard much about real-world implementations of CAPI devices yet.

For more on UVM, check this out as an example:

https://devblogs.nvidia.com/unified-memory-cuda-beginners/

At first glance, it doesn't really seem that different from SVM, but it really is, because there is no need in the UVM model to a) ensure that all compute devices have equal access to the same memory controller or b) ensure that all compute devices are working together in a coherent NUMA environment. One of the critical lines from the above blog post is:

When code running on a CPU or GPU accesses data allocated this way (often called CUDA managed data), the CUDA system software and/or the hardware takes care of migrating memory pages to the memory of the accessing processor.

Which is pretty clever, if you think about it. NV needs tech like that since they can't control any part of the underlying platform on which their cards run. So they handle it in software with the aid of their GPUs. I still expect AMD's CCIX implementations to rely on hardware-level solutions, like NUMA nodes with atypical memory performance or something weird like that. From a performance perspective, doing it in hardware should deliver superior results.
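To make that quoted migration behaviour concrete, here's a minimal sketch (my example, not from the blog post; it uses cuBLAS in place of a hand-written kernel so everything stays plain host-side C++, and it assumes a CUDA-capable box with the toolkit installed):

Code:
// Minimal managed-memory (UVM) sketch. Build (assumption): g++ uvm_sketch.cpp -lcudart -lcublas
#include <cuda_runtime.h>
#include <cublas_v2.h>
#include <cstdio>

int main() {
    const int n = 1 << 20;
    float* x = nullptr;
    // One pointer usable from both CPU and GPU; pages migrate to whoever touches them.
    cudaMallocManaged(&x, n * sizeof(float));

    for (int i = 0; i < n; ++i) x[i] = 1.0f;   // CPU write: pages live in system RAM

    cublasHandle_t h;
    cublasCreate(&h);
    const float alpha = 3.0f;
    cublasSscal(h, n, &alpha, x, 1);           // GPU access: pages migrate to the GPU
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);               // CPU read: pages migrate back; expect 3.0
    cublasDestroy(h);
    cudaFree(x);
    return 0;
}

The same x pointer is written by the CPU, scaled on the GPU, and read back by the CPU, with the driver/hardware moving the pages to whichever processor touches them, which is exactly the behaviour the quoted line describes.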

Do you think they will expose this on AM4?

It's all a matter of cost/benefit. As I stated, AMD hasn't even rolled out the amdkfd on Windows yet. No kernel fusion driver means no SVM functionality, which is probably going to be core to everything AMD tries to do. Next you will have to get motherboard OEMs to support PCIe4.0. That may be cost-prohibitive at some price levels, for a while at least. After that point AMD has to provide usable libraries so developers can start using all this tech seamlessly as a part of their workflow. I *think* they've already got the ball started with Mantle/Vulkan and maybe DX12 for game devs. Outside of that, everything they have is pretty janky. OpenMP to the rescue? I dunno. Ultimately it will be developers and software that drive the whole thing. It takes at least one shop deciding to offload a bunch of stuff to a GPU to get the ball rolling. LibreOffice can already use SVM to offload work to an iGPU on Kaveri, Carrizo, and Raven Ridge, but that's just one tiny example.

Threadripper? Where could this go down the line?

Threadripper is a question mark. Again, though, it'll probably be down to the software. If AMD rolls out all the tools to clean up their OpenCL2.0/SVM compute model AND provides the hardware to support it with their dGPUs, someone will figure out how to use it to their advantage. Think small tight loops spawning in dozens of different threads, full of fp calculations being sent asynchronously to the dGPU and returned out-of-order with extreme low latency. That is something you can't really do with OpenCL 1.x or even CUDA (I don't think). With OpenCL2.0 it might be possible.

As a precaution, though, I will point out that AMD has invested in making Zen2 wider on the SIMD front. And the example I give above is one where you might want to use a CPU's SIMD capabilities instead of trying to offload to a dGPU. So AMD is at least hedging their bets.

Have any thoughts on NVMe 1.4 and beyond and where that's headed and how that might integrate with AMD's roadmap?

Honestly, not really. I'm still hoping for people to stop using the m.2 form factor so damn much. But I probably won't get my way.

Any new sys memory paradigms on the horizon? When can we expect something beyond DDR4 sys mem?

HBM for everybody? DRAM's days are numbered thanks to the general inability of anyone to make it work on processes smaller than 10nm. So the switch will need to be made eventually to a memory standard that can work on 7nm, 5nm, or even 3nm.

Nobody knows but my guess is nope. AMD has ALWAYS had problems with latency compared to Intel and that hurts gaming performance significantly. It should be better than Zen+, but I think halo gamers will be disappointed with the gaming performance from Zen2. The 9900k will retain the gaming crown but Zen2 should narrow the gap some.

You mean since they introduced Zen, right? As @tamz_msc articulated, inter-CCX latency is one of the things that's punishing Zen and Zen+ in some games. Speed up the IF link and move all 8 cores to the same chiplet and things get interesting.
 
Last edited:

krumme

Diamond Member
Oct 9, 2009
5,956
1,595
136
Is AMD going to fix whatever made Ryzen slower in gaming despite all else being basically equal? Same clocks, same RAM, same number of cores and threads, yet Ryzen was often significantly slower than Broadwell-E despite all these things being equal. Sometimes it was a nasty difference and that hasn't changed enough even to this day, so I'm wondering if these new CPUs will have some changes that mitigate that odd disparity in gaming performance. I'm just being real here from a performance gaming perspective.
I see current benchmarks from the likes of gamersnexus doing yet another revisit of the 2600K, and guess what? A Ryzen 7 2700X still can't convincingly beat an overclocked 2600K. They are in the same general performance bracket down on the bottom half of the charts, consistently, while every Intel CPU starting at Haswell and newer occupies the top half of the performance chart. Can AMD fix whatever is causing THAT to happen? Because if they add 15% IPC but don't fix whatever the hell is causing that performance issue, then they still won't be able to reliably catch something like an overclocked Haswell or newer.
As an 8700K owner I think the relevant question for gamers at the end of the day is: at a certain price point, who gives the best CPU for gaming?

It's not about cores or memory latency, it's about performance at a certain price.

Tested in a realistic user scenario at 60 or 144hz.

To me the relevant criterion to test is: can you consistently, 99.99% of the time, stay above those 60/144Hz in your current game portfolio and the portfolio you will have in 3/xx years? Set your % targets. If not 99.99% minimums, what is your target then?

If you play e.g. Arma 3 it's only a question of memory latency and Intel will probably stay ahead for 144Hz gaming. Draw-call limitations for games based on older engines; they're still a huge part of the games out there.

If you play e.g. BF it becomes more muddy. At 6C/12T an Intel part will probably stay ahead for current BFV. But what happens in the coming crop of games? When the new consoles arrive?

Any 4-thread CPU is IMO next to unplayable in BF at 60Hz, and even an 8-thread part takes dips to 40Hz regularly. In games like Overwatch it's the same problem, just at 144Hz. The games become throughput-limited, and when a game becomes throughput-limited it just tanks.

Those people who got a 1600X instead of a 4C/4T Intel at the same price point in 2017 IMO for the most part have by far the better CPU, especially for the future. They can also upgrade now.

But Intel's portfolio is far better for gaming today than it was in 2017/early 2018. 6 cores is mainstream and it makes a huge difference. The 8400 is a darn nice processor for gaming and you can use cheap RAM.

The important thing IMO is: will AMD improve the memory controller so much that, for the mainstream, it is no longer the weak link? Is it good enough, so to speak?

With the 9900K you can just have your cake and eat it too, but it comes at a cost. If you can have 99% of the same gaming performance for half the price, that's a huge plus. The mainstream is IMO the most important market, even to me as an enthusiast who can buy whatever suits me.

So it's not only a matter of memory latency but also of how AMD chooses to price the processor.
Gaming performance matters hugely for the B2C market. IMO they need to price the 3600X at the same price as the current 2600X, if not lower, to take a step forward in market share.

They simply need to offer the same advantage vs Intel for the mainstream as they did in 2017. If not, who cares; as a gamer I will just get an Intel CPU.
 

NostaSeronx

Diamond Member
Sep 18, 2011
3,689
1,224
136
Any new sys memory paradigms on the horizon? When can we expect something beyond DDR4 sys mem?
Gen-Z Memory Module (ZMM); http://www.smartm.com/products/dram/Gen-Z_ZMM.asp
// It has some of their Gen-Z NVRAM/LRDIMM stuff in it for some reason.
It's a memory module that goes into the 4C connector... but not a lot of coverage.
https://genzconsortium.org/gen-z-scalable-connector-a-high-volume-universal-connector/

While it is DDR4 @ 96 Gigabytes... with the back-end being 192-bit + 24-bit ECC @ 1.6 GHz, so 38.4 Gigabyte/s. In a slot, close to 1/3rd to 1/2 of a single desktop DIMM.
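For reference, the arithmetic behind that 38.4 GB/s figure (data bits only, the 24 ECC bits excluded):

$$ \frac{192\ \text{bit}}{8\ \text{bit/B}} \times 1.6\ \text{GT/s} = 24\ \text{B} \times 1.6\ \text{GT/s} = 38.4\ \text{GB/s} $$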
 
Last edited:

DrMrLordX

Lifer
Apr 27, 2000
21,805
11,161
136
So the engineering samples have come out to play. Excellent.

edit: if you do a Google search on the SKU you'll find evidence that there's an 8c ES floating around as well.
 
Last edited:

exquisitechar

Senior member
Apr 18, 2017
666
904
136
1D1212BGMCWH2

If we were to decode it like previous ESes, the 121 would mean that the base clock is only 1.21GHz, but I think it's more likely that the meaning of these numbers has changed, especially because of the previous 8 core sample.
edit: if you do a Google search on the SKU you'll find evidence that there's an 8c ES floating around as well.
Yes, 5D0108BBM8SH2_37. This one would be even stranger for obvious reasons if those three numbers were about the clockspeed; we have only a 010 there! As for the 37 at the end, it could mean either a 3.7GHz base or boost (fixed frequency sample?), going purely off of the old way of decoding these, IIRC. So yeah, I don't think those three numbers have anything to do with the clock, they refer to something else. The clocks of this 12 core sample are therefore a mystery, since it doesn't even have the two numbers at the end like the 8 core did.
 
Last edited: