Question Speculation: RDNA3 + CDNA2 Architectures Thread



ModEl4

Member
Oct 14, 2019
71
33
61
Nearly all your assumptions are wrong; I shouldn't bother replying, but here are just two examples:
First, rumors for the top end NV part speak about 144 Ampere-like SM. 82 of such SM took 626mm^2 on the 8nm Samsung process with 28.3 billion transistors...
lol, 82 SM = 626mm^2? Then what do 84 SM give us? Isn't 628mm^2 the entire die?
Anyway, I suppose you meant 84 SM = 626mm^2, which is off, way, way off. Just by looking at the die shot you can see how insanely off your calculation is: 2mm^2 for the rest of the chip (command front end, setup, display, I/O, memory controllers, etc.)? What were you thinking?
Same for a "5nm" Samsung process, considering that by area and power it seems to be roughly on par with TSMC's 7nm: https://www.anandtech.com/show/16463/snapdragon-888-vs-exynos-2100-galaxy-s21-ultra/2
I am too lazy to read the article, but I bet it doesn't say anything about area, or if it does, it probably doesn't imply that 7nm TSMC is roughly on par with 5nm Samsung like you suggest. Another way, way off assumption. If you want to calculate, calculate with WikiChip's numbers, or just use Ian's.
If you are wondering, 8nm vs 10nm Samsung is about a 10% area reduction,
and I will give you another nearly 10% for what Nvidia was able to achieve in their tweaked 8nm Samsung node, so you can start your calculations (please don't). Also, don't compare AMD's quoted transistor numbers against Nvidia's; the method is flawed...
Edit: regarding the Navi 31 MCM, the 940mm^2 I gave was calculated with very specific assumptions and may be wrong; when we have more info about the specs I will redo it to check. But even if we take the rumored 800mm^2 number, that is 1.7x a possible 471mm^2 for Nvidia...
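For reference, the area figures in this post compound as follows (a quick sketch; the ~10% reductions and the 800/471 mm^2 die sizes are the rumored/estimated values quoted above, not official foundry data):

```python
# Compounding the two ~10% area-scaling figures quoted above.
area_10nm = 1.0
area_8nm = area_10nm * 0.90            # ~10% area reduction, Samsung 10nm -> 8nm
area_8nm_tweaked = area_8nm * 0.90     # another ~10% from Nvidia's tweaked 8nm
total_reduction = 1 - area_8nm_tweaked # ~19% total area reduction vs 10nm

# Sanity check on the edit's die-size comparison:
ratio = 800 / 471                      # ~1.7x, matching the figure quoted
```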
 
Last edited:
Reactions: exquisitechar

ModEl4

Member
Oct 14, 2019
71
33
61
I just want to clarify the 471mm^2 Ada Lovelace scenario I mentioned in my previous post: I only wanted to point out what Nvidia could do if it wanted to (to counter the doomsday scenarios about what Navi 31 will mean for NV). It is a possible scenario, but an unlikely one, mainly for three reasons:
A 471mm^2 5nm TSMC die with cache plus regular GDDR6, with clocks pushed moderately for the reference designs (in the sense of how high it is clocked versus what can be pushed, like the 980 Ti/1080 Ti), will certainly not need anywhere near 350W. I don't see the jump to 350W (3090) from all the previous 250W 80 Ti cards as a one-off, but as an indication of the future. Also, we are starting to see an acceleration in the industry, and there are leaks about the various possible contributions AI can bring to rendering, so the near-future path is already visible. I don't see how an Ampere-like (feature-wise) Ada Lovelace could support that role well and build for the future the way Turing did before it (in the sense that developers need feature-ready parts, an install base, proof of concept, and of course time to implement all the new stuff).
 

leoneazzurro

Golden Member
Jul 26, 2016
1,102
1,828
136
Nearly all your assumptions are wrong; I shouldn't bother replying, but here are just two examples:

lol, 82 SM = 626mm^2? Then what do 84 SM give us? Isn't 628mm^2 the entire die?
Anyway, I suppose you meant 84 SM = 626mm^2, which is off, way, way off. Just by looking at the die shot you can see how insanely off your calculation is: 2mm^2 for the rest of the chip (command front end, setup, display, I/O, memory controllers, etc.)? What were you thinking?

I am too lazy to read the article, but I bet it doesn't say anything about area, or if it does, it probably doesn't imply that 7nm TSMC is roughly on par with 5nm Samsung like you suggest. Another way, way off assumption. If you want to calculate, calculate with WikiChip's numbers, or just use Ian's.
If you are wondering, 8nm vs 10nm Samsung is about a 10% area reduction,
and I will give you another nearly 10% for what Nvidia was able to achieve in their tweaked 8nm Samsung node, so you can start your calculations (please don't). Also, don't compare AMD's quoted transistor numbers against Nvidia's; the method is flawed...
Edit: regarding the Navi 31 MCM, the 940mm^2 I gave was calculated with very specific assumptions and may be wrong; when we have more info about the specs I will redo it to check. But even if we take the rumored 800mm^2 number, that is 1.7x a possible 471mm^2 for Nvidia...

I considered the entire die, and scaling everything in proportion to the SMs is considerably more conservative (in favor of Nvidia) than you think, because, as I said, memory interfaces do not scale linearly; or rather, they practically don't scale at all. Basically, saying "eh, the SMs are a smaller part of the die" works against your argument, because the SMs and the front end are the only parts that will scale in line with the theoretical TSMC projection. Cache will scale worse (1.25x going from 7nm to 5nm, according to TSMC itself). The memory interface will not scale. Crossbars will scale worse, because when you are connecting more clients (e.g., SMs to the cache) the number of connections grows much faster than linearly. But if you unironically still believe a 144 SM (Ampere-like) die with a 384-bit GDDR6/6X bus will be 471 mm^2 on TSMC 5nm, I will leave you to your delusions; just don't be sad when they don't become reality.

PS: my calculations are based on actual die sizes and transistor numbers from both Nvidia and AMD, that is, actual transistor density achieved in reality (which is often far from what foundries state as a maximum density). Also, the comment in the article I quoted states explicitly: "In that context, Samsung's 5LPE process node improving power by 20% would mean they'd only be catching up with TSMC's 7nm nodes" (power and performance), and the article you yourself quoted states a peak transistor density that is only marginally higher on Samsung 5nm than TSMC 7nm, and that is not even referring to TSMC's 7nm EUV or 6nm.
 
Last edited:

Glo.

Diamond Member
Apr 25, 2015
5,924
4,978
136
If the rumors of a 440mm^2, 128-bit memory bus Navi 33 are true, then I think it is very difficult for Navi 32 (2x) to be below 690mm^2 and Navi 31 below 940mm^2 (including everything, Infinity Cache etc.). In David Wang's presentation there was a slide indicating a +50% performance/watt initial target for RDNA3 vs RDNA2 (not present in the Tech ARP video with the static split-screen slides...), and Rick Bergman also indicated performance/watt improvements similar to RDNA2 vs RDNA1.
I think AMD will be able to double (+100%) the performance/watt vs RDNA2 for some of the MCM designs, so at 420W TBP (which I think is the limit for AMD reference designs with 3x 8-pins) we will have a 2.8x maximum increase in performance vs the 6900 XT.
So according to TPU, Nvidia will need 2.6x the performance of the 3090 to match Navi 31 at 4K.
I guess many people don't realize that Nvidia (if the rumors are also true and we are talking about a 5nm TSMC based design with 18432 CUDA cores etc.) can do a minor Maxwell2->Pascal-style feature facelift, add 96MB of NV cache (lol), and still be at a 471mm^2 size EASILY on 5nm TSMC. And the 2.6x is quite doable for NV, depending on the clocks their designs can hit.
My prediction is 2 chip designs on 5nm TSMC and 3 designs on 5nm Samsung, probably all with NV cache (for TSMC it's certain imo), and we are talking about minor die area anyway.
1) TSMC 5nm: 192 ROPs / 18432 CC, 384-bit memory bus with 96MB NV cache
2) TSMC 5nm: 128 ROPs / 12288 CC, 256-bit memory bus with 64MB NV cache
etc...
The 3rd (Samsung-based) design will have higher than 3090 performance at $499, and the 2nd TSMC-based design will have around 1.7x higher performance, according to my napkin calculations. I'm a little worried about the size of the cache, but if AMD can support 1080p with 16MB for Navi 24, and 1440p with 32MB with only a 2-3% hit on average, then at 4K 64MB will be enough with a similarly minor hit (also, Nvidia will have much higher additional memory bandwidth through the regular bus vs AMD's designs, so maybe 96MB for 2880p and 64MB for 2160p will be enough with a minor hit). So similar performance in the end at maybe half the die size (or with sizeable feature gains otherwise)? Is this a success for AMD?
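The 2.8x figure above follows from simple arithmetic; here is a sketch (the 300W reference TBP for the 6900 XT is the board figure, and the 2x perf/W gain is the poster's assumption, not a confirmed spec):

```python
# Performance uplift from a bigger power budget combined with a perf/W gain.
def perf_uplift(new_tbp_w, ref_tbp_w, perf_per_watt_gain):
    """Relative performance vs the reference card."""
    return new_tbp_w / ref_tbp_w * perf_per_watt_gain

# 420W MCM design vs the 300W 6900 XT, assuming +100% perf/W:
uplift = perf_uplift(420, 300, 2.0)   # 2.8x
```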
Navi 33 is on 6 nm TSMC(essentially the same as RDNA2 GPUs) while 31 and 32 are made on TSMC N5 process.

So your assumption about N31 and 32 die sizes is wrong.

N5 TSMC is 55% density increase over N7. Apple went from around 80 mil xTors/mm2 to 134 mil xTors/mm2.

Even if AMD achieves "only" 80 mil xTors/mm2 - the Chiplets will be relatively small - under 300 mm2.
 

ModEl4

Member
Oct 14, 2019
71
33
61
my calculations are based on actual die sizes and transistor numbers from both Nvidia and AMD, that is, actual transistor density achieved in reality (which is often far from what foundries state as a maximum density)
Sure, like you said, 82 SM is 626mm^2, and Samsung 5nm has the same density as TSMC 7nm, but of course your calculations are somehow done with the "actual" die sizes 🤪
Basically, saying "eh, the SMs are a smaller part of the die" works against your argument, because the SMs and the front end are the only parts that will scale in line with the theoretical TSMC projection. Cache will scale worse (1.25x going from 7nm to 5nm, according to TSMC itself)
And in the new reply we have that the SMs and front end will scale in line with theoretical projections, but SRAM (cache) will scale worse?
Nothing scales according to the theoretical projections anymore, not even SRAM. But since your calculations are based on actual die sizes and transistor numbers from both Nvidia and AMD, I have a riddle for you:
Can you please tell me what density scaling Nvidia achieved going from the 14nm Samsung, 3,300M-transistor, 132mm^2 1050 Ti (their largest 14nm Samsung based chip) to the 8nm Samsung, 13,250M-transistor, 276mm^2 3060 (their smallest 8nm part whose die size and transistor count we know), and how far off it is from what I told you based on WikiChip's figures etc. (64 MTr/mm^2 for Nvidia's tweaked 8nm vs 33.32 MTr/mm^2 for 14nm)? Or is it (ironically?) exactly the same result for density scaling? Sure, the architecture is different for Pascal vs Ampere, but the architecture for Ada Lovelace is also probably going to be different vs Ampere (it's not Ampere 2, after all); it doesn't matter whether it will be feature-wise similar in that theoretical 471mm^2 scenario (or a minor Maxwell2->Pascal-style feature facelift). Look, you have your opinion and I have mine, and in any case the 471mm^2 is a theoretical exercise I did; as I explained in my other post, I think it will probably be bigger, with more features vs Ampere.
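For anyone who wants to check the riddle, the arithmetic with the numbers quoted in the post works out like this (a sketch using only the figures above; the function name is mine):

```python
# Achieved density scaling, Samsung 14nm (1050 Ti) -> Samsung 8nm (3060),
# using the transistor counts and die sizes quoted in the post.
def density(transistors_m, area_mm2):
    """Transistor density in MTr/mm^2."""
    return transistors_m / area_mm2

d_14nm = density(3300, 132)    # 1050 Ti: 3,300M transistors, 132 mm^2 -> 25.0
d_8nm = density(13250, 276)    # 3060: 13,250M transistors, 276 mm^2 -> ~48.0

achieved_scaling = d_8nm / d_14nm   # ~1.92x, from real chips
wikichip_scaling = 64 / 33.32       # ~1.92x, from the WikiChip figures cited
```

Both routes land on roughly 1.92x, which is the point of the riddle.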
I will leave you to your delusions; just don't be sad when they don't become reality.
I won't reply anymore, because you will make me see the "truth" and then I will be sad 😭
 

leoneazzurro

Golden Member
Jul 26, 2016
1,102
1,828
136
Sure, like you said, 82 SM is 626mm^2, and Samsung 5nm has the same density as TSMC 7nm, but of course your calculations are somehow done with the "actual" die sizes 🤪

And in the new reply we have that the SMs and front end will scale in line with theoretical projections, but SRAM (cache) will scale worse?
Nothing scales according to the theoretical projections anymore, not even SRAM. But since your calculations are based on actual die sizes and transistor numbers from both Nvidia and AMD, I have a riddle for you:
Can you please tell me what density scaling Nvidia achieved going from the 14nm Samsung, 3,300M-transistor, 132mm^2 1050 Ti (their largest 14nm Samsung based chip) to the 8nm Samsung, 13,250M-transistor, 276mm^2 3060 (their smallest 8nm part whose die size and transistor count we know), and how far off it is from what I told you based on WikiChip's figures etc. (64 MTr/mm^2 for Nvidia's tweaked 8nm vs 33.32 MTr/mm^2 for 14nm)? Or is it (ironically?) exactly the same result for density scaling? Sure, the architecture is different for Pascal vs Ampere, but the architecture for Ada Lovelace is also probably going to be different vs Ampere (it's not Ampere 2, after all); it doesn't matter whether it will be feature-wise similar in that theoretical 471mm^2 scenario (or a minor Maxwell2->Pascal-style feature facelift). Look, you have your opinion and I have mine, and in any case the 471mm^2 is a theoretical exercise I did; as I explained in my other post, I think it will probably be bigger, with more features vs Ampere.

I won't reply anymore, because you will make me see the "truth" and then I will be sad 😭

LOL. Continue to be delusional. Imagine TSMC itself giving the number for SRAM scaling, or you giving peak density numbers as a measure of actual chip size. You threw out a number without even explaining how you got there, and you pretend to lecture others. Again, LOL.
 

ModEl4

Member
Oct 14, 2019
71
33
61
Navi 33 is on 6 nm TSMC(essentially the same as RDNA2 GPUs) while 31 and 32 are made on TSMC N5 process.

So your assumption about N31 and 32 die sizes is wrong.

N5 TSMC is 55% density increase over N7. Apple went from around 80 mil xTors/mm2 to 134 mil xTors/mm2.

Even if AMD achieves "only" 80 mil xTors/mm2 - the Chiplets will be relatively small - under 300 mm2.
Yes, if Navi 33 is on 6nm and the Navi 31 and 32 MCMs are entirely on 5nm, then the assumption I made is wrong; that's why I clarified that my calculations were based on the leaks (6nm + 5nm) and on specific assumptions regarding specs, and that when I have more info I will redo them.
Even if it is 6nm + 5nm, I can still come up with different numbers depending on which part is 6nm and which is 5nm, how much cache Navi 33 has vs the Navi 32 and 31 MCMs, and which other parts AMD chooses to scale besides the CUs...
So you are saying that all the chip modules in the MCM are 5nm based?
 

Kepler_L2

Senior member
Sep 6, 2020
783
3,176
136
What are you expecting the memory bandwidth to be for the card? I don't know what Navi 31 will ultimately be, but a two- or three-die MCM approach seems to be bandied about as the most likely.

To limit mining performance to that of a 3060 Ti essentially means no more than a 256-bit memory bus. The 6600 XT does surprisingly well with only a 128-bit bus, but you'd need a lot more cache to keep that GPU from starving given the other rumored specs.

Even if the cost of the GPU is high, as long as it gets a certain MH/s per dollar it will float to that price. The only good news is that the price increase may be a smaller fraction of the initial MSRP. If it went from $2,000 to something like $2,400, no one would really complain, given that some AIB models would probably encroach on that price anyway.

If it had three dies, each with a 128-bit bus, the total memory bandwidth would be equivalent to a 3080 Ti, and those currently go for around $2,200 or more, so it certainly is possible for AMD to launch a card that wouldn't inflate in price too much.
Navi 31 256-bit, Navi 32 192-bit and Navi 33 128-bit. All using regular GDDR6 probably at 16Gbps. The MH/$ of these GPUs will be awful.
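Assuming plain GDDR6, those bus widths translate to peak bandwidth as follows (a back-of-envelope sketch; the 16 Gbps speed is the rumor above, and the helper function is mine):

```python
def gddr_bandwidth_gbs(bus_width_bits, speed_gbps):
    """Peak memory bandwidth in GB/s for a GDDR bus."""
    return bus_width_bits / 8 * speed_gbps

# Rumored configs at 16 Gbps GDDR6:
navi31 = gddr_bandwidth_gbs(256, 16)   # 512 GB/s
navi32 = gddr_bandwidth_gbs(192, 16)   # 384 GB/s
navi33 = gddr_bandwidth_gbs(128, 16)   # 256 GB/s

# For comparison, a 3060 Ti (256-bit, 14 Gbps GDDR6) has 448 GB/s.
comparison = gddr_bandwidth_gbs(256, 14)
```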
 

Saylick

Diamond Member
Sep 10, 2012
3,876
9,001
136
Navi 33 is on 6 nm TSMC(essentially the same as RDNA2 GPUs) while 31 and 32 are made on TSMC N5 process.

So your assumption about N31 and 32 die sizes is wrong.

N5 TSMC is 55% density increase over N7. Apple went from around 80 mil xTors/mm2 to 134 mil xTors/mm2.

Even if AMD achieves "only" 80 mil xTors/mm2 - the Chiplets will be relatively small - under 300 mm2.
Correct. At worst AMD will likely achieve around ~80 MTr/mm2 and at best ~90 MTr/mm2.

RDNA2 was 26.8B Xtors @ 520 mm2 (51.54 MTr/mm2). If they achieve Apple's density gain of 1.5x going from N7 to N5, you get 77 MTr/mm2 (~80). The 1.5x is a blended number accounting for cache, IO, logic, etc. We know that N31 chiplets will likely strip out the Infinity Cache so it's possible the density increase skews towards the 1.8x TSMC claims for logic scaling, in which case you end up with the upper limit of 92.78 MTr/mm2.

Just scaling down N21 from N7 to N5 assuming 1.5x scaling should result in a die size of ~350 mm2. If you strip out the Infinity Cache from N21 (~80 mm2), the remainder is ~440 mm2, which should be mostly logic and IO. Scale that down by 1.8x and you're left with 244 mm2. Scale that back up by 1.5x to account for the 50% more CUs it's rumored to have and you're back in the ~350 mm2 range.
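The die-size arithmetic above can be sketched as follows (all inputs are the claimed or rumored figures quoted in the post, not confirmed specs):

```python
# Rough N31-GCD size estimate following the reasoning above.
n21_area = 520.0      # mm^2, Navi 21 on N7
n21_xtors_m = 26800   # MTr (26.8B transistors)
cache_area = 80.0     # mm^2, estimated Infinity Cache area on N21

density_n7 = n21_xtors_m / n21_area    # ~51.5 MTr/mm^2

# Naive shrink with the blended 1.5x N7 -> N5 density gain:
naive_n5 = n21_area / 1.5              # ~347 mm^2

# Strip the cache, shrink the rest by TSMC's 1.8x logic scaling,
# then grow by 1.5x for the rumored 50% more CUs:
logic_io = n21_area - cache_area       # ~440 mm^2
shrunk = logic_io / 1.8                # ~244 mm^2
n31_est = shrunk * 1.5                 # ~367 mm^2, back in the ~350 mm^2 range
```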

Edit: Revised die area of Infinity Cache from 128mm2 to 80mm2.
 
Last edited:

Mopetar

Diamond Member
Jan 31, 2011
8,334
7,381
136
Navi 33 is on 6 nm TSMC(essentially the same as RDNA2 GPUs) while 31 and 32 are made on TSMC N5 process.

What does the product stack end up looking like across those, as well as the potential for some cards being MCM while others just use a single die?

Navi 31 256-bit, Navi 32 192-bit and Navi 33 128-bit. All using regular GDDR6 probably at 16Gbps. The MH/$ of these GPUs will be awful.

If it's an MCM card then the bus size increases due to multiple dies and mining performance improves. Miners already pay more in terms of MH/$ than any card's MSRP which will lead to higher prices.
 

Mopetar

Diamond Member
Jan 31, 2011
8,334
7,381
136
That's the thing I don't get about the rumors. The second-tier product should be one die, not two smaller ones.

The bit about the small die not having Infinity Cache also seems weird. I had assumed that the 6600 XT wouldn't perform quite as well as it did based on the paper specs, but even the 32 MB of cache made a pretty big difference.
 

leoneazzurro

Golden Member
Jul 26, 2016
1,102
1,828
136
The bit about the small die not having Infinity Cache also seems weird. I had assumed that the 6600 XT wouldn't perform quite as well as it did based on the paper specs, but even the 32 MB of cache made a pretty big difference.

Having the cache on a separate die in a stacked configuration incurs very small penalties compared with on-die cache, because the signal paths are very short anyway. This is the same principle as the V-Cache on Zen 3, and that is a very high speed cache. So if AMD chose a compute-only die plus a cache-and-memory-interface die with 3D stacking, it is like having a very big "die" capable of going beyond the reticle limit, and with better yields. This is the very purpose of 3D stacking.
 
Reactions: Tlh97

jpiniero

Lifer
Oct 1, 2010
16,059
6,532
136
Having the cache on a separate die in a stacked configuration incurs very small penalties compared with on-die cache, because the signal paths are very short anyway. This is the same principle as the V-Cache on Zen 3, and that is a very high speed cache. So if AMD chose a compute-only die plus a cache-and-memory-interface die with 3D stacking, it is like having a very big "die" capable of going beyond the reticle limit, and with better yields. This is the very purpose of 3D stacking.

That would be something if they did chiplets AND cache stacking.
 

Kepler_L2

Senior member
Sep 6, 2020
783
3,176
136
If it's an MCM card then the bus size increases due to multiple dies and mining performance improves. Miners already pay more in terms of MH/$ than any card's MSRP which will lead to higher prices.
That is the total bus size, not the per-die bus size. As in, the $1999 RX 7950 XT will have the same memory bandwidth and mining performance as the $399 RTX 3060 Ti.
 
Reactions: Tlh97 and Saylick

GodisanAtheist

Diamond Member
Nov 16, 2006
7,853
8,938
136
That is the rumor.

-I have to imagine that stacked cache will play some sort of major role in keeping the chiplets synchronized and scaling properly, by keeping inter-die communication super fast and limiting trips to system or even board RAM.

Otherwise we'd have had multi-chip GPUs years ago, if you could do it with an I/O die and CCXs like CPUs do.
 

leoneazzurro

Golden Member
Jul 26, 2016
1,102
1,828
136
-I have to imagine that stacked cache will play some sort of major role in keeping the chiplets synchronized and scaling properly, by keeping inter-die communication super fast and limiting trips to system or even board RAM.

Otherwise we'd have had multi-chip GPUs years ago, if you could do it with an I/O die and CCXs like CPUs do.

The idea is exactly that (and maybe there are other inter-die communication channels, too): surely there is some penalty in going through L3 instead of L2 for two workgroups not on the same die, but in a stacked structure the inter-die bandwidth can easily reach several TB/s (and the latency penalty seems minimal, thanks to the short vertical paths, compared to a monolithic structure). Nothing like the limited synchronization of SLI/CF setups.
 

Timorous

Golden Member
Oct 27, 2008
1,893
3,638
136
That is the total bus size, not the per-die bus size. As in, the $1999 RX 7950 XT will have the same memory bandwidth and mining performance as the $399 RTX 3060 Ti.

What is the layout, though: 128-bit per die, 256-bit per die but only one connected (like the CDNA 2 MCM), or 256-bit in the cache die(s)?

Would 4-high stacks with 128MB of L3 and a 64-bit MC each work for N31, and a 3-high stack (maybe failed 4-high stacks with one pass-through stack?) for N32?
 

Kepler_L2

Senior member
Sep 6, 2020
783
3,176
136
What is the layout, though: 128-bit per die, 256-bit per die but only one connected (like the CDNA 2 MCM), or 256-bit in the cache die(s)?

Would 4-high stacks with 128MB of L3 and a 64-bit MC each work for N31, and a 3-high stack (maybe failed 4-high stacks with one pass-through stack?) for N32?
128-bit per GCD + a 512MB 3D IC die connecting both GCDs for Navi 31. Navi 32 is still a bit of a mystery, but most likely 96-bit per GCD + 384MB IC.
 
Reactions: Tlh97 and Joe NYC

leoneazzurro

Golden Member
Jul 26, 2016
1,102
1,828
136
It makes much more sense to tie the memory controllers to the cache die(s) than to the GCDs, especially when your memory hierarchy goes to L3 first and VRAM later; and with the memory interfaces being the parts that scale worst, it is really wasteful to put them on a costly N5 die.
 

Timorous

Golden Member
Oct 27, 2008
1,893
3,638
136
It makes much more sense to tie the memory controllers to the cache die(s) than to the GCDs, especially when your memory hierarchy goes to L3 first and VRAM later; and with the memory interfaces being the parts that scale worst, it is really wasteful to put them on a costly N5 die.

Maybe having the MCs on the GCD allows AMD to sell versions without L3 in a different segment where memory bandwidth is not as important so they end up with more re-usability that way.

128-bit per GCD + a 512MB 3D IC die connecting both GCDs for Navi 31. Navi 32 is still a bit of a mystery, but most likely 96-bit per GCD + 384MB IC.

Well, it appears that N32 is coming later than N31 and N33. At first that does not make sense, because it leaves a gap between the probable x9 and x7 parts, but after thinking about it for a bit it does make sense, because it lets AMD make better use of the wafers available to them. They make N31 GCDs and Zen 4 parts on 5nm, and that is it; then on 6nm they make N33, cache dies (which maybe are cross-compatible between Zen 3, Zen 4 and RDNA 3; if they did this it would be a master stroke of simplification), and I/O dies for desktop and EPYC/TR. It does simplify production somewhat to begin with.

The other advantage of that gap is that if the 2.7x performance uplift is true, a lot of people are going to want in on a 7900-series part, so AMD can make more dies and upsell people who might have been happy with, let's say, a ~1.8x gain from an N32-based part to the 7900, because there is no 7800 available to start with. It also means they can get the mid-range out a little earlier, because people who would buy a 7700 are unlikely to stretch to a 7900-series part, although some might have stretched to a 7800 part; and getting the x7 out early has market-share advantages.

So: Super Halo for mind share, then mid-range for market share, and fill in the gaps later.
 
Last edited:

leoneazzurro

Golden Member
Jul 26, 2016
1,102
1,828
136
They already have an N33 part for 128-bit performance, and without cache such an arrangement would be quite inefficient. I simply don't see such a market, especially for parts made on a very expensive process. Lower-end parts, for sure, but for making those you'll want to use N6.
 