GeForce GTX 1180, 1170 and 1160 coming in August. Prices inside


Guru

Senior member
May 5, 2017
830
361
106
At BEST the GTX 2080 will be equal to the GTX 1080ti if it's on 12nm. Again, where does everyone get these absurd 40% to 50% performance increases? From what we know, 7nm is about a 40-50% improvement in power consumption or in performance, one or the other.

Let's say next gen is 12nm. The 2080 is still going to have a smaller die than the 1080ti, because Nvidia wants to stay profitable in this segment. Let's assume 20% gains from the node shrink (technically it's not even a shrink, but a refinement, but whatever) and say about a 10% improvement from architecture; that's roughly a 30% improvement in performance at the same wattage as the 1080. At 30% faster, that is EQUAL to the 1080ti.
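As a quick sanity check on that compounding, here's a rough Python sketch (the 20% and 10% figures are just the assumptions above, nothing confirmed):

# Rough sketch: compounding an assumed node gain with an assumed architecture gain at fixed power.
node_gain = 0.20   # speculative uplift from the 16nm -> 12nm refinement
arch_gain = 0.10   # speculative uplift from architectural changes

combined = (1 + node_gain) * (1 + arch_gain) - 1
print(f"Combined uplift at the same wattage: {combined:.0%}")  # ~32%, i.e. roughly the 30% quoted above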

My prediction for half a year now has been that Nvidia will go straight to 7nm, possibly releasing their first GPU in late October. That will obviously be the GTX 2080, exclusively on their website as a "Founders Edition" costing $100 more than the AIB MSRP; I'm assuming a $700 price point, as Nvidia knows people will overpay to get the newest GPUs. In late December, probably just before Christmas, they let AIBs sell custom 2080s for $600, just in time for Christmas and New Year to have their GPU featured as a gift.

Then in late January the GTX 2070, with the same pattern: "FE" only for a month, then AIB custom cards, probably at a $450 MSRP. Followed by their 2060/ti/le/4GB-vs-8GB variants about a month after the 2070, etc.

Assuming about a 45% performance improvement for 7nm over 16nm, I'm predicting the following:

GTX 2080 ($600): 15% faster than the GTX 1080ti ($700)
GTX 2070 ($450): 25% faster than the GTX 1080 ($550)
GTX 2060ti ($300): 25% faster than the GTX 1070 ($370)
GTX 2060 ($250): 10% faster than the GTX 1070 ($370)
GTX 2050ti ($180): 10% faster than the GTX 1060 6GB ($250)
GTX 2050 ($150): 0% faster than the GTX 1060 6GB, 10% faster than the GTX 1060 3GB ($200)
GTX Titan XXL ($1200): 30% faster than the GTX 2080
GTX 2080ti ($800): 30% faster than the GTX 2080
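For what it's worth, here's a rough Python sketch of the price/performance those guesses would imply, treating each older card as a 1.0 baseline (every number is the speculation above, not a benchmark):

# Rough sketch: perf-per-dollar gain implied by the speculated lineup above.
# Format: (new_price, speedup_vs_old_card, old_price) -- all values are guesses from the post.
predictions = {
    "GTX 2080 vs 1080ti": (600, 1.15, 700),
    "GTX 2070 vs 1080":   (450, 1.25, 550),
    "GTX 2060ti vs 1070": (300, 1.25, 370),
    "GTX 2060 vs 1070":   (250, 1.10, 370),
}

for name, (new_price, speedup, old_price) in predictions.items():
    value_gain = (speedup / new_price) / (1.0 / old_price)  # relative perf per dollar vs the old card
    print(f"{name}: {value_gain:.2f}x perf per dollar")

By those numbers the biggest value jump would be at the lower end of the stack.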
 

Elfear

Diamond Member
May 30, 2004
7,115
690
126
It may be hard to get excited, but that's just enough of a gain to make a lot of spec-hungry enthusiasts upgrade from their 1080s.
Besides, the days of almost 100% gains each generation are over. Whether that's due to the lack of competition or the laws of physics, I'm not sure.

That's the unfortunate reality these days.
 

PeterScott

Platinum Member
Jul 7, 2017
2,605
1,540
136
Guru said:
At BEST the GTX 2080 will be equal to the GTX 1080ti if it's on 12nm. Again, where does everyone get these absurd 40% to 50% performance increases? From what we know, 7nm is about a 40-50% improvement in power consumption or in performance, one or the other.

Let's say next gen is 12nm. The 2080 is still going to have a smaller die than the 1080ti, because Nvidia wants to stay profitable in this segment. Let's assume 20% gains from the node shrink (technically it's not even a shrink, but a refinement, but whatever) and say about a 10% improvement from architecture; that's roughly a 30% improvement in performance at the same wattage as the 1080. At 30% faster, that is EQUAL to the 1080ti.

My prediction for half a year now has been that Nvidia will go straight to 7nm, possibly releasing their first GPU in late October. That will obviously be the GTX 2080, exclusively on their website as a "Founders Edition" costing $100 more than the AIB MSRP; I'm assuming a $700 price point, as Nvidia knows people will overpay to get the newest GPUs. In late December, probably just before Christmas, they let AIBs sell custom 2080s for $600, just in time for Christmas and New Year to have their GPU featured as a gift.

Then in late January the GTX 2070, with the same pattern: "FE" only for a month, then AIB custom cards, probably at a $450 MSRP. Followed by their 2060/ti/le/4GB-vs-8GB variants about a month after the 2070, etc.

Assuming about a 45% performance improvement for 7nm over 16nm, I'm predicting the following:

GTX 2080 ($600): 15% faster than the GTX 1080ti ($700)
GTX 2070 ($450): 25% faster than the GTX 1080 ($550)
GTX 2060ti ($300): 25% faster than the GTX 1070 ($370)
GTX 2060 ($250): 10% faster than the GTX 1070 ($370)
GTX 2050ti ($180): 10% faster than the GTX 1060 6GB ($250)
GTX 2050 ($150): 0% faster than the GTX 1060 6GB, 10% faster than the GTX 1060 3GB ($200)
GTX Titan XXL ($1200): 30% faster than the GTX 2080
GTX 2080ti ($800): 30% faster than the GTX 2080

IMO, since die sizes will need to be bigger, here are my more pessimistic predictions for the low end and midrange.

GTX 1150 $199 (Rebranded/refreshed GTX 1060 3GB)
GTX 1150ti $250 (Rebranded/refreshed GTX 1060 6GB)
GTX 1160 $350, about the same speed as the GTX 1070 (but new features like ray tracing, and lower power draw than the GTX 1070)
 

Justinbaileyman

Golden Member
Aug 17, 2013
1,980
249
106
If the announcement is August 20th and the release is somewhere around October 1st-15th, how long until cards with aftermarket coolers get released? A month or two?
 

nurturedhate

Golden Member
Aug 27, 2011
1,762
761
136
PeterScott said:
GTX 1160 $350, about the same speed as the GTX 1070 (but new features like ray tracing, and lower power draw than the GTX 1070)
This would be absolutely terrible, seeing as a 1070 could be had for slightly below that around Sept. 2016. I paid roughly $335-$350 for mine then, a Gigabyte WF OC edition.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,064
7,490
136
I suspect NV is going to go for a 7xx-to-9xx-style release, with bulked-up die sizes across the range to maximize the gap from the prior series and to leverage that mature 16/14/12nm node.

~600mm2 xx80ti in 6 Months ~ 1080ti +30%
~400mm2 xx80 ~ 1080ti+ performance
~250mm2 xx60 ~ 1080 performance

NV really left a lot of headroom for themselves by releasing a 471mm2 GP102, with their next largest chip being the absurdly sized 815mm2 GV100.

Prices will likely remain the same as the 10xx series launch prices. Even if NV is only competing against themselves, they need to give people a reason to upgrade (not enough performance or too high a price works against that, regardless of whether their competition is AMD or their own prior-gen products).
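As a back-of-the-envelope illustration of the headroom argument, here's a rough Python sketch comparing those speculated die sizes to their same-tier Pascal counterparts (the Pascal areas are the commonly cited figures; the new areas are pure speculation from the post above):

# Rough sketch: per-tier die-area growth implied by the speculated sizes above.
pascal_mm2     = {"xx80ti": 471, "xx80": 314, "xx60": 200}  # GP102, GP104, GP106 (commonly cited areas)
speculated_mm2 = {"xx80ti": 600, "xx80": 400, "xx60": 250}  # guesses from the post

for tier, new_area in speculated_mm2.items():
    growth = new_area / pascal_mm2[tier]
    print(f"{tier}: ~{growth:.2f}x the area of its Pascal counterpart on a similar node")

Roughly 25-27% more silicon per tier, which lines up with the ~30% uplift guessed for the xx80ti before any clock or architecture gains.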
 

mohit9206

Golden Member
Jul 2, 2013
1,381
511
136
PeterScott said:
IMO, since die sizes will need to be bigger, here are my more pessimistic predictions for the low end and midrange.

GTX 1150 $199 (Rebranded/refreshed GTX 1060 3GB)
GTX 1150ti $250 (Rebranded/refreshed GTX 1060 6GB)
GTX 1160 $350, about the same speed as the GTX 1070 (but new features like ray tracing, and lower power draw than the GTX 1070)
Lol, a 1150Ti for $250 as a rebranded 1060? Why would anyone buy that? $200 is acceptable for a 2-year-old card.
 

happy medium

Lifer
Jun 8, 2003
14,387
480
126
Guru said:
At BEST the GTX 2080 will be equal to the GTX 1080ti if it's on 12nm. Again, where does everyone get these absurd 40% to 50% performance increases? From what we know, 7nm is about a 40-50% improvement in power consumption or in performance, one or the other.

Let's say next gen is 12nm. The 2080 is still going to have a smaller die than the 1080ti, because Nvidia wants to stay profitable in this segment. Let's assume 20% gains from the node shrink (technically it's not even a shrink, but a refinement, but whatever) and say about a 10% improvement from architecture; that's roughly a 30% improvement in performance at the same wattage as the 1080. At 30% faster, that is EQUAL to the 1080ti.

My prediction for half a year now has been that Nvidia will go straight to 7nm, possibly releasing their first GPU in late October. That will obviously be the GTX 2080, exclusively on their website as a "Founders Edition" costing $100 more than the AIB MSRP; I'm assuming a $700 price point, as Nvidia knows people will overpay to get the newest GPUs. In late December, probably just before Christmas, they let AIBs sell custom 2080s for $600, just in time for Christmas and New Year to have their GPU featured as a gift.

Then in late January the GTX 2070, with the same pattern: "FE" only for a month, then AIB custom cards, probably at a $450 MSRP. Followed by their 2060/ti/le/4GB-vs-8GB variants about a month after the 2070, etc.

Assuming about a 45% performance improvement for 7nm over 16nm, I'm predicting the following:

GTX 2080 ($600): 15% faster than the GTX 1080ti ($700)
GTX 2070 ($450): 25% faster than the GTX 1080 ($550)
GTX 2060ti ($300): 25% faster than the GTX 1070 ($370)
GTX 2060 ($250): 10% faster than the GTX 1070 ($370)
GTX 2050ti ($180): 10% faster than the GTX 1060 6GB ($250)
GTX 2050 ($150): 0% faster than the GTX 1060 6GB, 10% faster than the GTX 1060 3GB ($200)
GTX Titan XXL ($1200): 30% faster than the GTX 2080
GTX 2080ti ($800): 30% faster than the GTX 2080
Next-gen Nvidia cards are going to be 12nm.
Maybe they will shrink the 1180ti to 7nm next year, but that's a big maybe; it would make sense to compete with AMD's next 7nm generation next summer.
It's possible that AMD's 7nm card will have the same performance per watt as Nvidia's 12nm, that's my guess.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
happy medium said:
Next-gen Nvidia cards are going to be 12nm.
Maybe they will shrink the 1180ti to 7nm next year, but that's a big maybe; it would make sense to compete with AMD's next 7nm generation next summer.
It's possible that AMD's 7nm card will have the same performance per watt as Nvidia's 12nm, that's my guess.

Unless AMD makes a huge architectural leap, they are a full generation behind in performance per watt and currently lagging in perf/mm2 as well. 12nm to 7nm is supposedly a half-node jump, right? If so, and things stay the same, AMD will still be behind.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
GodisanAtheist said:
I suspect NV is going to go for a 7xx-to-9xx-style release, with bulked-up die sizes across the range to maximize the gap from the prior series and to leverage that mature 16/14/12nm node.

~600mm2 xx80ti in 6 Months ~ 1080ti +30%
~400mm2 xx80 ~ 1080ti+ performance
~250mm2 xx60 ~ 1080 performance

NV really left a lot of headroom for themselves by releasing a 471mm2 GP102, with their next largest chip being the absurdly sized 815mm2 GV100.

Prices will likely remain the same as the 10xx series launch prices. Even if NV is only competing against themselves, they need to give people a reason to upgrade (not enough performance or too high a price works against that, regardless of whether their competition is AMD or their own prior-gen products).

I think we'll see the same 50% performance jumps at release as we've seen from Kepler to Maxwell and then Maxwell to Pascal. Two years with only a 25-30% perf/W improvement would be quite disappointing coming from Nvidia, especially given that they're moving to GDDR6, which will free up more GPU TDP headroom with its lower-than-GDDR5X power consumption.
 

GodisanAtheist

Diamond Member
Nov 16, 2006
7,064
7,490
136
happy medium said:
Next-gen Nvidia cards are going to be 12nm.
Maybe they will shrink the 1180ti to 7nm next year, but that's a big maybe; it would make sense to compete with AMD's next 7nm generation next summer.
It's possible that AMD's 7nm card will have the same performance per watt as Nvidia's 12nm, that's my guess.

-Yup. No more free lunches from die shrinks as silicon starts to push up against the limits of our understanding of physics.

Everyone knows this; better to squeeze as much out of a node as possible before moving on to the next if you have the room to play, which NV does.

More profit margin too on a mature node, plus the ARM manufacturers move on to the new hotness, leaving plenty of manufacturing capacity in their wake.

tviceman said:
I think we'll see the same 50% performance jumps at release as we've seen from Kepler to Maxwell and then Maxwell to Pascal. Two years with only a 25-30% perf/W improvement would be quite disappointing coming from Nvidia, especially given that they're moving to GDDR6, which will free up more GPU TDP headroom with its lower-than-GDDR5X power consumption.

- No real disagreement; the exact size of the performance improvement is up in the air.

However, games are still made for the DX11 feature set, which hasn't really changed from Maxwell to Pascal to whatever is next, so I'm not sure what can really be done from an architectural perspective. There is no large node improvement to lean on, and the new GDDR will likely be used to achieve bandwidth parity with prior generations using a smaller bus rather than to feed a starving core, which is already very efficiently fed thanks to all of NV's compression tricks.

As such, I feel the bulk of the improvement will come from the larger die size as a result of additional GPCs on the core. How large they choose to go will determine how much more performance we get.
 

Guru

Senior member
May 5, 2017
830
361
106
tviceman said:
I think we'll see the same 50% performance jumps at release as we've seen from Kepler to Maxwell and then Maxwell to Pascal. Two years with only a 25-30% perf/W improvement would be quite disappointing coming from Nvidia, especially given that they're moving to GDDR6, which will free up more GPU TDP headroom with its lower-than-GDDR5X power consumption.
Kepler to Maxwell was basically a 20% increase at best. If you compare it price point to price point, the performance difference was 20% at BEST; the midrange had something like 15% gains.

From Maxwell to Pascal we saw a HUGE performance increase per price point because it was a GIANT node shrink. 16nm is literally HALF the size of 28nm and it's technically much, much more advanced. Even compared to 20nm, TSMC's FinFET 16nm is a giant leap.

16nm to 12nm is a decent improvement, enough to be called 12nm, even though it's technically the SAME node size.

Again, I would obviously love there to be a 50% improvement; that would be incredible. It might even be worth it to upgrade from my 1060 6GB to the 2060 if I really get a 50% improvement. I could sell my 1060 6GB for, say, $150, add $150, and have essentially what is today a GTX 1080 for $300.

On 12nm I don't see any evidence that this is going to happen at all. From all the numbers, the 12nm process adds about a 20% improvement in performance; it's the 7nm process that actually brings a 50% improvement over 16nm.
 

Justinbaileyman

Golden Member
Aug 17, 2013
1,980
249
106
NM, I am wrong. It looks like that little sign next to the GPU says GTX 2060, but the first number is a 1, not a 2. Sorry guys.
It's hard to see on my little phone. I got all excited for nothing.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
Guru said:
Kepler to Maxwell was basically a 20% increase at best. If you compare it price point to price point, the performance difference was 20% at BEST; the midrange had something like 15% gains.

From Maxwell to Pascal we saw a HUGE performance increase per price point because it was a GIANT node shrink. 16nm is literally HALF the size of 28nm and it's technically much, much more advanced. Even compared to 20nm, TSMC's FinFET 16nm is a giant leap.

16nm to 12nm is a decent improvement, enough to be called 12nm, even though it's technically the SAME node size.

Again, I would obviously love there to be a 50% improvement; that would be incredible. It might even be worth it to upgrade from my 1060 6GB to the 2060 if I really get a 50% improvement. I could sell my 1060 6GB for, say, $150, add $150, and have essentially what is today a GTX 1080 for $300.

On 12nm I don't see any evidence that this is going to happen at all. From all the numbers, the 12nm process adds about a 20% improvement in performance; it's the 7nm process that actually brings a 50% improvement over 16nm.

I think you're comparing the GTX 780 to the GTX 980, but that is a comparison by name only. The GTX 780 was a behemoth 550mm2 cut-down flagship chip; the GTX 980 was a leaner 400mm2 chip, not Maxwell's flagship chip. Kepler to Maxwell wasn't 20% if you compare the chips that replaced each other at the same level of the architectural hierarchy. The 680 (GK104) was replaced by GM204, GK110 was replaced by GM200, etc.

GF114 -> GK104 = 60+% improvement at launch
GK104 -> GM204 = 50+% improvement at launch (more over time).
GM204 -> GP104 = 50% improvement at launch (more over time).

To think that GT104 will only exhibit a 20% performance-per-watt improvement over GP104 (and thus overall performance, with Nvidia predictably targeting certain TDPs) is setting a very, very low expectation given their performance improvements with each new generation since Fermi. I maintain the prediction that GT104 will be ~50% faster than GP104 at similar TDPs.
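Compounding the launch-day figures listed above gives a feel for the trend; here's a rough Python sketch using those same percentages (the claims above, not re-benchmarked numbers):

# Rough sketch: compounding the claimed launch-day gains for the x04-class chips.
gains = {"GF114 -> GK104": 0.60, "GK104 -> GM204": 0.50, "GM204 -> GP104": 0.50}

cumulative = 1.0
for step, gain in gains.items():
    cumulative *= (1 + gain)
    print(f"{step}: +{gain:.0%} (cumulative {cumulative:.2f}x over GF114)")

# The predicted ~50% step for GT104 would put it at roughly 5.4x a GF114-class card.
print(f"Hypothetical GT104 at +50%: {cumulative * 1.5:.2f}x over GF114")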
 

Ottonomous

Senior member
May 15, 2014
559
292
136
Guru said:
Kepler to Maxwell was basically a 20% increase at best. If you compare it price point to price point, the performance difference was 20% at BEST; the midrange had something like 15% gains.

From Maxwell to Pascal we saw a HUGE performance increase per price point because it was a GIANT node shrink. 16nm is literally HALF the size of 28nm and it's technically much, much more advanced. Even compared to 20nm, TSMC's FinFET 16nm is a giant leap.

16nm to 12nm is a decent improvement, enough to be called 12nm, even though it's technically the SAME node size.

Again, I would obviously love there to be a 50% improvement; that would be incredible. It might even be worth it to upgrade from my 1060 6GB to the 2060 if I really get a 50% improvement. I could sell my 1060 6GB for, say, $150, add $150, and have essentially what is today a GTX 1080 for $300.

On 12nm I don't see any evidence that this is going to happen at all. From all the numbers, the 12nm process adds about a 20% improvement in performance; it's the 7nm process that actually brings a 50% improvement over 16nm.

Same boat: 1060 6GB, but I want to jump on either the 1080Ti or the 1170 depending on pricing.

But why are the estimates for the 1180Ti conservative? Wouldn't 1.2x for 12nm and 1.3x for the architectural improvements be reasonable? 1.3 x 1.2 = 1.56x, yielding around a 50% performance improvement. Or is the perf delta for the xx80Ti conservative because of 4K/QHD or non-linear scaling? Or is it based on perf/mm2 calculations?
 

Guru

Senior member
May 5, 2017
830
361
106
tviceman said:
I think you're comparing the GTX 780 to the GTX 980, but that is a comparison by name only. The GTX 780 was a behemoth 550mm2 cut-down flagship chip; the GTX 980 was a leaner 400mm2 chip, not Maxwell's flagship chip. Kepler to Maxwell wasn't 20% if you compare the chips that replaced each other at the same level of the architectural hierarchy. The 680 (GK104) was replaced by GM204, GK110 was replaced by GM200, etc.

GF114 -> GK104 = 60+% improvement at launch
GK104 -> GM204 = 50+% improvement at launch (more over time).
GM204 -> GP104 = 50% improvement at launch (more over time).

To think that GT104 will only exhibit a 20% performance-per-watt improvement over GP104 (and thus overall performance, with Nvidia predictably targeting certain TDPs) is setting a very, very low expectation given their performance improvements with each new generation since Fermi. I maintain the prediction that GT104 will be ~50% faster than GP104 at similar TDPs.
How else can you compare? Die size depends on a huge variety of things. If you look at Vega, for example, it could easily be much smaller if the die space used for all of the compute stuff were cut down. Of course this is decided in the architecture, how you want to build it, but ultimately it's the price segment that we compare.

Would it be fair to compare Vega 64 to the GTX 1080ti? After all, it's a lot more similar to that card in terms of TDP, die size, etc. No, because it's a $550 card, barring the mining craze.

So it easily goes 780 to 980 to 1080. How else would you compare? Even if we go by your logic, we would have to compare the 770 to the 980 to the 1080ti. It doesn't make any sense.

In fact, I think comparing the 960 to the 1060 is much fairer and more telling, even though the 970 wasn't that far off at $330. But ultimately the 960 4GB cost about $230-240, much closer to the $250 1060 6GB.

And we already established that there was a huge architectural change in the Maxwell series that lowered power draw by a huge amount. This started with the GTX 750 and 750ti and was perfected with the 900 series. Since then, even Pascal at 16nm has only made incremental power-draw improvements, mostly from the process node and very little from architecture.
 

Head1985

Golden Member
Jul 8, 2014
1,866
699
136
Guru said:
How else can you compare? Die size depends on a huge variety of things. If you look at Vega, for example, it could easily be much smaller if the die space used for all of the compute stuff were cut down. Of course this is decided in the architecture, how you want to build it, but ultimately it's the price segment that we compare.

Would it be fair to compare Vega 64 to the GTX 1080ti? After all, it's a lot more similar to that card in terms of TDP, die size, etc. No, because it's a $550 card, barring the mining craze.

So it easily goes 780 to 980 to 1080. How else would you compare? Even if we go by your logic, we would have to compare the 770 to the 980 to the 1080ti. It doesn't make any sense.

In fact, I think comparing the 960 to the 1060 is much fairer and more telling, even though the 970 wasn't that far off at $330. But ultimately the 960 4GB cost about $230-240, much closer to the $250 1060 6GB.

And we already established that there was a huge architectural change in the Maxwell series that lowered power draw by a huge amount. This started with the GTX 750 and 750ti and was perfected with the 900 series. Since then, even Pascal at 16nm has only made incremental power-draw improvements, mostly from the process node and very little from architecture.
You need to compare the same GPU class.
The 680, 980, and 1080 were midrange dies, with almost the same gains every generation, around 50%. The GTX 780 was a cut-down big die; it was something like today's 1080TI. Even Nvidia always compared the GTX 980 to the GTX 680 and the GTX 1080 to the GTX 980 in their slides.
So it is like this:
680-980-1080 (midrange)
780TI-TITAN X-TITAN XP (full big die)
780-980TI-1080TI (cut-down big die)
If NV delivers another 50% perf gain we should see:
GTX 2080 15-20% faster than the 1080TI (the 1080TI is 30-35% faster than the 1080)
GTX 2070 5-10% slower than the 1080TI (the 1080TI is 55-60% faster than the 1070)
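A rough Python sketch of the arithmetic behind those last two lines, using only the ratios quoted above (a uniform 50% per-tier uplift and the stated 1080TI leads):

# Rough sketch: where a +50% per-tier uplift would land each new card relative to the 1080TI.
tier_uplift  = 1.50           # assumed new-gen gain over the same-tier Pascal card
ti_over_1080 = (1.30, 1.35)   # quoted: 1080TI is 30-35% faster than the 1080
ti_over_1070 = (1.55, 1.60)   # quoted: 1080TI is 55-60% faster than the 1070

for name, (low, high) in [("GTX 2080 vs 1080TI", ti_over_1080),
                          ("GTX 2070 vs 1080TI", ti_over_1070)]:
    lo, hi = tier_uplift / high, tier_uplift / low  # new card's perf divided by the 1080TI's lead
    print(f"{name}: {lo:.2f}x to {hi:.2f}x the 1080TI")

The exact percentages shift a few points depending on which end of each quoted range you take, but the ordering (2080 a bit ahead of the 1080TI, 2070 a bit behind) comes out the same.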
 

Guru

Senior member
May 5, 2017
830
361
106
Head1985 said:
You need to compare the same GPU class.
The 680, 980, and 1080 were midrange dies, with almost the same gains every generation, around 50%. The GTX 780 was a cut-down big die; it was something like today's 1080TI. Even Nvidia always compared the GTX 980 to the GTX 680 and the GTX 1080 to the GTX 980 in their slides.
So it is like this:
680-980-1080 (midrange)
780TI-TITAN X-TITAN XP (full big die)
780-980TI-1080TI (cut-down big die)
If NV delivers another 50% perf gain we should see:
GTX 2080 15-20% faster than the 1080TI (the 1080TI is 30-35% faster than the 1080)
GTX 2070 5-10% slower than the 1080TI (the 1080TI is 55-60% faster than the 1070)

That is worthless in actual practice. Comparing like that doesn't mean anything in real life. Plus, die size depends on a million things: a lot of die area might just be there to make certain chips work (if yields are low, there is more chance of harvesting full chips and selling at full price), and a lot of it goes to compute features, maybe things like PhysX for Nvidia, etc. It doesn't really represent performance per mm2. If we had the SAME architecture at the same die size it would be a good representation of process and node improvements, but even that would be worthless in real-world terms, because you can only compare the same price points.

What is the point if the GTX 2070 is 100% faster than the GTX 1070 with the same die size and same architecture, but it costs $1000? Is it in the same range? You would need to compare the GTX 2070 to the Titan Xp. Otherwise the "gains" might be there purely from a technical perspective, but there are no gains if you have to buy a GTX 2060 at $400 to match the 1070. That would obviously be the logical match, even if the GTX 2060's die is, say, 25% smaller than a 1070's.
 