Linus Torvalds: Discrete GPUs are going away


Ayah

Platinum Member
Jan 1, 2006
2,512
1
81
Not entirely. They are already decoupled. The IGP simply advances at a faster pace, with much more low-hanging fruit to pick.

A new or improved GPU uarch every 1-1½ years. A continually expanding node lead: 14nm vs 28nm (we may even see 10nm vs 28nm).

Stacked memory will only have a relatively minor effect on dGPUs, while it will leapfrog IGPs.

And again, sooner or later the economic incentive to develop the dGPU further stops due to shrinking volume. Remember, the IGP doesn't have to beat the dGPU; it just has to destroy its ROI to win.

You're forgetting the situations where ROI isn't considered and functionality/performance dominates. Look at 8-socket platforms, for instance. 99.99...% of people haven't worked with a current-generation one, let alone seen one in person outside a demo environment, and yet they're still incredibly useful for certain tasks; if they weren't profitable, they wouldn't even be designed and produced.

Most people don't find themselves doing tasks where money really isn't a major issue, but these situations do exist.

dGPU vs. IGP: definitely not sooner. Eventually, maybe, once there's an easy way to offload work with a negligible amount of latency.

Obviously, I'm not talking about consumers.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
All of these are based on hypotheticals, and the biggest roadblock I see is that any advance that allows full integration also allows for full integration on add-in boards; you keep forgetting that. What applies to one also applies to the other.

I predict that any IGP powerful enough to get rid of discretes will be done long after Linus is dead, because it requires advances in materials science that are difficult. The same way flying cars and fusion were always just around the corner. Let's not forget the people who thought that MHz would scale much further than it has: the P4 was originally slated to scale to 10 GHz, then it was found out that wouldn't be the case because of unexpected roadblocks. It's been roughly 10 years since the Pentium 4, and we've only managed to squeeze out another 1.5 GHz over many processor generations.

I see unexpected roadblocks to IGPs displacing discretes, and any IGP advance will also apply to add-in boards, negating the advantage.

You forget that Nvidia and AMD don't have access to Intel's fabs.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
for the "consumer", easily. for enthusiests, eventually but definitely far in the future. for current ~GTX750Ti range people, relatively easily done, but motherboards will shoot up in cost, pin count will rocket up and air cooling is out of the question as this is FX-9590 range power consumption. and obviously, you would have needed several years headstart on this little project.

1) Motherboards won't shoot up in price too much, other than the cost of the added HMC RAM.

2) "..pin count will rocket up." Why? What makes HMC any different from the current DDR3 RAM we use? Why would pin count explode if the RAM is on the motherboard?

3) "air cooling is out of the question". Why?
HMC uses a lot less power than even DDR3/DDR4 and the low-power versions of them. The RAM isn't the problem; it will lower power usage.

Think of a 750 Ti: it uses something like ~60 watts when gaming.
Now imagine that without the GDDR5 that's on it, and you lower the power usage compared to the discrete card.

You could easily make an Intel-like 77 W CPU with a 750 Ti GPU on it, now.
There is no problem with a power limit; you can easily cool it with air (it would probably be ~125 W).
Hell, AMD has had 125 W CPUs for a long time and they manage just fine on air.


So the way I see it, you're wrong about all three of those.


For people who actually use their computers for computing: I'm pretty sure I'll be dead before I see a high-end GPU integrated into a high-end CPU. With current-gen tech, that's something like a 550 W compound chip and most likely over 14 billion transistors.


High end will never get integrated. But prices for high end will probably shoot up to 10-20 times current prices, and it will be left to the people who use it for servers.

The rest of us will have to live with midrange GPUs that are integrated into the CPU.



As always, even if energy efficiency increases, the high-end products simply use that efficiency to add more performance while keeping similar-ish power consumption.


That's where you and I differ the most.
I believe there is such a thing as good enough for now.
If the average consumer buys that, sales of discrete cards could hurt, making it hard to recoup the costs of designing a discrete card.


All it takes is to imagine that 4K never really takes off.
Imagine that 5 years from now we're all still mostly using 1920x1080.

If that happens, discrete GPUs are probably dead or really hurting, because by then iGPUs will be powerful enough for anything gaming-related at that resolution.
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
For fun:

"Haswell-DT" (22 nm) die size = 177mm^2 (1.4B transistors);
the iGPU part is roughly ~33% of the chip.

Remove that (the Intel iGP) and you get ~119mm^2 for the CPU part only.

The Nvidia 750 Ti (28 nm) is 148mm^2.

Now imagine what happens to that 750 Ti once you shrink it to Intel's 22nm with 3D FinFETs.

The 60 W power consumption of the card is probably reduced by 10 W alone from going from GDDR5 to HMC,

and the remaining 50 W or so from this as an iGPU is lowered a LOT by the use of Intel's 3D FinFETs.


Power-consumption-wise, a chip like this would be ~50 W (GPU) + 54 W (CPU) = 104 W total.
Die-size-wise: 119mm^2 plus whatever a 148mm^2 die shrunk from 28nm to 22nm comes out to.

Let's just say it ends up around 110 W and 210mm^2 or so.


A mainstream CPU from Intel with an iGPU like that would be pretty damn good for mainstream users.
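For what it's worth, the arithmetic in this post can be written out explicitly. Below is a minimal Python sketch of the same back-of-envelope estimate; it assumes an ideal (22/28)^2 area shrink, which real designs never quite achieve, and takes the post's power figures at face value:

```python
# Back-of-envelope version of the estimate above. The shrink factor is an
# idealized assumption (area scaling with the square of the node ratio,
# which real designs never quite achieve); the power figures are the
# post's own guesses, not measurements.

CPU_AREA_MM2 = 177 * (1 - 0.33)      # Haswell-DT minus its ~33% iGPU slice -> ~119 mm^2
GPU_AREA_28NM_MM2 = 148              # GM107 (GTX 750 Ti) die at 28 nm

IDEAL_SHRINK = (22 / 28) ** 2        # ~0.62x area at 22 nm, best case
gpu_area_22nm = GPU_AREA_28NM_MM2 * IDEAL_SHRINK

CPU_POWER_W = 54                     # assumed CPU-only share of the Haswell TDP
GPU_POWER_W = 60 - 10                # 750 Ti board power minus assumed GDDR5 -> HMC savings

print(f"Combined die:   ~{CPU_AREA_MM2 + gpu_area_22nm:.0f} mm^2")  # ~210 mm^2
print(f"Combined power: ~{CPU_POWER_W + GPU_POWER_W:.0f} W")        # ~104 W, before FinFET savings
```

With the ideal shrink, the combined chip lands at roughly 210 mm^2 and 104 W, which is where the "around 110 W and 210 mm^2" guess comes from.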
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
Arkadrel said: "Haswell-DT" (22 nm) die size = 177mm^2 (1.4B transistors) [...] Let's just say it ends up around 110 W and 210mm^2 or so. (Full estimate quoted above.)

Yes, and imagine where the perf/W/mm^2 is going to be for GPUs in a year or two. For most consumers' needs, the dGPU is on its last legs, IMO. Remember that the consoles now use APUs. In a couple of years, so will most everyone else.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
Arkadrel said: "Haswell-DT" (22 nm) die size = 177mm^2 (1.4B transistors) [...] Let's just say it ends up around 110 W and 210mm^2 or so. (Full estimate quoted above.)

28nm at TSMC to 22nm at Intel will not give you a lot of density.

Also, by the time Intel has a 210mm^2 die at 14nm, AMD and NVIDIA will have 16nm FF. That would put the GTX 750 Ti at about half the die size (~70-75mm^2) with ~50% less power consumption than the current 28nm model.

They could even keep the same power consumption and get 30% or more performance at the same small die size of 70-75mm^2, with higher memory bandwidth using HBM.

But let's get serious: at 16nm FF, NVIDIA could have GTX 760 performance or more at half the die size, approximately 100mm^2, with 50% less power consumption, using HBM with a 256-bit memory controller.

I don't need to say that this GPU would completely destroy any 14nm APU at 200-240mm^2, both in performance and in cost per die.
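The cost-per-die half of that claim can be sanity-checked with a dies-per-wafer estimate. A rough sketch, assuming 300 mm wafers and ignoring yield and wafer-price differences between the two foundries (both significant simplifications):

```python
import math

# Rough dies-per-wafer estimate on a 300 mm wafer, using a standard
# approximation with an edge-loss term. Yield and per-wafer price
# differences between foundries are ignored here, which is a big
# simplification in the dGPU-vs-APU cost-per-die argument.

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for area in (100, 210, 240):  # the hypothetical dGPU and APU die sizes above
    print(f"{area} mm^2 die: ~{dies_per_wafer(area)} die candidates per wafer")
```

With these assumptions, a 100 mm^2 die yields a bit more than twice as many candidates per wafer as a 210-240 mm^2 one; whether that survives real yield and wafer-price differences is exactly what the following posts dispute.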
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
You forget that 14nm also has a density and power advantage compared to 20+FF, so a 200-240mm² processor would destroy a 100mm² dGPU, not the other way around.

I'm not so sure about die cost either, since 16FF will be a regression in cost per transistor compared to 28nm.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
nVidia and AMD may not have 16FF until Intel ships 10nm, if they even get it then.

Even if 16FF comes 1 year after 20nm (extremely optimistic), looking at the current players that would put AMD and nVidia into 2016.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
You forget that 14nm also has a density and power advantage compared to 20+FF, so a 200-240mm² processor would destroy a 100mm² dGPU, not the other way around.

The 100mm^2 16nm dGPU will also have 256-bit memory; it will completely destroy the 128-bit 14nm iGPU even if both had HBM.

I'm not so sure about die cost either, since 16FF will be a regression in cost per transistor compared to 28nm.

What makes you believe that Intel's 14nm is cheaper than 22nm?
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
The 100mm^2 16nm dGPU will also have 256-bit memory; it will completely destroy the 128-bit 14nm iGPU even if both had HBM.

You seem to greatly overestimate the bandwidth benefits for the dGPU. And it's not hard to test, since we can overclock dGPU memory. So I'm not sure why you make such wild claims.

A GTX 760 with 256-bit HBM wouldn't perform much better than the current GTX 760. However, the IGP, even with 128-bit HBM, would fly compared to today.
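To put rough numbers on this dispute: peak bandwidth is just bus width times per-pin data rate. A quick sketch with ballpark (not official) data rates:

```python
# Peak memory bandwidth = (bus width / 8 bits per byte) * per-pin data rate.
# The data rates below are ballpark GDDR5/DDR3 figures for illustration.

def peak_bw_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits / 8 * data_rate_gtps

print(peak_bw_gbs(128, 5.4))  # 750 Ti-class: 128-bit GDDR5 @ ~5.4 GT/s -> ~86 GB/s
print(peak_bw_gbs(256, 6.0))  # GTX 760-class: 256-bit GDDR5 @ ~6.0 GT/s -> ~192 GB/s
print(peak_bw_gbs(128, 1.6))  # IGP on dual-channel DDR3-1600 -> ~25.6 GB/s
```

The jump from ~26 GB/s to HBM-class bandwidth is enormous for an IGP, while a dGPU already sitting at 86-192 GB/s gains proportionally much less; that asymmetry is the point being argued here.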
 

Ayah

Platinum Member
Jan 1, 2006
2,512
1
81

I said with current tech (as in, not what we can buy right now, but what we'd have if Intel and nVidia had started developing this compound processor more than 2 years ago). HMC is not currently deployed to the masses; nothing at the time of the GTX 750 Ti's release had HMC, nor is it even purchasable now. HMC is not current-gen, it is incoming tech; what you call current gen/tech is not actually available. 14nm has been in production for a while, so it is current technology, yet it is not available for purchase.

The pin count will rise with a GPU as complex as a GTX 750 Ti glued to a Haswell 4-core: it requires more power, which means more space allocated to the IVR, and afaik Haswell and Maxwell will require even more power planes. The other issue is routing: the current IGP is, afaik, not connected by an interconnect fast enough to route to any kind of system memory, perhaps not even enough to satisfy the GTX 750 Ti. Again, this means more space, as you cannot remove the externally pointed PCIe x16 without aggravating people. You'll also need extra pins for a dedicated external 128-bit-wide GDDR5 module, unless you want to do something with ridiculously many channels of DDR3. My power estimate was under the assumption that both would be clocked competitively for the cost + margin. I'd be surprised if you added fewer than 300 pins to route that hybrid through a motherboard, and you're looking at more layers added to the minimum requirement; to deal with GDDR5, signal integrity is a pretty big deal on something as crowded as a typical motherboard. Remember that trace length is important for GDDR5.

You cannot just add die sizes together. If you've seen an SoC integrating two complicated systems plus peripherals, the die sizes do not add up.
Same with power consumption, as you have to add secondary components to mesh them together.

The test equipment industry has wanted a top-of-the-line x86 processor with a decent-size FPGA glued to it over a high-speed interconnect for a while. Guess how long the industry has waited? About 20-ish years, and this is an industry where money really isn't too much of an issue; currently, you connect a high-throughput FPGA to a CPU through PCIe x16 to do post-processing. The incoming CPU+FPGA hybrid also has a lot of possible military applications. Currently, the top-of-the-line oscilloscope offers 2 channels at 63 GHz bandwidth sampled at 160 Gsamples/s; that is a ton of data to process, and it will cost you roughly $500k for a sampling section glued by PCIe to an Intel PC in a single box. High-end test equipment is basically "if you need it, you need it, regardless of cost." It might even be extremely bang-for-buck if they can speed the FPGA section of the hybrid up to 1 GHz or more with Intel's process tech.
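To see why that data stream is hard to move over PCIe, here is a rough estimate, assuming 8-bit samples (the ADC width isn't stated above, so the depth is an assumption) and comparing against the roughly 15.75 GB/s usable on PCIe 3.0 x16:

```python
# Raw data rate of a 2-channel, 160 GSa/s scope front end, assuming 8-bit
# samples. The sample depth is an assumption for illustration; the post
# above doesn't state the ADC width.

CHANNELS = 2
SAMPLE_RATE_SPS = 160e9      # samples per second, per channel
BYTES_PER_SAMPLE = 1         # assumed 8-bit ADC

raw_gbs = CHANNELS * SAMPLE_RATE_SPS * BYTES_PER_SAMPLE / 1e9
PCIE3_X16_GBS = 15.75        # approximate usable bandwidth of PCIe 3.0 x16

print(f"Raw capture rate: ~{raw_gbs:.0f} GB/s")        # ~320 GB/s
print(f"PCIe 3.0 x16:     ~{PCIE3_X16_GBS} GB/s")
print(f"Gap:              ~{raw_gbs / PCIE3_X16_GBS:.0f}x")
```

At roughly 320 GB/s of raw samples against ~16 GB/s of link, nearly all of the processing has to happen before the PCIe hop, which is exactly why a tightly coupled CPU+FPGA part is attractive here.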

And when I said high end in the previous post, I meant prosumer/professional: people who compute on their computers. Modelling, simulations, design verification/validation, and probably other non-engineering purposes as well. Even the most spend-happy gamer pales in comparison to the actual high end. And servers are a different issue altogether: most "servers" are data-oriented, usually doing delivery or minor computes, and the "low-ish" end is already integrating, i.e. Avoton or the incoming ARM-based processors. Compute nodes are the ones that won't integrate, as you need faster far more than you need more energy-efficient.

You cannot make a Haswell+Maxwell hybrid "now", at this very moment. Designing it will take a significant amount of time, verification takes long as hell while you go through many revisions, and then you have the tedious process of getting production up to profitable yields.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
nVidia and AMD may first have 16FF when Intel ships 10nm. If they even have it.

Both AMD and NV could have 16nm products at retail by the end of 2015 or in Q1 2016.
Intel will not have any 10nm products at retail in 2016.
Also, 10nm will be even more expensive than 14nm, making any big 200-240mm^2 die extremely difficult for early production and/or high volume.

You seem to greatly overestimate the bandwidth benefits for the dGPU. And it's not hard to test, since we can overclock dGPU memory. So I'm not sure why you make such wild claims.


Just use the 750 Ti (128-bit) against the 760 Ti (256-bit). That will be the performance difference of the NV 16nm dGPU against the Intel 14nm iGPU. The dGPU will be more than 50% faster with a smaller die.



Edit:
A GTX 760 with 256-bit HBM wouldn't perform much better than the current GTX 760. However, the IGP, even with 128-bit HBM, would fly compared to today.

You are forgetting that the 16nm 100mm^2 dGPU will have the performance of a 200+mm^2 28nm die with 50% less power consumption. It will also be priced for a lower segment than the GTX 760 is today; it will replace the 750 Ti, not the 760.
 
Last edited:

BrightCandle

Diamond Member
Mar 15, 2007
4,762
0
76
You can't compare today's tech with tomorrow's, because tomorrow's tech will be competing with tomorrow's tech. Just as Intel improves its iGPU, the dGPU also moves on. Today's Haswell already has a significant process advantage over the dGPUs of the day and yet it's not competitive at all, so unless Intel is going to dramatically pull ahead and leave the other fabs behind, I can't see this being the decider.

The other thing to consider is that games will keep getting better as well. Tomorrow's Intel iGPU might be able to play Crysis, but the next set of games will look even better. Much of the gaming industry moves with the top GPUs, and the standard of graphics moves with it. None of these things is standing still, so something fundamental has to change for the iGPU to kill off the dGPU, and to do that it must compete performance-wise with the discrete option. The current system has a CPU competing with a chip that can draw 3 times more power and has 3 times the bandwidth and more die space. The ability to pass small data between the CPU and the GPU quickly is very unlikely to close that sort of gap anytime soon.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Both AMD and NV could have 16nm products at retail by the end of 2015 or in Q1 2016.

That's your optimistic hope. Everything points against it.

Just use the 750 Ti (128-bit) against the 760 Ti (256-bit). That will be the performance difference of the NV 16nm dGPU against the Intel 14nm iGPU. The dGPU will be more than 50% faster with a smaller die.

I assume you gave up on the HBM part, since you now compare it to something different again? The GK104, while cut down, is still 294mm^2, not to mention a 170W TDP.

Right now AMD and nVidia are preparing another round of 28nm chips. And we may even get a refresh of those chips too, just like with the previous uarch on 28nm.
 
Last edited:

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
Both AMD and NV could have 16nm products at retail by the end of 2015 or in Q1 2016.
Intel will not have any 10nm products at retail in 2016.
Also, 10nm will be even more expensive than 14nm, making any big 200-240mm^2 die extremely difficult for early production and/or high volume.

What makes you believe that Intel's 14nm is cheaper than 22nm?

I don't think you understand Moore's Law...

If 22nm weren't cheaper than 10µm, the number of transistors would not have been able to grow by six orders of magnitude.


Also, your availability estimates for 10nm and 16FF are way off.
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
You can't compare today's tech with tomorrow's, because tomorrow's tech will be competing with tomorrow's tech. Just as Intel improves its iGPU, the dGPU also moves on. Today's Haswell already has a significant process advantage over the dGPUs of the day and yet it's not competitive at all, so unless Intel is going to dramatically pull ahead and leave the other fabs behind, I can't see this being the decider.

Intel mainly has a power advantage with their Tri-Gate transistors, but a much smaller density advantage. Intel will scale very aggressively at 14 and 10nm while TSMC pauses scaling to integrate FinFETs. Higher density means more room for more EUs.

Secondly, Intel will also aggressively update their graphics architecture in the same time frame.

Have you seen Anand's Hangout with Jem Davies? Jem Davies said exactly the same about mobile GPUs catching up with higher-end/higher-TDP GPUs, for exactly the same reasons.
 

_Rick_

Diamond Member
Apr 20, 2012
3,937
69
91
I've just read what Intel is doing with Knights Landing, and it makes me worry that nVidia will not be able to compete with AMD and Intel in the long run.
Why?
Fabric.
HPC is where performance gains matter, and with Intel proposing the holy grail of a unified NUMA-capable fabric between CPU and GPU, which AMD can dream of replicating because they also own both ends of the interconnect, nVidia is being left behind.

Sure, they can still sell PCIe cards, but those are going to face a huge challenge, since they won't be as efficient for compute tasks due to the much slower interconnect, unless nVidia adds dedicated IO ports. And even then, comms with the CPU will still be over PCIe, with no NUMA, and thus no large-scale support.

On the desktop this matters less, but it would make R&D more expensive (per card), since the market shrinks further, and all the CUDA investment will not be worth much.

Actually, if Intel gets their Omni-Scale fabric out before AMD does, they'll have done what AMD foresaw when they bought ATI (and I would say AMD pioneered this move, after all) before AMD could do what they promised their investors back then.

It will be interesting to see how nVidia plans to respond to Knights Landing, and whether AMD can execute on a competitor in time. Gaming GPUs will continue to get more expensive, that's pretty much guaranteed.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
That's your optimistic hope. Everything points against it.

16nm FF will be ready for production at the end of 2014. I'll give them approximately one year to first retail product availability.



I assume you gave up on the HBM part, since you now compare it to something different again?

Nope. If both have HBM, then the one with 256-bit memory will be faster.

The GK104, while cut down, is still 294mm^2, not to mention a 170W TDP.

I used the GTX 760's performance to show you what a 16nm FF chip at ~100mm^2 can do.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,361
136
I don't think you understand Moore's Law...

If 22nm weren't cheaper than 10µm, the number of transistors would not have been able to grow by six orders of magnitude.


Also, your availability estimates for 10nm and 16FF are way off.

At 16nm FF the transistor price is also lower than at 28nm, because you have 50%+ more density. That doesn't mean that your product will be cheaper in the beginning.

14nm products will only start to be available at retail in Q3 2014; if you are expecting 10nm sooner than Q3 2016 then you clearly don't understand how things work.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
16nm FF will be ready for production at the end of 2014. I'll give them approximately one year to first retail product availability.

20nm was also ready for production in late 2012. We all know how that worked out. And the same applies to 28nm, 40nm and so on. 16FF is no different.

Nope. If both have HBM, then the one with 256-bit memory will be faster.

I think you missed the point. The IGP with HBM/HMC/etc. would radically close the gap; dGPUs are less affected. Stacked memory gives huge boosts to an IGP, while only minor ones to dGPUs, simply due to the difference in the bottleneck.


I used the GTX 760's performance to show you what a 16nm FF chip at ~100mm^2 can do.

At what? 100W? And in 2016?
 

witeken

Diamond Member
Dec 25, 2013
3,899
193
106
At 16nm FF the transistor price is also lower than at 28nm, because you have 50%+ more density. That doesn't mean that your product will be cheaper in the beginning.
And wafer costs will be 50%+ higher as well. And yields will be worse.

=> Transistor cost not lower.
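Whether the density gain or the wafer-cost increase wins is just a ratio. A toy model of the trade-off, with made-up round numbers rather than actual foundry pricing or yields:

```python
# Toy model of cost per transistor across a node transition. Every number
# here is an illustrative placeholder, not a real foundry price or yield.

def cost_per_mtransistor(wafer_cost_usd, usable_area_mm2, mtransistors_per_mm2):
    """Dollars per million transistors from one wafer."""
    return wafer_cost_usd / (usable_area_mm2 * mtransistors_per_mm2)

# Hypothetical 28nm baseline: a 300 mm wafer (~70,000 mm^2) at ~85% yield,
# ~13 Mtransistors/mm^2 for a GPU-like design.
old = cost_per_mtransistor(5000, 70_000 * 0.85, 13)

# Hypothetical 16FF: wafer 50% pricier, early yield worse (~70%),
# density up ~50%, i.e. the scenario described above.
new = cost_per_mtransistor(7500, 70_000 * 0.70, 13 * 1.5)

print(f"28nm: ${old:.4f} per Mtransistor")
print(f"16FF: ${new:.4f} per Mtransistor ({new / old - 1:+.0%})")
```

With these placeholder numbers, the 50% density gain is eaten by the pricier wafers and the early-yield hit; real pricing could tip the result either way, which is why the two posters disagree.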

14nm products will only start to be available at retail in Q3 2014; if you are expecting 10nm sooner than Q3 2016 then you clearly don't understand how things work.
Intel stated that 10nm HVM will start in 2015, exactly 2 years after 14nm was originally intended to start HVM; since 14nm was delayed by one quarter into 2014, 10nm should arrive one quarter earlier in its cycle than 14nm did, i.e. Q2/Q3 (mid-2016), or 1 year after Skylake.
 

njdevilsfan87

Platinum Member
Apr 19, 2007
2,331
251
126
It will be interesting to see how nVidia plans to respond to Knights Landing, and whether AMD can execute on a competitor in time. Gaming GPUs will continue to get more expensive, that's pretty much guaranteed.

Kepler is ahead of Knights Landing in parallel performance. And Maxwell's feature is going to be to give it the same "functionality" as Knights Landing (as in, being able to run by itself without the need of a CPU). But Xeon Phi would probably function better as the "all-around device", as it has far fewer cores, so each core would handle generic single-threaded tasks much better.

For Nvidia, their concerns are:
1) Maintain their performance lead over Intel.
2) Make CUDA easier to use (one of the advantages of Knights Landing will be ease of use). It's already really easy to set up, but needs some work on the coding side, which is what the current CUDA 6.x is already working on.
3) Look into building one (or a few) "super strong" cores alongside everything else in a future architecture, if we are indeed heading toward these running entire systems. They don't have to be anywhere near what Intel currently has; my guess is that cores as strong as C2D cores would be enough.

Nvidia as of right now seems very well off here. Intel is still obviously at the end of the R&D phase and has plenty of room to improve, and likely will. I really have no idea where AMD falls into all of this. Their cards are certainly compute-capable, but they rely too much on OpenCL, which is not an advantage when CUDA itself doesn't cost anything. AMD needs to offer something as robust as CUDA, or officially support OpenCL (for example, an "OpenCL Toolkit"), or offer some other incentive to use their GPUs over Nvidia's or Intel's, because pricing won't cut it in professional development, where all of these GPUs are aiming.

I actually really like where all of this is going. It's likely going to force developers into (good) parallel programming.
 
Last edited:

ShintaiDK

Lifer
Apr 22, 2012
20,378
145
106
Kepler is ahead of Knights Landing in parallel performance. And Maxwell's feature is going to be to give it the same "functionality" as Knights Landing (as in, being able to run by itself without the need of a CPU). But Xeon Phi would probably function better as the "all-around device", as it has far fewer cores, so each core would handle generic single-threaded tasks much better.

There is nothing pointing to dGPUs getting ARM chips.

Also, the fastest supercomputer uses Xeon Phi. And if you check the list, you see Xeon Phi gaining momentum rather quickly.
 