Does anyone think we'll see ARM replace x86 in desktops?


Rezist

Senior member
Jun 20, 2009
726
0
71
Since x86 has spread to consoles, not likely.

I see it going the other way: consoles will likely drop x86 ASAP. With ARM they get 3-4 vendors to bid between. On x86 they're basically always going to be using AMD, unless Intel takes lower margins and gets a better iGPU.
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
Hooray for another one of *these* threads.

ARM and x86 are instruction sets. Given that x86 has a massive head start, and that most of the back-end infrastructure serving dumb devices like ARM-powered internet phones runs on x86 hardware and software, x86 will never be replaced.

Also, ARM hardware designed to perform like x86 hardware will use about the same amount of power.

That is COMPLETELY ignoring the main benefit of ARM:

x86, as it stands, is only licensed by three companies and two of them aren't very competitive in the general market. Even if a company that has the resources to be competitive (like Apple or Samsung) buys one of the weak companies, the x86 license does not transfer. x86 is a dead end LEGALLY.

Meanwhile anyone and we mean anyone can start making an ARM cpu. THAT is why it will beat x86 one day. Not because of actual tangible technical benefits to one instruction set or the other, but because the ARM market is more competitive and that competition will lead to the best possible CPUs.

What is funny is x86 fans know this- they watched as x86 killed Power, SPARC, and RISC using the same economic argument.

I think the truth is there aren't really any fans of x86, just fans of Intel, and they hate to read the obvious writing on the wall that one day Intel isn't going to matter like they did the last 30 years. Hell, Intel doesn't even know what they are doing. They cut the mobile budget, but increase the "Internet of Things" budget which is basically "mobile but even cheaper." That is why they are building robots and things like that, Intel knows in the long run the consumer market goose is cooked unless they can find a new use case for high margin CPUs.
 
Reactions: VirtualLarry

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81

This oft-cited paper has a lot of great raw data, but the ultimate conclusion, that ISA doesn't really influence power consumption, is completely pulled from nowhere. They took a bunch of x86 and ARM platforms with a ton of different variables and concluded that, since they gave very different results, one of the variables didn't matter. It doesn't make any sense. It's very un-scientific.

That isn't to say that ISA does make a large difference, either. That's a very complicated question that's extremely difficult to examine, and probably not something you'll be able to do just by comparing real world hardware. You just can't isolate the variables. The most qualified people to give insight into that question would probably be CPU architects who are highly familiar with x86 and ARM. Over the years different CPU architects have commented on this, with somewhat differing answers - of course the actual answer is not going to be some fixed percentage difference in some metric but will be influenced by a ton of other factors.

I'm not a CPU architect, so I can't give insights like they could. All I can really do is look at things as an assembly programmer and look at aspects of existing CPU designs. But from my perspective, x86 has some major flaws in some areas.

x86 uses a variable-length byte encoding, vs ARM64 which uses a fixed 4-byte encoding. So x86's encoding is much more flexible and should enable a much better code density. Yet in studies I've seen the two (generated with the same version of GCC) tend to have comparable code density. And from my experience I would wager that in SIMD-heavy code x86 has worse code density.

Why would this be the case? It's because x86 has been developed in a very inefficient way, by very gradually adding new functionality a bit at a time in many steps. This started with 32-bit mode. Adding the new operand sizes and expanding the addressing with SIB bytes was sub-optimal. Adding MMX then SSE took more and more prefix bytes. Lots of instructions have redundant encodings or do the same thing as other instructions. VEX undertakes a major restructuring to try to account for this, it's practically a completely new instruction set encoding but it too pays for having to live alongside legacy SSE.
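As an aside, one rough way to sanity-check the code density observation above for yourself is to compile the same translation unit for 32-bit x86 and for AArch64 and compare the .text sizes. A minimal sketch follows; the checksum function is made up for illustration, and the cross-compiler names in the comments are an assumption about a typical Linux setup, not a requirement.

Code:
/* Illustrative only: any small, compute-heavy function will do.
 * Build it both ways and compare the .text sizes reported by `size`,
 * e.g. (assuming the usual Linux cross toolchains):
 *   gcc -m32 -O2 -c density.c               && size density.o
 *   aarch64-linux-gnu-gcc -O2 -c density.c  && size density.o
 */
#include <stddef.h>
#include <stdint.h>

uint32_t checksum(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum = sum * 31 + p[i];
    return sum;
}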

So the key benefit to x86 ends up not actually being much of a benefit at all and you're stuck paying for it. People often say this payment isn't actually anything, just a little extra space in the decoders. But consider the lengths that Sandy Bridge and onward go through to avoid this decoder cost. They have this uop cache, and while it's not known exactly how many bits it takes we do roughly know what the uops are capable of, so I would estimate that the instructions take up at least 2-3x more space than they do in the L1 icache. This is before taking into account the extra space wasted on redundant data (from overlapping lines) and extra metadata needed over a normal cache (to maintain offsets into the cache lines). To pump out 4+ uops a cycle they need a really wide interface, a lot of wires compared to what they'd need to read from the L1 icache. All of this to avoid the decoders, which cost several pipeline stages.

And then, even with all of this work put into x86 instructions, even with the x86 instructions being relatively large for what they do.. in a lot of ways the instruction set still sucks.

Over the last few weeks I've been doing x86 optimization of my Android app. For me this means targeting SSSE3 in 32-bit x86. This is the realistic baseline for x86 on Android; 64-bit use is too low (even among 64-bit capable SoCs, which eg Medfield and Clovertrail+ were not) and SSE4.x doesn't add that much anyway.

I didn't really appreciate this until I had to do it but I can now list many disadvantages SSE4.2 has vs ARMv7 NEON (let alone ARMv8/AArch64). Especially with integer SIMD. SSE has some advantages but they're much fewer.

Here's a comparison of the inner loop from two functions to demonstrate some of what I'm saying:

ARMv7 NEON: http://pastebin.com/7g4Ad46N
x86 SSSE3: http://pastebin.com/E3wwyTif

Actual performance will vary depending on uarch and all that, but the processor executing the second is going to have a really hard time doing it anywhere close to as efficiently as the processor executing the first. There's just only so much uarch can hide; I really can't look at this big gulf and say that ISA doesn't matter. Now this is kind of a contrived example, I picked something that looked especially bad after all, and for some functions SSSE3 basically nails it. But this is definitely enough of a thing to really make me feel a tangible difference.
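To make the flavor of the gap concrete without digging through the pastebins, here is a minimal sketch of one recurring pattern, an 8-bit to 16-bit widening multiply-accumulate. This is not the pastebin code, and the function names are made up for illustration. NEON widens and multiplies in a single vmlal instruction per 8 lanes, while the SSSE3-level x86 baseline has to zero-extend both inputs with unpacks and then do a separate multiply and add.

Code:
#include <stdint.h>

#if defined(__ARM_NEON)
#include <arm_neon.h>
/* NEON: one widening multiply-accumulate per 8 lanes. */
void widen_mac_neon(uint16_t *acc, const uint8_t *a, const uint8_t *b)
{
    uint8x8_t  va  = vld1_u8(a);
    uint8x8_t  vb  = vld1_u8(b);
    uint16x8_t vac = vld1q_u16(acc);
    vac = vmlal_u8(vac, va, vb);          /* acc[i] += a[i] * b[i], widened */
    vst1q_u16(acc, vac);
}
#elif defined(__SSSE3__)
#include <tmmintrin.h>
/* x86 at the SSSE3 level: there is no unsigned byte widening multiply, so
 * zero-extend with unpacks, multiply as 16-bit, then add separately
 * (everything used here is actually plain SSE2; SSSE3 adds nothing that helps). */
void widen_mac_sse(uint16_t *acc, const uint8_t *a, const uint8_t *b)
{
    __m128i va   = _mm_loadl_epi64((const __m128i *)a);    /* low 8 bytes */
    __m128i vb   = _mm_loadl_epi64((const __m128i *)b);
    __m128i zero = _mm_setzero_si128();
    __m128i va16 = _mm_unpacklo_epi8(va, zero);
    __m128i vb16 = _mm_unpacklo_epi8(vb, zero);
    __m128i prod = _mm_mullo_epi16(va16, vb16);
    __m128i vac  = _mm_loadu_si128((const __m128i *)acc);
    _mm_storeu_si128((__m128i *)acc, _mm_add_epi16(vac, prod));
}
#endif

Multiply the extra unpack work by every widening, narrowing, and saturating step in a real inner loop and the difference in instruction count adds up quickly.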

AVX and AVX2 fix some of the disadvantages and add their own unique benefits. This is basically Intel's admission that ISA does matter, which is why they're addressing weaknesses in the ISA. But AVX is not supported on Celeron and Pentium branded processors, let alone Atoms. And at this point I'm wondering when they ever will be. From my perspective, they're only there on the processors that need them the least. They're sold as a luxury and not as a feature to make the CPU more competitive.
 
Last edited:
Reactions: Vattila

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
poofyhairguy said:
I think the truth is there aren't really any fans of x86, just fans of Intel,

I think I'd rephrase a little bit to say that there aren't (or at least aren't many) fans of x86, there are fans of high-performance, commodity hardware. And that mostly means Intel.
 

jhu

Lifer
Oct 10, 1999
11,918
9
81
I think I'd rephrase a little bit to say that there aren't (or at least aren't many) fans of x86, there are fans of high-performance, commodity hardware. And that mostly means Intel.

Indeed. x86 is fast and cheap. If ever ARM or some other architecture has fast and cheap computing solutions, I would definitely get one.

We already have fast and expensive (POWER, SPARC64), and while technically I could afford such systems, performance/$ is significantly better with x86 (even AMD stuff).
 

pitz

Senior member
Feb 11, 2010
461
0
0
Last time there was an effort to "switch" people to another platform (the DEC Alpha AXP was going to take over the world in the mid-late 1990s) the big issue was binary compatibility.

Today, most code is portable. And nobody worries about the cost of compilers as you don't have to fork out thousands to get a compiler for your operating system.

The conditions are better than ever for a 'switch'. Even though I don't think it will happen in the "PC" space, I could totally see Apple deciding to give up on x86.

Personally I believe Intel will be broken up at some point, as the individual parts are likely worth more than the conglomerate itself is. So the ARM world will have access to exactly the same processes as used by Intel's designs of x86 CPU's.
 

Essence_of_War

Platinum Member
Feb 21, 2013
2,650
4
81
Indeed. x86 is fast and cheap. If ever ARM or some other architecture has fast and cheap computing solutions, I would definitely get one.

There is a universe parallel to this one where commoditization of complements affected other architectures as well as x86. In such a world you can buy motherboards and CPUs for a plethora of different architectures, along with fireworks and alcohol, at drive-through windows, and use them all before you get home.

Or something like that.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
Indeed. x86 is fast and cheap. If ever ARM or some other architecture has fast and cheap computing solutions, I would definitely get one.

We already have fast and expensive (POWER, SPARC64), and while technically I could afford such systems, performance/$ is significantly better with x86 (even AMD stuff).

I would only consider ARM, even if it was as fast or faster, if it supported the software I wanted to use, including games.
 
Apr 30, 2015
131
10
81
Just to provide a bit of background on the argument that Intel will be around for a long time, because of legacy code:
In a 2009 article in the FT, industry insiders estimated the amount of COBOL code at over 220 billion lines; the cost of re-writing it in another language was put at $25 per line, which works out to more than 5 trillion USD. Most of that code originated on mainframes, and it was being ported to the cloud, still as COBOL.
A more recent article, from 2015, does mention newer systems, but a lot of the banks' $200 billion annual IT budget is still spent on maintaining old systems and meeting regulatory targets rather than on innovation. A lot of the world's code did not originate on x86 systems; it is just passing through.
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
Just to provide a bit of background on the argument that Intel will be around for a long time, because of legacy code:
In a 2009 article in the FT, industry insiders estimated the amount of COBOL code at over 220 billion lines; the cost of re-writing it in another language was put at $25 per line, which works out to more than 5 trillion USD. Most of that code originated on mainframes, and it was being ported to the cloud, still as COBOL.
A more recent article, from 2015, does mention newer systems, but a lot of the banks' $200 billion annual IT budget is still spent on maintaining old systems and meeting regulatory targets rather than on innovation. A lot of the world's code did not originate on x86 systems; it is just passing through.

How could code cost $25 per line? That's pretty high. Decent coders can write hundreds of lines of code on a good day. Granted, we're talking COBOL here, which is like writing code by punching the keyboard, but come on.

Anyway, I think the amount of legacy code that was written in x86 assembly is a lot smaller than the amount of COBOL code. There are still costs associated with porting code written in something like C or C++ away from x86, but they're a different, less severe sort of cost. Even porting ancient assembly code is easier; there are surely tools to help with that which are pretty reliable if you don't care hugely about performance, and I doubt you really could if the code is prehistoric.
 

simboss

Member
Jan 4, 2013
47
0
66
This oft-cited paper has a lot of great raw data, but the ultimate conclusion, that ISA doesn't really influence power consumption, is completely pulled from nowhere. They took a bunch of x86 and ARM platforms with a ton of different variables and concluded that, since they gave very different results, one of the variables didn't matter. It doesn't make any sense. It's very un-scientific.

That isn't to say that ISA does make a large difference, either. That's a very complicated question that's extremely difficult to examine, and probably not something you'll be able to do just by comparing real world hardware. You just can't isolate the variables. The most qualified people to give insight into that question would probably be CPU architects who are highly familiar with x86 and ARM. Over the years different CPU architects have commented on this, with somewhat differing answers - of course the actual answer is not going to be some fixed percentage difference in some metric but will be influenced by a ton of other factors.

I'm not a CPU architect, so I can't give insights like they could. All I can really do is look at things as an assembly programmer and look at aspects of existing CPU designs. But from my perspective, x86 has some major flaws in some areas.

x86 uses a variable-length byte encoding, vs ARM64 which uses a fixed 4-byte encoding. So x86's encoding is much more flexible and should enable a much better code density. Yet in studies I've seen the two (generated with the same version of GCC) tend to have comparable code density. And from my experience I would wager that in SIMD-heavy code x86 has worse code density.

Why would this be the case? It's because x86 has been developed in a very inefficient way, by very gradually adding new functionality a bit at a time in many steps. This started with 32-bit mode. Adding the new operand sizes and expanding the addressing with SIB bytes was sub-optimal. Adding MMX then SSE took more and more prefix bytes. Lots of instructions have redundant encodings or do the same thing as other instructions. VEX undertakes a major restructuring to try to account for this, it's practically a completely new instruction set encoding but it too pays for having to live alongside legacy SSE.

So the key benefit to x86 ends up not actually being much of a benefit at all and you're stuck paying for it. People often say this payment isn't actually anything, just a little extra space in the decoders. But consider the lengths that Sandy Bridge and onward go through to avoid this decoder cost. They have this uop cache, and while it's not known exactly how many bits it takes we do roughly know what the uops are capable of, so I would estimate that the instructions take up at least 2-3x more space than they do in the L1 icache. This is before taking into account the extra space wasted on redundant data (from overlapping lines) and extra metadata needed over a normal cache (to maintain offsets into the cache lines). To pump out 4+ uops a cycle they need a really wide interface, a lot of wires compared to what they'd need to read from the L1 icache. All of this to avoid the decoders, which cost several pipeline stages.

And then, even with all of this work put into x86 instructions, even with the x86 instructions being relatively large for what they do.. in a lot of ways the instruction set still sucks.

Over the last few weeks I've been doing x86 optimization of my Android app. For me this means targeting SSSE3 in 32-bit x86. This is the realistic baseline for x86 on Android; 64-bit use is too low (even among 64-bit capable SoCs, which eg Medfield and Clovertrail+ were not) and SSE4.x doesn't add that much anyway.

I didn't really appreciate this until I had to do it but I can now list many disadvantages SSE4.2 has vs ARMv7 NEON (let alone ARMv8/AArch64). Especially with integer SIMD. SSE has some advantages but they're much fewer.

Here's a comparison of the inner loop from two functions to demonstrate some of what I'm saying:

ARMv7 NEON: http://pastebin.com/7g4Ad46N
x86 SSSE3: http://pastebin.com/E3wwyTif

Actual performance will vary depending on uarch and all that, but the processor executing the second is going to have a really hard time doing it anywhere close to as efficiently as the processor executing the first. There's just only so much uarch can hide; I really can't look at this big gulf and say that ISA doesn't matter. Now this is kind of a contrived example, I picked something that looked especially bad after all, and for some functions SSSE3 basically nails it. But this is definitely enough of a thing to really make me feel a tangible difference.

AVX and AVX2 fix some of the disadvantages and add their own unique benefits. This is basically Intel's admission that ISA does matter, which is why they're addressing weaknesses in the ISA. But AVX is not supported on Celeron and Pentium branded processors, let alone Atoms. And at this point I'm wondering when they ever will be. From my perspective, they're only there on the processors that need them the least. They're sold as a luxury and not as a feature to make the CPU more competitive.

Very interesting, thanks for sharing this.

From a CPU architecture point of view, the advantage of a reasonably simple ISA is also that it gives you the ability to try new things more easily and validate them. Either you can ship the same number of new ideas faster or for less money (e.g. MediaTek), or you can ship more for the same cost and development time (e.g. Apple).
ARM started from far behind Intel, so this wasn't showing at first, but now that ARM SoC vendors are in the same league in terms of revenue, it is starting to show: their pace of performance increases is much higher than Intel's.
Some people will say that ARM vendors will hit the same performance wall as Intel, but then why didn't Intel manage to keep up the pace at least in the low-power end?
There are probably internal reasons for that (Atom not high enough on the priority list), but the ISA is probably not a small part of it.
 
Apr 30, 2015
131
10
81
How could code cost $25 per line? That's pretty high. Decent coders can write hundreds of lines of code on a good day. Granted, we're talking COBOL here, which is like writing code by punching the keyboard, but come on.

Anyway, I think the amount of legacy code that was written in x86 assembly is a lot smaller than the amount of COBOL code. There are still costs associated with porting code written in something like C or C++ away from x86, but they're a different, less severe sort of cost. Even porting ancient assembly code is easier; there are surely tools to help with that which are pretty reliable if you don't care hugely about performance, and I doubt you really could if the code is prehistoric.

With all due respect, they are talking about banking systems, utility company systems, etc.; you can't just sit at a screen and write code, there are design, coding, testing, and integration phases, plus all the quality control.
In 2011, Micro Focus claimed that 1.5 million lines of Cobol were being written per day. Tools must help, but there is still a huge legacy of code to maintain and enhance; Micro Focus' share price has ramped up over the last five years; they must be very busy.
 

B-Riz

Golden Member
Feb 15, 2011
1,530
676
136
That is COMPLETELY ignoring the main benefit of ARM:

x86, as it stands, is only licensed by three companies and two of them aren't very competitive in the general market. Even if a company that has the resources to be competitive (like Apple or Samsung) buys one of the weak companies, the x86 license does not transfer. x86 is a dead end LEGALLY.

Meanwhile anyone and we mean anyone can start making an ARM cpu. THAT is why it will beat x86 one day. Not because of actual tangible technical benefits to one instruction set or the other, but because the ARM market is more competitive and that competition will lead to the best possible CPUs.

What is funny is x86 fans know this- they watched as x86 killed Power, SPARC, and RISC using the same economic argument.

I think the truth is there aren't really any fans of x86, just fans of Intel, and they hate to read the obvious writing on the wall that one day Intel isn't going to matter like they did the last 30 years. Hell, Intel doesn't even know what they are doing. They cut the mobile budget, but increase the "Internet of Things" budget which is basically "mobile but even cheaper." That is why they are building robots and things like that, Intel knows in the long run the consumer market goose is cooked unless they can find a new use case for high margin CPUs.

But this doesn't really matter; the OP references mobile phones, which seems to be a common thread of "ARM is da futurz".

ARM works well here because a loss leader like a phone needs to be as cheap as possible in hardware; the money is made on the contract / after the hardware sale.

My ultimate point, which seems to be lost, is that an ARM desktop chip DESIGNED to compete with an i7 will probably use a similar amount of power and produce a similar amount of heat.

So the low power good performance that ARM is known for is moot in a desktop situation.

AND, why would someone (and not one person, but a whole lot of people and money too) design an ARM desktop chip with little useful ecosystem for it to work in??? Look at all the x86 / x64 software already in existence.

This is just re-invention of the wheel.

ARM complements x86 / x64; it is not a replacement.
 
Apr 30, 2015
131
10
81
I am reading this thread, and writing this comment, using a Raspberry Pi 2; does that make it a desktop machine? The SBC dangles from the 23" display in thin air; it is sufficient for surfing and light computation, and the R Pi 2 has four cores, allowing parallel computation, which I use to monitor data from websites. Each time the R Pi is updated, I will buy a new one; I do not plan to buy x86 again, if I can help it. I must be a (low) power user.
 

TheRyuu

Diamond Member
Dec 3, 2005
5,479
14
81
Now this is kind of a contrived example, I picked something that looked especially bad afterall, for some functions SSSE3 basically nails it. But this is definitely enough of a thing to really make me feel a tangible difference.

AVX and AVX2 fix some of the disadvantages and add their own unique benefits. This is basically Intel's admission that ISA does matter, which is why they're addressing weaknesses in the ISA. But AVX is not supported on Celeron and Pentium branded processors, let alone Atoms. And at this point I'm wondering when they ever will be. From my perspective, they're only there on the processors that need them the least. They're sold as a luxury and not as a feature to make the CPU more competitive.

Just to add a note here that you can reduce code duplication and simplify assembly coding on x86 by using x86inc.asm[1], which provides a bunch of macros for the yasm[2] assembler. It can allow for an easier time writing functions which target different SSE/AVX versions (I believe it has some macro magic that also handles SSE/AVX conversion automagically, i.e. write once and you get both, although I could be wrong about that). Also, this is probably beyond the scope of this thread.

[1] https://git.videolan.org/?p=x264.git;a=blob;f=common/x86/x86inc.asm;hb=HEAD
[2] http://yasm.tortall.net/
 

poofyhairguy

Lifer
Nov 20, 2005
14,612
318
126
ARM works well here because a loss leader like a phone needs to be as cheap as possible in hardware; the money is made on the contract / after the hardware sale.

That is completely false. Most non-Apple phones are very low margin and almost no companies who offer phone service also manufacture smartphones.

The smartphone market in 2016 is exactly like the laptop market: Apple dominates the high end while everyone else competes on price with low margins. ARM works well here because it's cheap.

My ultimate point, which seems to be lost, is that an ARM desktop chip DESIGNED to compete with an i7 will probably use a similar amount of power and produce a similar amount of heat.

The node matters more than instruction set on this point.

AND, why would someone (and not one person, but a whole lot of people and money too) design an ARM desktop chip with little useful ecosystem for it to work in??? Look at all the x86 / x64 software already in existence.

This is just re-invention of the wheel.

Why did Intel design and create server chips twenty years ago when companies like IBM and Sun had most of the best server software in existence for their platforms? Because in the long run Intel was able to provide hardware that was overall as powerful for orders of magnitude less money. The software vendors then followed along, because the other option was that someone would replace them too.

Plus, one thing people often fail to consider in the server/professional market is how much of the software businesses use is open source. A lot of that software can be easily ported to ARM if there is a cost reason to do so, which cuts down on the software moat that separates the instruction sets. It would be harder for a gamer to move to ARM than a webhost, and webhosts buy more CPUs.

Intel has been very pro open source and outright pro Linux for the last ten years, because they saw that cutting MS out of the Wintel empire was a way to ensure their margins remained in the final product price. It would be funny if, because of those efforts (and what they did to make open source software legitimate for industry), it became easier for companies to move off of Intel's products.
 

Gunbuster

Diamond Member
Oct 9, 1999
6,852
23
81
ARM works well here because a loss leader like a phone needs to be as cheap as possible in hardware; the money is made on the contract / after the hardware sale.

You can get a competent smart phone for $30. Do you actually think those $700+ handsets are putting Apple, Samsung and LG in the poor house?

They are making PROFIT! Heck, it's like the last bastion of profit in consumer tech, and they rigorously spin the tale of all that expense and R&D for their taxes and to keep you thinking you just got some great value.

Handsets $$$
Insurance $$$
Carriers $$$

They make $300 X86 laptops and $50 tablets for goodness sake!
 

Exophase

Diamond Member
Apr 19, 2012
4,439
9
81
With all due respect, they are talking about banking systems, utility company systems, etc.; you can't just sit at a screen and write code, there are design, coding, testing, and integration phases, plus all the quality control.

Okay. It's just weird to me to call that $/line when it doesn't really scale that linearly (IMO) given some rather huge and more fixed costs there, but this is really a semantic matter...

In 2011, Micro Focus claimed that 1.5 million lines of Cobol were being written per day. Tools must help, but there is still a huge legacy of code to maintain and enhance; Micro Focus' share price has ramped up over the last five years; they must be very busy.

And COBOL programmers probably become relatively more expensive every year...
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
We already had Windows on ARM a few years ago. The original Surface was made both ways, and it didn't take off. Maybe one day, but this isn't new; this is an attempt to try it again.

Edit:
The biggest issue they had before, and it will remain an issue, is that all the legacy software will not work. Most software designed for current PCs will not work. Developers have to either write in high-level code that works for both, which is less efficient, or write the code twice, once for each version. This led to problems with adoption last time, and will likely make this a very tough sell for a while into the future.
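To illustrate the trade-off being described (a minimal sketch; the function is made up, and the predefined macros shown are the usual GCC/Clang/MSVC ones), the choice is roughly between one generic path that compiles everywhere and separate per-ISA paths that each have to be written and maintained:

Code:
#include <stddef.h>
#include <stdint.h>

/* Generic C: builds for any architecture, performance left to the compiler. */
static void add_u8_generic(uint8_t *dst, const uint8_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = (uint8_t)(dst[i] + src[i]);
}

void add_u8(uint8_t *dst, const uint8_t *src, size_t n)
{
#if defined(__x86_64__) || defined(_M_X64) || defined(__i386__) || defined(_M_IX86)
    /* x86 build: a hand-tuned SSE/AVX path would go here. */
    add_u8_generic(dst, src, n);
#elif defined(__aarch64__) || defined(_M_ARM64) || defined(__ARM_NEON)
    /* ARM build: a hand-tuned NEON path would go here. */
    add_u8_generic(dst, src, n);
#else
    add_u8_generic(dst, src, n);
#endif
}

Every per-ISA branch is code that has to be written, tested, and shipped separately, which is exactly the adoption cost described above.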
 

pooptastic

Member
Oct 18, 2015
87
1
36
Lol, fair point.

Still, it shows how much ARM has been gaining on x86 due to cellphone popularity.
 

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
This is an interesting and slightly misleading paragraph, though it does give a little hope:

The ARM equipped devices will be running full Windows 10, so no desktop apps are left behind, unlike the previous time that Microsoft attempted this. The ARM chips at the time offered much less performance as well, so this time around, it should be a much better experience. Universal Windows Apps will be available compiled for ARM directly, but x86 apps will run in emulation, which is still a cause for concern for both performance and battery life, so we’ll have to see how that pans out. Microsoft has an “optimized” version of Office 365 for the new ARM powered PCs, which likely means it’s been recompiled for native performance.

Obviously, it's not exactly a full version of Windows 10, as it won't run PC versions of code directly, but it does have an emulation mode. And it allows for compilation of the same code to the ARM version. That said, I know things usually don't work so smoothly when working on a new platform due to low level code issues.

It'll be interesting to see how well it works.
 

cytg111

Lifer
Mar 17, 2008
23,551
13,116
136
We already had Windows on ARM a few years ago. The original Surface was made both ways, and it didn't take off. Maybe one day, but this isn't new; this is an attempt to try it again.

Edit:
The biggest issue they had before, and it will remain an issue, is that all the legacy software will not work. Most software designed for current PCs will not work. Developers have to either write in high-level code that works for both, which is less efficient, or write the code twice, once for each version. This led to problems with adoption last time, and will likely make this a very tough sell for a while into the future.

I think that's small potatoes; what really held the old ARM version (Windows RT) back was the fact that it was locked down. Windows Store or bust, and that sucked hard.
 
Reactions: NTMBK

bystander36

Diamond Member
Apr 1, 2013
5,154
132
106
I think that's small potatoes; what really held the old ARM version (Windows RT) back was the fact that it was locked down. Windows Store or bust, and that sucked hard.
I'm not entirely sure if that will change much. It requires a persistent online connection while in use. And was it really tied to the store, or was that the only place to find existing software that worked for it?
 