Ethereum GPU mining?


Accord99

Platinum Member
Jul 2, 2001
2,259
172
106
Wait, is that 3D NAND really on an older process? I thought both the 950 Pro and MX300 were newer drives than my MX200.
I don't recall the process that Micron is using, but Samsung is/was using 40nm for its 3D NAND:

By stacking transistors (i.e. cells when speaking about NAND) vertically, Samsung is able to relax the process node back to a much more convenient 40nm. When there are 32 cells on top of each other, it is obvious that there is no need for a 10nm-class node because the stacking increases the density, allowing production costs to scale lower. As we have seen with the history of NAND die shrinks, a higher process node provides more endurance and higher performance, which is what the 850 Pro and V-NAND is all about.

https://www.anandtech.com/show/8216/samsung-ssd-850-pro-128gb-256gb-1tb-review-enter-the-3d-era
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
The BPX reports total host writes of 69TB (69,109 GB). The MX200 reports total host writes of 22TB (22,187 GB). If I had kept closer track of drive health, I could tell when the BPX accumulated the majority of those writes.

That's an amazing amount. Despite that, the 9-months-for-8% calculation suggests the drive should still last quite a while.
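As a quick back-of-the-envelope (just a sketch in Python, using the 8%-in-9-months figure above):

# Rough wear-rate extrapolation: ~8% of rated endurance used in ~9 months.
percent_used = 8.0
months_elapsed = 9.0

months_to_wear_out = months_elapsed * (100.0 / percent_used)
print(f"~{months_to_wear_out:.1f} months (~{months_to_wear_out / 12:.1f} years) at this rate")
# -> ~112.5 months (~9.4 years) at this rate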

The Optane caching drive I have is at 97% with 5.4TB written. I bought it back in late May. The X25-M drive I bought in 2009 when it came out says 87% remaining. I don't remember the number for writes, maybe 15-20TB? I used the X25-M system as my main one for a few years.

Wait, is that 3D NAND really on an older process? I thought both the 950 Pro and MX300 were newer drives than my MX200.

Yes, as Accord99's post shows, 3D NAND is on an older lithography than planar NAND. Most guesses put it at 35-50nm. Not only have they reached a point where it's extremely difficult to scale down further, but every time the geometries get smaller you lose endurance. So they stack multiple layers but step back a few process generations, which should reverse the endurance loss caused by cutting-edge planar NAND. Of course the drive is TLC, so I think there will be an overall loss, but not as much as an MLC vs. TLC comparison on the same process would suggest.
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
edit: redacted, @IntelUser2000 explained it pretty well. So Micron is probably doing the same stacking on an older node as Samsung.

And yeah the wear levels I've piled up on my NVMe drive are pretty creepy.

I might have to go get me one of those MX300s, let it sync the blockchain, and run it as close to 24/7 as I can just to see how many writes I can pile up.

FOR SCIENCE
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
So Micron is probably doing the same stacking on an older node as Samsung.

Yes. 3D stacking is the way to go for NAND. It's not as viable for logic, since logic chips run hot, but that's not an issue for NAND. Plus they've got no choice other than to keep stacking.

I might have to go get me one of those MX300s, let it sync the blockchain, and run it as close to 24/7 as I can just to see how many writes I can pile up.

I always like to see manufacturer-stated endurance ratings: https://www.anandtech.com/show/9702/samsung-950-pro-ssd-review-256gb-512gb

The 512GB 950 Pro is rated at 400TBW, for example. Early SSDs like the X25-M didn't explicitly state TBW, just a simple "xx writes per day would be enough for folks".
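For what it's worth, a TBW rating converts to the old writes-per-day style easily enough. A rough Python sketch (the 5-year warranty period is my assumption here for illustration):

# Convert a TBW endurance rating into drive-writes-per-day (DWPD).
# 400 TBW / 512 GB comes from the review linked above; the 5-year
# warranty period is assumed for illustration.
tbw = 400
capacity_gb = 512
warranty_years = 5

dwpd = (tbw * 1000) / (capacity_gb * warranty_years * 365)
print(f"~{dwpd:.2f} drive writes per day")   # ~0.43 DWPD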

This is even more reason to fix the massive amount of resources required by the Mist wallet.
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
This is even more reason to fix the massive amount of resources required by the Mist wallet.

So, two tidbits to report here.

1). My BPX is now reporting only 69TB (not 89TB) total writes via SMART. Odd. Not sure what made it change the reported value like that.

edit: I am clearly on crack, it never showed 89TB. Gotta defrag the brain. Guh.

2). I found a 146 GB partition on my old MX200 that wasn't doing anything important (it had some old Steam titles on it from when the MX200 was my main drive). So I cleared it out completely and am now syncing the blockchain to it via Ethereum Wallet. I had to do some symlinking (rough sketch below); passing geth arguments to the Ethereum Wallet executable made it fail to sync entirely, which sucks.
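The relocation was basically "move the chaindata folder, leave a directory symlink behind." A rough Python sketch of the idea (paths are made up, geth/Ethereum Wallet have to be closed while you do it):

# Move geth's chaindata to a spare partition and leave a symlink behind so
# Ethereum Wallet still finds it at the default location. Paths are
# hypothetical; on Windows, creating the link needs admin rights (or
# Developer Mode).
import os
import shutil

old_path = r"C:\Users\me\AppData\Roaming\Ethereum\geth\chaindata"
new_path = r"E:\chaindata"   # the cleared-out spare partition

shutil.move(old_path, new_path)
os.symlink(new_path, old_path, target_is_directory=True)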

According to my metrics (mostly HWiNFO64), SMART is reporting 53GB in writes to the MX200 despite Windows showing that only 7GB have been written to the drive thus far.

Also, Ethereum Wallet is syncing FAST on the older blocks. I run maybe a 220 MB/s download speed via the 802.11ac NIC integrated into my X370 Taichi, on a (currently) 3.9 GHz R7-1800x with 16 GB of RAM, and Ethereum Wallet has pulled nearly 3 million blocks in 25 minutes. Of course it hasn't synced all the chain structure yet either. We'll see what happens on the last 10k blocks or so.

The only other interesting bit to report right now is that the MX200 has averaged ~49MB/s write speeds during the sync operation. The number seems to be climbing. Peak sustained write reported by HWiNFO64 was ~122 MB/s for the session. It's not a system drive so syncing is the only thing it's doing right now. Not sure how many HDDs could keep up with that, assuming a preponderance of random writes.

edit: syncing is slowing down at about 3.9 million blocks

https://www.nytimes.com/2017/12/25/...ment/germany-electricity-negative-prices.html

Getting PAID to CONSUME ELECTRIC POWER in Germany? Mwhahahaha! Mining for all!

They have excess grid power, so they gotta dump it somewhere. The alternative is to run massive heaters on-site to dump the extra power. Tesla megabatteries, where are you?
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
So Ethereum Wallet stopped syncing with 203 blocks remaining. Pfft.

Task Manager is reading no activity from the MX200 drive that stores chaindata. No real activity anywhere else. Ethereum Wallet has dropped from maybe 8% of CPU time to .4%. It's sitting there, doing nothing. Also Ethereum Wallet reports "peers" instead of telling me how many are connected. Not 0 peers, not 20 peers, just . . . "peers". Will restart client to see if it can finish off the last few blocks.

For those of you who cannot sync the last few blocks . . . I do not think it is your HDD that is to blame.

Anyway, total stats for a fresh sync to an MX200 drive:

Total writes reported by SMART: 293 GB
Total space used on drive as reported by Win10: 38GB
Average write speed reported by HWiNFO64 during sync: ~58 MB/s
Max write speed reported by HWiNFO64 during sync: ~148 MB/s

edit: as an addendum, I want to add that the massive chain structure download the client hits you with in the last 200-300 blocks adds another 30GB+ to the SMART record of total writes while adding maybe 8 GB more to the drive. Once you handle all the chain structure elements, the final blocks will download, and you're synced.

Syncing on this machine took about 6 1/2 hours. Now I get to leave it running for a while to see what else happens to my poor MX200 . . .
 

Charlie22911

Senior member
Mar 19, 2005
614
231
116
I’ve got what I suspect is an SMR drive (Seagate 2.5” 2TB slim); it just isn’t able to keep up as new blocks are added.
I’ve abandoned the NUC for an OptiPlex 7010 I have sitting around (i7-3770, 16GB RAM). I also have a 2TB WD Black 3.5” HDD lying around; I’ll play with that combo in a couple of weeks when I get back from my winter trip.
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
Should be interesting to see if that WD Black can keep up.

So I've restarted Ethereum Wallet twice since getting stuck. Got stuck with 203 blocks, then the second time with 90 blocks. It's making some progress again after the second restart.

What seems to be happening is that you get hit with massive chain structure downloads when you get into the last few blocks, and the client seems to mishandle those if you press "launch client" before getting to the last block. Not a lot of disk activity here though.

edit: sync finished, see above.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
adds another 30GB+ to the SMART record of total writes while adding maybe 8 GB more to the drive.

8GB actual increase in capacity used while adding 30GB+ to SMART?

Is this a real-world example of write amplification being much higher than 1? Or is the syncing process doing deletes of its own as it's writing? Otherwise the download size would be even bigger?
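Just to put a number on what was reported (figures from the quoted post; this is the ratio as seen by SMART, before any NAND-level amplification the controller adds on top):

# Apparent write churn during that chain-structure phase: host writes added
# to the SMART counter vs. net growth in space used (figures from the post
# being quoted).
smart_writes_gb = 30   # "30GB+" added to SMART's host-write total
net_growth_gb = 8      # actual increase in space used on the drive

print(f"~{smart_writes_gb / net_growth_gb:.1f}x more written than kept")   # ~3.8x
# SMART's host-write counter includes data that later gets overwritten or
# deleted again (e.g. database compaction), so a ratio well above 1 here
# doesn't by itself prove controller-level write amplification.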
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
8GB actual increase in capacity used while adding 30GB+ to SMART?

Is this a real-world example of write amplification being much higher than 1? Or is the syncing process doing deletes of its own as it's writing? Otherwise the download size would be even bigger?

That's what has me confused. I'm not sure if SMART is recording writes correctly or what's really going on here.

If you think that's wacky, check this out: once I restarted the client yesterday for the second time (which got me going on the massive chain structure download that hit when I had 90 blocks left to go), I restarted HWiNFO64 and left it running since then. That was 22 hours ago. In that time, SMART reports 417GB worth of host writes. So between the chain structure download and DLing new blocks, I've piled up more than 400 GB of host writes? Wow.

Now that I'm fully synced, I'm gonna restart HWiNFO64 and see what SMART reports over the next 24 hours or so. I want to take the chain structure DL data off the table and see what happens. Regardless, it's telling that pulling down the first ~4.8 million blocks, plus a lot of chain structure to go with them, only registered 293 GB of host writes, but everything since then . . .
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
I am doing geth fastsync now, because I want to see TBW on my XPS12's PM830 SSD, and because I want to get my wallet working.

Update: I did it on 3 systems. They all show a difference of several times between the bytes-written figure in SMART and the storage actually taken up on the drive, so geth sync is very demanding. Roughly 2 hours of syncing has written 33GB according to HWiNFO64's SMART figure for the XPS 12. That suggests a write speed of 4-6MB/s.

Average write speed reported by HWiNFO64 during sync: ~58 MB/s

Are you sure it's that high? Look at the average numbers rather than Current/Minimum/Maximum. Mine reports 7.9MB/s. I believe that's realistic considering the speed of the internet connection. Mine is rated at 150Mbps; 58MB/s means the internet speed would need to be around 500Mbps. I guess you might have a Gbit connection.
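A quick unit sanity check (rough sketch, treating 1 MB/s as 8 Mbps and assuming the writes roughly track the download):

# Line speed needed to sustain a given average write rate, assuming the
# writes roughly track the download (1 MB/s = 8 Mbps).
def line_speed_mbps(write_mb_per_s):
    return write_mb_per_s * 8

print(line_speed_mbps(58))    # 464 Mbps -- roughly half-gigabit territory
print(line_speed_mbps(7.9))   # 63.2 Mbps -- fits comfortably under 150 Mbps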
 

Charlie22911

Senior member
Mar 19, 2005
614
231
116
When doing a sync on my main PC I was also seeing spikes to around 60-70 MB/s; I have Comcast 150 Mbps.
When I was trying to sync with the NUC, the drive was seeking like crazy so it’s fair to say a lot of random accesses were taking place.

I’m sure there is an easy answer in the source code, it’s just way out of my depth.
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
Are you sure it's that high?

Yes. I made sure to check the average (fourth field in HWiNFO64 by default). Remember that was for the initial download of blocks. It tore through blocks pretty quickly for the first 3.9 million or so.

In contrast, maintaining synchronization shows an average of ~5.2 MB/s written to the MX200.

I am not 100% certain what needs to be done at the code level to reduce host writes during block synchronization. There is probably some good reason why it has to write to the drive so often. I just can't fathom what that is. My next experiment will be to try using a large cache setting to see if that affects total host writes. I only have 16GB of RAM, but I can probably do around 12.228 GB before risking RAM shortages.
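Something like this is how I plan to pick the number (just a sketch; psutil is a third-party module, and the 4 GB headroom is my own guess at what the OS and browser need, not anything geth prescribes):

# Pick a geth --cache value (in MB) from currently-available RAM, keeping
# headroom for the OS and other apps.
import psutil

headroom_mb = 4096
available_mb = psutil.virtual_memory().available // (1024 * 1024)
cache_mb = max(1024, available_mb - headroom_mb)

print(f"geth --cache={cache_mb}")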
 

Charlie22911

Senior member
Mar 19, 2005
614
231
116
Yes. I made sure to check the average (fourth field in HWiNFO64 by default). Remember that was for the initial download of blocks. It tore through blocks pretty quickly for the first 3.9 million or so.

In contrast, maintaining synchronization shows an average of ~5.2 MB/s written to the MX200.

I am not 100% certain what needs to be done at the code level to reduce host writes during block synchronization. There is probably some good reason why it has to write to the drive so often. I just can't fathom what that is. My next experiment will be to try using a large cache setting to see if that affects total host writes. I only have 16GB of RAM, but I can probably do around 12.228 GB before risking RAM shortages.

If you do increase the cache size, be sure to disable the Superfetch service; that made a huge difference for me in used RAM.
 

Erithan13

Senior member
Oct 25, 2015
218
79
66
After CryptoKitties comes EtherTanks, which launched in the past day. I bought the cheapest tank on a whim to see what happens and am already on the way to breaking even. If you're going to get in on this, now is absolutely the time.

Interesting to see Ripple knock us down from second place by market cap. Have seen people argue very passionately for and against it. I'd be lying if I said I wasn't a little miffed by it getting all the attention while eth sits stewing on the same price, but it does demonstrate that BTC is no longer the only game in town.

Anyone cashing out just now? I agonised over it and decided to take out a couple of eth near the recent peak price. Yes, that eth may be worth far more in the future, but it can buy me a nice treat right now and cover my previous mining costs.
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
All I can say about cashing out is that any disciplined investor has a target. Reach the target, then sell. Maybe you will make more in the future, maybe you won't. But if you hit your target and sell, at least you know you met your own expectations.

Granted we haven't even reached PoS yet, but certainly those of us who have been in it since 2016 have reached our targets by now. The price increases have been crazy.

Ripple is economic nonsense. Some people will make money on it now, just as they did earlier in the year. But in the end, something like that has to come crashing down.

If you do increase the cache size, be sure to disable the Superfetch service; that made a huge difference for me in used RAM.

I'll look into it. Will probably try the --cache option later tonight.

edit: According to my last 24 hours of testing, geth racked up around 424 GB of host writes while actively syncing new blocks. So I restarted my data counter, closed Ethereum Wallet, ran geth.exe from the command line with the --cache=10240 option (the most RAM I could sacrifice for now), and then relaunched Ethereum Wallet on top of that. It recognized that geth was already running and did not attempt to launch a redundant copy (yay).

Sadly this leaves me with very little RAM available for anything else, but I will leave it running at least for a few hours to see how many host writes I record in that time. It seems to be pretty consistent from one hour to the next. I could easily have predicted the 24h host write total from 1-2 hours of operating under the previous conditions (using the default cache value of 128).
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,786
136
My XPS12 finally finished syncing, and I was able to access my wallet. Granted, that was yesterday, and it took less than 4 hours. That was with fastsync enabled.

I am going to talk to a data recovery service to see what they can do with my SSD. They said if recovery isn't possible, there's no cost.
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
Running with the --cache=10240 option has been a real eye-opener.

The downside is that, even with superfetch disabled, I've got maybe 800 MB of RAM left after taking out what is used by Waterfox, Ethereum Wallet, HWiNFO64, and suchlike. That's out of the 15.9 GB Windows says is usable.

Now for the real data:

Within the first 27 minutes, the MX200's SMART data showed 18 GB of host writes. After a total of 213 minutes, it shows 19 GB of host writes. Apparently there was a surge of drive activity when initializing geth and downloading the few outstanding blocks that had built up on the chain between the time I ended my previous trial and began the new one; in the three hours since, it has only picked up 1 GB of host writes. That's astonishing. I've chosen not to continue the experiment on this machine because there's not much I can do with it given how little RAM remains. Regardless, the data seems to indicate that setting up a very large cache by way of the --cache option eliminates most of the drive activity associated with block syncing/verification.

Also, as of 213 minutes into the run, the average write speed reported by the drive in HWiNFO64 is sitting around 1.5 MB/s. That's easily within the capabilities of a platter-based HDD, even if all the writes are random. Reported average read rate is ~4.6 MB/s which also should be doable by a 7200 rpm HDD, even if the reads are all random.

If you're going to set up a dedicated node box, you should be able to use a spinner. You just need to set a large cache value; the bigger, the better. Some folks have been trying 8192, but I would go larger if possible. I can't say for sure that large --cache values really work with spinners until I've tried it myself. I might have to install my old 640 GB WD Black and see if it's doable on that thing.

Simply relying on an SSD to take care of the problem does NOT seem a wise decision, since the default cache value of 128 leads to an insane number of host writes to the drive, even when handling a relatively small amount of data. Over 400 GB in writes per day makes no damn sense.
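To put that in perspective, here's a rough sketch using the 400TBW rating quoted earlier for the 512GB 950 Pro (substitute your own drive's number; the ~8 GB/day figure is my ballpark from the 1 GB in three hours above):

# Days of rated endurance at the two observed host-write rates: the default
# cache (~424 GB/day, from the 24-hour test above) vs. the --cache=10240 run
# (~1 GB per 3 hours, call it ~8 GB/day). 400 TBW is the 950 Pro rating
# quoted earlier in the thread.
tbw_rating = 400

for label, gb_per_day in [("default cache (128)", 424), ("--cache=10240", 8)]:
    days = tbw_rating * 1000 / gb_per_day
    print(f"{label}: ~{days:,.0f} days (~{days / 365:.1f} years)")
# default cache (128): ~943 days (~2.6 years)
# --cache=10240: ~50,000 days (~137.0 years)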

People hoping to casually use Ethereum Wallet for basic exchange of tokens, viewing wallet totals, etc. will probably want/need an SSD since the default settings preclude anything else. Using large --cache values requires at least a tiny bit of "expert" knowledge, as well as a fair amount of free RAM.

I am going to talk to a data recovery service to see what they can do with my SSD. They said if recovery isn't possible, there's no cost.

Good of them to take a look for free. Hope it isn't a total loss for you.
 

beginner99

Diamond Member
Jun 2, 2009
5,309
1,748
136
This syncing process is extremely annoying. I bought ETH ages ago and never synced for months, and starting from scratch was advised, so I did. It still takes forever. And once you are there, you basically have to leave your PC running geth for several hours each day to be able to catch up. How do you guys handle this? A full resync every other month? But then when you want to sell, you will have to wait hours, or by then maybe days, until you can actually sell, and the price might have crashed by then. For sure the software stack is not yet ready for prime time and general adoption.
 

thilanliyan

Lifer
Jun 21, 2005
11,994
2,216
126
This syncing process is extremely annoying. I bought ETH ages ago and never synced for months, and starting from scratch was advised, so I did. It still takes forever. And once you are there, you basically have to leave your PC running geth for several hours each day to be able to catch up. How do you guys handle this? A full resync every other month? But then when you want to sell, you will have to wait hours, or by then maybe days, until you can actually sell, and the price might have crashed by then. For sure the software stack is not yet ready for prime time and general adoption.
Get a hardware wallet like the Ledger Nano S. I stopped syncing completely once I got that, since there is no need for it.
 

DrMrLordX

Lifer
Apr 27, 2000
22,496
12,367
136
I only sync when I feel like running a node. It's rare I have to move any ETH around. Most people use MyEtherWallet to do that . . . or they risk a 3rd-party wallet like Parity or I guess Jaxx?
 