Threadripper BUILDERS thread


ub4ty

Senior member
Jun 21, 2017
749
898
96
This is a terribly annoying bug that does not seem to affect general day-to-day use if you run Windows. It does, however, cause issues if you compile large codebases or perform frequent compilations (which is the sort of use case you might imagine a Threadripper being used for). There are a number of places this is documented, but the most authoritative source might be AMD's own forum.
https://community.amd.com/thread/215773

I don't want to make a big deal of it, but there doesn't seem to be general awareness of this issue outside of people using a *nix variant. It has been reproduced on Windows using the Windows Subsystem for Linux and also in a VM, so it's certainly not a Linux-specific issue as some seem to believe (it was first really identified on a BSD variant anyway). It can affect random processes on a rare but day-to-day basis, but is most easily reproduced using either the programs built specifically to reproduce the fault, or just building large codebases (a real-life use case that bit me in early May).

So I raised it here as a legitimate complaint: I'd *really* love to buy one of these setups, but as I can't even get a Ryzen stable (with AMD's help) for my specific stability testing, I'm loath to even contemplate putting a couple of extra grand down on what would normally be a very compelling purchase.

We're (the people who this bug affects in the AMD forum) all hoping AMD gets it sorted, and would dearly love anyone who buys one of these things to run some tests on it to demonstrate these issues have been sorted. Until now, people have been disabling SMT, turning up voltages (at AMD's request), and playing with kernel options to disable features like address space randomization, with mixed success (usually it just makes the bug harder to hit; after 24-odd hours of testing, it tends to bite about 30 seconds after the tester posts an "it worked, my problems are solved" message to the forum). The only thing that seems to consistently make it go away is disabling the OpCache, but very few BIOSes seem to expose that option at the moment (mine doesn't).

Anyway, I'll go back to lurking now and wait until this gets positively sorted before dropping > $2200 for another chip & motherboard. The Asus Prime X370 & 1800X that sits idle in the corner is punishment enough.

> Hit the segfault issue after 1hr15min of compilation using some test script that's floating around that uses a livedisk and ramdisk for some crazy set of compilations... So, obviously there's a highly technical issue with a long series of compilations?

> Downloaded Phoronix's test suite
> Ran 3 consecutive back-to-back Linux kernel compilations with -j16. Zero issues.

My question here is: What in the world is this use case that everyone's talking about that's causing segfaults? What is being compiled, and how many times? Is the use case a kernel compilation done over and over again without stopping? Why the creation of a ramdisk? Is memory usage ballooning over time?

If this is what I think it is, then it likely only matters for someone in an enterprise using a Ryzen platform as a build machine in some specific manner? If that's the case, although I recognize this is a bug, it is unlikely to be faced by a general developer running Linux, correct? Further, I hear that disabling the OpCache nullifies the bug (albeit with a ~5% performance penalty). Sounds like a good workaround until they get this sorted, which it appears they are narrowing down:
https://community.amd.com/thread/215773?start=555&tstart=0
On Ryzen there is some sort of interaction between code running at the top of user memory address space and interrupts that can cause FreeBSD to either hang or silently reset.
 
Reactions: lightmanek

Space Tyrant

Member
Feb 14, 2017
149
115
116
I've just proofed my setup (1700 with an ASRock gaming board) using a 4-pass memtest and a Prime95 run without issue. OS: Ubuntu 17.04.

I will now run the famed compile test found at:
http://funks.ddns.net:8080/tools/ryzen/testRyzenGCC.sh
and report back
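For anyone who'd rather not run a random script off the internet blind, the core of these reproducers is just compiling a large codebase over and over and watching for the compiler dying with a segfault. A rough sketch of the idea (not the actual testRyzenGCC.sh; it assumes a kernel tree already unpacked in ./linux and gcc/make installed):

#!/bin/bash
# Rough reproducer sketch: build the kernel repeatedly and flag any run
# where the compiler crashes (segmentation fault / internal compiler error).
cd linux || exit 1
make defconfig > /dev/null
for i in $(seq 1 10); do
    make clean > /dev/null
    if make -j16 > "build_$i.log" 2>&1; then
        echo "run $i: OK"
    else
        echo "run $i: FAILED (see build_$i.log)"
        grep -iE "segmentation fault|internal compiler error" "build_$i.log"
    fi
done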

TBH, I have a funny feeling people aren't proofing their setups via stress test/memtest, and some instability therein is causing the issue in the compile test.

We'll see shortly if I get a clean pass.

[Edit5 17August2017: Long story short, after disabling ASLR I have been unable to reproduce the problem documented in the following paragraphs.]

I discovered the magnitude of this problem when I tried building kernel 4.12.4 yesterday. Using large thread counts, I had *mostly* failures compiling the 21,785,906 lines of code. This is 1000x as big as my main project -- which has never failed to compile.

I disabled ASLR this morning.

Today I've done successful runs at 12, 18, and 24 threads. I'm currently in the midst of 10 consecutive runs at 18 threads. Anyway, no failures in 7 runs after disabling ASLR. From my reading I don't think that's the cure, but clearly it greatly reduces the chance of hitting the bug, at least on my system.

Edit1: Added the following line to /etc/sysctl.conf to persistently disable ASLR:

kernel.randomize_va_space = 0

Followed by the command 'sysctl -p' to make it take effect immediately. This should work on almost any Linux distro. Replacing the '0' with a '1' will enable conservative address space randomization, and a '2' will more aggressively randomize address space locations. My distro (Mint 18) defaulted to '2', which can be viewed at /proc/sys/kernel/randomize_va_space.
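For convenience, the same thing as plain shell commands (standard sysctl usage; the paths should be identical on most distros):

# Check the current setting (2 = full randomization, the usual default)
cat /proc/sys/kernel/randomize_va_space

# Disable ASLR for the current boot only
sudo sysctl -w kernel.randomize_va_space=0

# Make the change persistent, then reload sysctl settings
echo 'kernel.randomize_va_space = 0' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p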

Edit2: My 10x run of 18t Linux 4.12.4 compiles completed successfully. That brings the consecutive successes to 14 since disabling ASLR.

Edit3: I ran an additional set of 12 compiles at 12 threads, also without errors. That brings the total to 26 consecutive compiles of Linux 4.12.4 and a grand total of 566,433,556 lines of code compiled at 12 threads or higher without problems. So, considering how slow I write code, I think my config is going to be OK.

Edit4 05August2017: I configured my test script for an additional 26 Linux compiles at 12 threads. A couple of hours into the test I had a failure. So, confirming my suspicions -- and consistent with others' tests -- disabling ASLR only reduces the frequency of failures. I'm not going to bother with further tests. I'm of the opinion that there may be more than one problem going on. I've read (somewhere) about an SMT-specific failure, and tests with SMT disabled also fail, albeit with reduced frequency. Other testers have reported reduced failures with uOP caching disabled as well.

Of course, we have no way of knowing how rigorously any random system has been tested for hardware problems -- which means we can't really trust any specific report from around the internet. AMD must find these problems themselves, and probably already have.

Edit5 17August2017: I had a bug in my script and the 'make clean' step was removing one of my vmlinux images. That accounted for *all* of the failures with ASLR disabled. That is, it turns out I have had NO failures since disabling ASLR. And, of course, after fixing the flawed script, I've run a few dozen additional compiles, also with no failures.

TL;DR: Initially I encountered failures in *most* 12+ thread Linux compiles, but then disabled ASLR and have since done 26 straight compiles without the error. [Edit5]: A second run of 26 compiles produced no failures -- bringing the total to 0 failures in 1,132,867,112 lines of code compiled. [This number doesn't include the uncounted successful compiles after Edit4.]
 
Last edited:

ub4ty

Senior member
Jun 21, 2017
749
898
96
I discovered the magnitude of this problem when I tried building kernel 4.12.4 yesterday. Using large thread counts, I had *mostly* failures compiling the 21,785,906 lines of code. This is 1000x as big as my main project -- which has never failed to compile.

I disabled ASLR this morning.

Today I've done successful runs at 12, 18, and 24 threads. I'm currently in the midst of 10 consecutive runs at 18 threads. Anyway, no failures in 7 runs after disabling ASLR. From my reading, I don't think that's the cure, but clearly it greatly reduces the chance of hitting the bug, at least on my system.

(I'll update this comment after the current 10x @ 18t completes.)
Yeah, IMO... I'm looking for workarounds and a way forward on this issue (if it becomes a problem for me). It doesn't seem to be an issue on smaller compiles. Also, I don't appear to have any instabilities in a slew of memory/performance tests. So, I'm green-lighting my development configuration and getting back to the code.

I'll keep track of the bug's progress and resolution, but I need to get going on development.
Given this, a Threadripper purchase seems like it will be a go.

- Runtime operation seems bulletproof
- Significant compilation appears to be the only issue, which I will rarely hit. If I do, I will just disable the OpCache until AMD fixes the issue.
 
Reactions: scannall

ub4ty

Senior member
Jun 21, 2017
749
898
96
Reactions: Drazick

ub4ty

Senior member
Jun 21, 2017
749
898
96
Alright, so I did a little research on this:


Their CPU cooler is: 2 x 12cm PWM 4-pin. Thus, they're running a 240mm cooling setup. I wonder which brand. Anyone know this cooler offhand? I guess the same analysis can be done for any pre-built vendor that shows case pics of their Threadripper setup.
 
Reactions: Drazick

wildhorse2k

Member
May 12, 2017
180
83
71

kaporkle

Junior Member
Aug 2, 2017
19
33
61
Feel free to comment, but primarily I would love for all of those building a Threadripper system to post their build information, and get feedback on it. I ordered the below today: (some are preorder)

Hi Markfw, just saw this thread and decided to stop lurking and join. I have some questions about your choice of memory.
Do you expect any issues with getting two 16GB kits instead of one 32GB kit? The memory in your link seems to do well on Ryzen systems.



I pre-ordered the MSI board. Originally I wanted to get 32GB of Trident Z, but the MSI QVL doesn't show any Trident Z kits at all. Do QVL lists tend to expand over time? I know MSI only just posted it yesterday. At the moment there seems to be only one 4-DIMM 32GB kit at 3200 (everything else is rated slower), and it's Hynix RAM:
https://www.newegg.com/Product/Prod...e=HyperX_Predator_32GB-_-20-104-705-_-Product

From my limited reading since yesterday, the Hynix stuff is said to be troublesome for overclocking. Would it be more of a crap-shoot to get Trident Z that works well on other Ryzen systems even if it's not on the QVL? Thanks all.
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
Excellent. Yeah, can you run Thaiphoon Burner on them and drop the die info here?
Muchas gracias
Hiccup. This site doesn't allow file or image posting, and I'm not creating an account elsewhere to post this. But, the part number and information on the DRAM components is the same.
 
Reactions: ub4ty

ajc9988

Senior member
Apr 1, 2015
278
171
116
Hi Markfw, just saw this thread and decided to stop lurking and join. I have some questions about your choice of memory.
Do you expect any issues with getting two 16GB kits instead of one 32GB kit? The memory in your link seems to do well on Ryzen systems.



I pre-ordered the MSI board. Originally I wanted to get 32GB of Trident Z, but the MSI QVL doesn't show any Trident Z kits at all. Do QVL lists tend to expand over time? I know MSI only just posted it yesterday. At the moment there seems to be only one 4-DIMM 32GB kit at 3200 (everything else is rated slower), and it's Hynix RAM:
https://www.newegg.com/Product/Prod...e=HyperX_Predator_32GB-_-20-104-705-_-Product

From my limited reading since yesterday, the Hynix stuff is said to be troublesome for overclocking. Would it be more of a crap-shoot to get Trident Z that works well on other Ryzen systems even if it's not on the QVL? Thanks all.
So, first, it depends. You may need to change timings for compatibility, but it usually is not that bad. But there is variance between kits, even within the same line (silicon lottery), so keep that in mind.

Yes, QVLs are updated, but I wouldn't expect a lot of movement in what's supported. It all depends on whether the MB vendor got the RAM from the RAM vendor to test and put it through its paces. Hell, my Asus M8E added more to the QVL in the past couple of months. But the motherboard may not be able to support the faster speeds. So...
 
Reactions: kaporkle

ub4ty

Senior member
Jun 21, 2017
749
898
96
Hiccup. This site doesn't allow file or image posting, and I'm not creating an account elsewhere to post this. But, the part number and information on the DRAM components is the same.
Boom, so they took the same dies and tuned/tested/proofed them for higher clocks with higher latencies at the factory and attached an XMP profile to them. This means it will be possible to do the same (albeit without professional equipment, with guesswork plus lots of time) for 3200CL14 kits and such.

Yeah, I noticed the 3200CL14 Trident Z kit was missing from the ASRock QVL for Threadripper but is there for X370. Also, Trident Z seems to drop off the list above 3000 on Threadripper, which instead goes for G.Skill's GFX line. Even though they use the same Samsung B-die, I guess it comes down to being Ryzen-ready and tuned w.r.t. XMP profiles. I bet one could just get someone to share the settings from their XMP profile on GFX sticks and the Trident Z would behave the same...

I wish someone from the industry would spill the beans on how these RAM sticks work w.r.t. packaging dies and how they test/spec/fit them to different lines.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
https://videocardz.com/press-releas...ds-ryzen-threadripper-x399-and-epyc-platforms

Noctua coolers for Epyc and Threadripper released.
NH-U14S TR4-SP3: EUR 79.90 / USD 79.90
NH-U12S TR4-SP3: EUR 69.90 / USD 69.90
NH-U9 TR4-SP3: EUR 69.90 / USD 69.90

Would have been interested if it weren't for those crap-colored fans they seem to refuse to change. You'd think that, having built brand awareness by now with their poo-colored fans, they could mature and offer something more aesthetically pleasing by simply changing the dye they use.

Incredible. No sale
 

ajc9988

Senior member
Apr 1, 2015
278
171
116
Boom, so they took the same dies and tuned/tested/proofed them for higher clocks with higher latencies at the factory and attached an XMP profile to them. This means it will be possible to do the same (albeit without professional equipment, with guesswork plus lots of time) for 3200CL14 kits and such.

Yeah, I noticed the 3200CL14 Trident Z kit was missing from the ASRock QVL for Threadripper but is there for X370. Also, Trident Z seems to drop off the list above 3000 on Threadripper, which instead goes for G.Skill's GFX line. Even though they use the same Samsung B-die, I guess it comes down to being Ryzen-ready and tuned w.r.t. XMP profiles. I bet one could just get someone to share the settings from their XMP profile on GFX sticks and the Trident Z would behave the same...

I wish someone from the industry would spill the beans on how these RAM sticks work w.r.t. packaging dies and how they test/spec/fit them to different lines.
This is what I just posted in another forum on this topic (or related to this topic for your last line):


"I've been wondering about quad-channel in the ThreadRipper, and usually how the fewer the DIMM's/SODIMM's the better the chance of maintaining high speed low latency settings.

With ThreadRipper isn't the memory access "shared" across the Infinity Mesh, such that if you got 4 x DIMM's, it's really 2x to each CCX, so running 4 DIMM's is the "minimum" configuration for quad-channel and low enough count to run highest speed and lowest latency?

Or, is there an advantage to 1 DIMM per CCX, and run memory in a 2x configuration?

What's the best memory layout per the design spec / manual?"

Great question. This goes back to the discussion of memory on EPYC. Intel has one massive, consolidated memory controller, but Ryzen has one per die. So, when one die needs information from the set of DIMMs connected to the other die, the request has to jump to the other die, then to the memory, then follow the path back. For 2P, they made a big deal of limiting it to 2 jumps before hitting the RAM. Here, you just have the one jump.

This means you are achieving that speed per controller. So, two kits should work fine instead of one kit with all four tested as matching. The timings are still set together, though, meaning you may still have to find the variance in timings between the two kits.

Now, if you only populated dual channel on one die, you would permanently handicap the other die. If you run just one DIMM per die (single channel on each), you won't get close to the performance of running dual channel on each. So you do need 4 DIMMs to get the higher speeds and lowest latency.

Edit: They also test for voltage to reach certain speeds, then search for timings that the chips can reach uniformly. You flash the XMP, test to see if it runs, repeat until failure, then go back to the last good XMP at a set voltage. There is more to it than that, but that gives a rough assembly-line idea of how it is done.
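Purely to illustrate that last-good-profile loop (flash_xmp and run_memory_stress below are hypothetical stand-ins for whatever internal tooling the vendors actually use, not real commands):

#!/bin/bash
# Hypothetical binning sketch: step the profile up until a failure,
# then bin the kit at the last speed that passed at the chosen voltage.
profiles=(2933 3066 3200 3333 3466 3600)
last_good=""
for speed in "${profiles[@]}"; do
    flash_xmp "$speed"              # hypothetical: program the XMP/SPD profile
    if run_memory_stress; then      # hypothetical: e.g. a memtest-style pass
        last_good="$speed"
    else
        break
    fi
done
echo "Bin this kit at: ${last_good:-below the lowest tested speed}"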

Edit 2: http://forum.notebookreview.com/thr...zen-tr-epyc-vega-polaris-gpus.799348/page-262
 
Last edited:
Reactions: ub4ty

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,752
14,782
136
@Markfw - So the Asrock QVL memory list is published: http://www.asrock.com/mb/AMD/X399 Taichi/index.us.asp#Memory
I know my memory has been tested and verified to reach 3600. You should check yours.
Thanks for that. Mine is not on the QVL, oddly enough. All the kits listed at 3600 speed are rated higher by G.Skill, like 3733, 4000, 4133, 4266, etc. And there are some that are only two characters off at the end, GVK instead of GTZ (mine), but those say 3466 speed, not 3600. Mine are F4-3600C16D-16GTZ, CAS 16, 3600 speed.
 
Reactions: Drazick and ajc9988

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,752
14,782
136
Hi Markfw, just saw this thread and decided to stop lurking and join. I have some questions about your choice of memory.
Do you expect any issues with getting two 16GB kits instead of one 32GB kit? The memory in your link seems to do well on Ryzen systems.



I pre-ordered the MSI board. Originally I wanted to get 32GB of Trident Z, but the MSI QVL doesn't show any Trident Z kits at all. Do QVL lists tend to expand over time? I know MSI only just posted it yesterday. At the moment there seems to be only one 4-DIMM 32GB kit at 3200 (everything else is rated slower), and it's Hynix RAM:
https://www.newegg.com/Product/Prod...e=HyperX_Predator_32GB-_-20-104-705-_-Product

From my limited reading since yesterday, the Hynix stuff is said to be troublesome for overclocking. Would it be more of a crap-shoot to get Trident Z that works well on other Ryzen systems even if it's not on the QVL? Thanks all.
I went with RAM that appeared to be Samsung B-die, the same as my Ryzen systems, but 3600 instead of 3200, the most the Taichi mobo supports. It's not on the QVL (found out after I ordered), but I am going to try. TR is just two Ryzens on one chip, and quad channel with some tweaks, so I think I am good. I will certainly reply after I get it built.
 

ub4ty

Senior member
Jun 21, 2017
749
898
96
This is what I just posted in another forum on this topic (or related to this topic for your last line):
Please post source link


"I've been wondering about quad-channel in the ThreadRipper, and usually how the fewer the DIMM's/SODIMM's the better the chance of maintaining high speed low latency settings.

With ThreadRipper isn't the memory access "shared" across the Infinity Mesh, such that if you got 4 x DIMM's, it's really 2x to each CCX, so running 4 DIMM's is the "minimum" configuration for quad-channel and low enough count to run highest speed and lowest latency?

Or, is there an advantage to 1 DIMM per CCX, and run memory in a 2x configuration?

What's the best memory layout per the design spec / manual?"

Great question. This goes back to the discussion of memory on EPYC. Intel has one massive, consolidated memory controller, but Ryzen has one per die. So, when one die needs information from the set of DIMMs connected to the other die, the request has to jump to the other die, then to the memory, then follow the path back. For 2P, they made a big deal of limiting it to 2 jumps before hitting the RAM. Here, you just have the one jump.
Here's the EPYC layout:

Each die gets 4 DIMM slots and 2 channels of DDR. One memory controller per die. So, on a per-die basis, you're in the realm of Ryzen. I'm not sure what is meant by "Here, you just have the one jump."

The jump spans two Infinity Fabric hops each way:

Die 1 [request for memory hanging off of Die 2]
REQ: [across die-interconnect fabric] -> [Die 2 fabric -> DIMM] - {2 Infinity Fabric hops}
REP: [across die-interconnect fabric] -> [Die 1 fabric -> core] - {2 Infinity Fabric hops}
4 hops round trip

Latency is detailed here:
https://www.servethehome.com/amd-epyc-infinity-fabric-latency-ddr4-2400-v-2666-a-snapshot/
Green and yellow interaction... not across to the 2P [grey].


Also, I notice that they did a Linux kernel compile test on an hourly basis. Is this B2 stepping correcting the compile issue found in B1? How exactly does this work? If they don't fix the compile crash problem in B1 errata fixes, can you force them to give you a B2-stepping Threadripper processor once it's produced?

This means you are achieving that speed per controller. So, two kits should work fine instead of one kit with all four tested as matching. The timings are still set together, though, meaning you may still have to find the variance in timings between the two kits.

Now, if you only populated dual channel on one die, you would permanently handicap the other die. If you run just one DIMM per die (single channel on each), you won't get close to the performance of running dual channel on each. So you do need 4 DIMMs to get the higher speeds and lowest latency.
K, yes. So, the minimum is 2 DIMMs, for which you'd be running each die in single-channel mode.
With 4 DIMMs you'd be utilizing both dies' dual-channel capability.

Edit: They also test for voltage to reach certain speeds, then search for timings that the chips can reach uniformly. You flash the XMP, test to see if it runs, repeat until failure, then go back to the last good XMP at a set voltage. There is more to it than that, but that gives a rough assembly-line idea of how it is done.
Excellent. Thank you for this info. I imagine they use professional-grade hardware that automates the search process, locks in the best value, and bins it appropriately. I guess this is also where the kit sizes (2/4/8 DIMMs) derive from, in that they test them simultaneously, and thus there is a little more overhead and premium selection going on. I think I understand the process now. Thank you very much.
 
Last edited:

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,752
14,782
136
Hiccup. This site doesn't allow file or image posting, and I'm not creating an account elsewhere to post this. But, the part number and information on the DRAM components is the same.
Sure it does. Use {IMG} and {/IMG} (but with square brackets) around the hosted URL of the file, like this one (the case my TR system will be in).
 
Last edited:
Reactions: Drazick

ajc9988

Senior member
Apr 1, 2015
278
171
116
Thanks for that. Mine is not on the QVL, oddly enough. All the kits listed at 3600 speed are rated higher by G.Skill, like 3733, 4000, 4133, 4266, etc. And there are some that are only two characters off at the end, GVK instead of GTZ (mine), but those say 3466 speed, not 3600. Mine are F4-3600C16D-16GTZ, CAS 16, 3600 speed.
So, match the timings. What G.Skill does is use the same RAM DIMM, then often change the last characters to signify the color of the heat spreader. Be careful, as they have a couple of similarly priced lines that are actually different, such as the 4133 @ CL19-21-21 and the 4133 @ CL19-19-19. So, if the timings are the same but the color is different, they are the same DIMMs.

Example:
These two are the same:
https://www.gskill.com/en/product/f4-4133c19d-16gtza
https://www.gskill.com/en/product/f4-4133c19d-16gtzkw

This is different from those above:
https://www.gskill.com/en/product/f4-4133c19d-16gtzc
https://www.gskill.com/en/product/f4-4133c19d-16gtzkwc
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
25,752
14,782
136
So, match the timings. What G.Skill does is use the same RAM DIMM, then often change the last characters to signify the color of the heat spreader. Be careful, as they have a couple of similarly priced lines that are actually different, such as the 4133 @ CL19-21-21 and the 4133 @ CL19-19-19. So, if the timings are the same but the color is different, they are the same DIMMs.

Example:
These two are the same:
https://www.gskill.com/en/product/f4-4133c19d-16gtza
https://www.gskill.com/en/product/f4-4133c19d-16gtzkw

This is different from those above:
https://www.gskill.com/en/product/f4-4133c19d-16gtzc
https://www.gskill.com/en/product/f4-4133c19d-16gtzkwc
I already ordered the day I started this thread, so I will return/exchange them if they don't do 3600:
F4-3600C16D-16GTZ

Now I see that they have the same thing, but CAS 15, for $24 more! I would have gotten those if they'd had them then.
 
Reactions: Drazick and ajc9988

ajc9988

Senior member
Apr 1, 2015
278
171
116
Please post source link


Here's the EPYC layout:

Each die gets 4 DIMM slots and 2 channels of DDR. One memory controller per die. So, on a per-die basis, you're in the realm of Ryzen. I'm not sure what is meant by "Here, you just have the one jump."

The jump spans two Infinity Fabric hops each way:

Die 1 [request for memory hanging off of Die 2]
REQ: [across die-interconnect fabric] -> [Die 2 fabric -> DIMM] - {2 Infinity Fabric hops}
REP: [across die-interconnect fabric] -> [Die 1 fabric -> core] - {2 Infinity Fabric hops}
4 hops round trip

Latency is detailed here:
https://www.servethehome.com/amd-epyc-infinity-fabric-latency-ddr4-2400-v-2666-a-snapshot/
Green and yellow interaction... not across to the 2P [grey].


Also, I notice that they did a Linux kernel compile test on an hourly basis. Is this B2 stepping correcting the compile issue found in B1? How exactly does this work? If they don't fix the compile crash problem in B1 errata fixes, can you force them to give you a B2-stepping Threadripper processor once it's produced?


K, yes. So, the minimum is 2 DIMMs, for which you'd be running each die in single-channel mode.
With 4 DIMMs you'd be utilizing both dies' dual-channel capability.


Excellent. Thank you for this info. I imagine they use professional-grade hardware that automates the search process, locks in the best value, and bins it appropriately. I guess this is also where the kit sizes (2/4/8 DIMMs) derive from, in that they test them simultaneously, and thus there is a little more overhead and premium selection going on. I think I understand the process now. Thank you very much.
What I refer to as a hop follows the way AMD discussed it for EPYC: one hop to the mirroring core on the second processor, then one jump to the correct die on that processor, all before hitting memory. Here, you describe it correctly: it would jump to the other die, then out to memory. I'm not talking about the round trip or the full route to memory. Does that make sense now?
 

MarkPost

Senior member
Mar 1, 2017
239
345
136
https://videocardz.com/press-releas...ds-ryzen-threadripper-x399-and-epyc-platforms

Noctua coolers for Epyc and Threadripper released.
NH-U14S TR4-SP3: EUR 79.90 / USD 79.90
NH-U12S TR4-SP3: EUR 69.90 / USD 69.90
NH-U9 TR4-SP3: EUR 69.90 / USD 69.90

Would have been interested if it weren't for those crap-colored fans they seem to refuse to change. You'd think that, having built brand awareness by now with their poo-colored fans, they could mature and offer something more aesthetically pleasing by simply changing the dye they use.

Incredible. No sale
Yeah, I just installed an NH-D15S in my Ryzen system, and the first thing I did was swap the ugly Noctua fan for a Corsair ML140 PRO LED Red; now it looks so much better.

BTW, these Noctua coolers are really great. They perform just like the best AIO coolers, but VRM temp is a lot lower than with an AIO. Until now I had a SilentLoop 280 installed, and CPU temps are about equal (even a little lower with the NH-D15S), while VRM temp has dropped around 15°.

IMO, AIOs are overrated.
 

kaporkle

Junior Member
Aug 2, 2017
19
33
61
Thank you ajc9988 and Markfw. I think I will hold off on the memory purchase for another week to see if the QVL gets any updates.
 
Reactions: ajc9988