
ASRock X99 WS-E memory compatibility

Printed From: ASRock.com
Category: Technical Support
Forum Name: Intel Motherboards
Forum Description: Question about ASRock Intel Motherboards
URL: https://forum.asrock.com/forum_posts.asp?TID=271


Topic: ASRock X99 WS-E memory compatibility
Posted By: orthoceros
Subject: ASRock X99 WS-E memory compatibility
Date Posted: 17 Jun 2015 at 2:08am
Hello ASRock,
I have a pre-sales question with respect to the memory compatibility of your X99 WS-E board ( http://www.asrock.com/mb/Intel/X99%20WS-E ).
In the specs, the board supports RDIMM ECC memory (with a Xeon CPU). When the board was released, only 16GB RDIMM ECC memory modules were available, but now 32GB RDIMM ECC modules are on the market, e.g. the dual-rank HP 728629-B21 32GB modules (see http://www8.hp.com/us/en/products/smartmemory/product-detail.html?oid=6987490#!tab=specs ).
I have also seen that there are already many BIOS updates available that "Improve DRAM module compatibility" (see http://www.asrock.com/mb/Intel/X99%20WS-E/?cat=Download&os=BIOS ).
However, there is no more detailed information, and I wonder: does this board already support 8x32GB dual-rank RDIMM modules, i.e. 256GB of RAM? This would make it a great mainboard for scientific computing! :)
Thanks!



Replies:
Posted By: orthoceros
Date Posted: 18 Jun 2015 at 3:33pm
Is this not the right place to ask ASRock mainboard/RAM compatibility questions?

In addition to the above question, I am also interested in the compatibility of just four 32GB modules (RDIMM, dual-rank, with a Xeon CPU). The original specs already state that 128GB is supported, so this should work for certain, even with the newer DIMM modules; is this correct? (Background idea: this would keep four DIMM slots free for a later upgrade to 256GB, in case the current BIOS cannot yet handle 256GB...)

Thanks for any info.



Posted By: Xaltar
Date Posted: 18 Jun 2015 at 3:38pm
You are in the right place; tech support will have to take this one. Tech support sometimes takes a while to respond, often because, in a situation like the one you mentioned, they need to test first. The RAM you listed was not available at the time of the board's release, so I would imagine it would require testing and possibly a BIOS update, if it works with such high-density RDIMMs at all.

Welcome to the forums :)


Posted By: orthoceros
Date Posted: 18 Jun 2015 at 3:52pm
Great, thank you very much! Then I will patiently wait...


Posted By: Cydona
Date Posted: 20 Jun 2015 at 12:04am
I'll also be very curious to hear any feedback on this matter. I likewise would like to populate the board with 4x32GB now, with the intention of adding another 4x32GB in the future.

Also on this matter, in addition to any insight on compatibility with dual-rank modules, can you please comment on compatibility with quad-rank modules such as the Samsung M386A4G40DM0?

These seem to be less expensive than the dual-rank versions I have come across.

Thank you for any insights!

http://www.samsung.com/global/business/semiconductor/file/product/DDR4_Product_guide_Dec13.pdf

http://www.samsung.com/global/business/semiconductor/product/computing-dram/detail?productId=8028&iaId=2427

http://www.newegg.com/Product/Product.aspx?Item=N82E16820147384


Posted By: orthoceros
Date Posted: 24 Jun 2015 at 8:34pm
>> compatibility with quad rank modules such as these Samsung M386A4G40DM0
These modules are LRDIMMs (load reduced DIMMs with an extra buffer chip), not RDIMMs. Hence, I fear, they do not work in the X99 workstation boards, unfortunately.

However, there are already less expensive and system-independent 32GB RDIMMs on the market, for example the Transcend TS4GHR72V1C (with a good price tag of about 350 EUR per module; specification: http://www.transcend-info.com/Products/No-682 ).

It is also a 32GB RDIMM with the same organization (2Rx4) as the 16GB RDIMM modules this board already supports. So, in theory, there should be no hardware reason why it should not work... still, the BIOS must accept it. (Maybe this more consumer-oriented Transcend module is better suited for tests by tech support?)

Support for 256GB would really make ASRock workstation mainboards outstanding... I would buy one immediately. Still hoping for good news! ;)


Posted By: Cydona
Date Posted: 24 Jun 2015 at 9:10pm
Missed the LRDIMM bit on the Samsung ones. Thanks for the insight, orthoceros.




Posted By: vacaloca
Date Posted: 05 May 2016 at 8:23am
I also would like to know if it's possible to use 8x32GB -- it seems that 4x32GB is already supported given the QVL posted: http://www.asrock.com/mb/Intel/X99%20WS-E/?cat=Memory

Crucial's CT32G4RFD4213 is DDR4 PC4-17000 CL15 Dual Ranked x4 based Registered ECC 1.2V, 4096Meg x 72

They only certified it using 4 modules, so perhaps 128 GB is the maximum for this board regardless?




Posted By: Xaltar
Date Posted: 05 May 2016 at 12:38pm
You will have to contact tech support on that. The official maximum memory capacity is 128GB, but at the time of the board's release there were no 32GB modules on the market yet, so higher capacities may be possible now depending on the CPU installed (a Xeon is needed for high capacity).



Posted By: vacaloca
Date Posted: 14 May 2016 at 4:18am
Tech support claimed that 128 GB was the max. That being said, I got 4x CT32G4RFD4213 from one vendor and another one from a different vendor (all returnable, just in case), and with BIOS v1.8 I was able to boot with 160 GB into both Ubuntu 16.04 and Windows 7, and some short Memtest86+ 5.01 runs appear to register the entire amount. With the BIOS 1.7 that the board came with, it is only possible to boot with 3 of the 32GB DIMMs; BIOS 1.8 fixes that (it can boot with 5 sticks), and BIOS 3.2 breaks it again. Under 3.2, I can boot with 4x32GB sticks, but it only recognizes 3 of them... probably a regression on ASRock's part, and since it was probably untested with the new BIOS revision and is a niche market, they don't imagine anyone will do it... ;)

Edit: I have confirmed that whatever issue was causing the DPC latency before I re-seated the RDIMMs was probably also the reason why BIOS 1.7 and 3.2 could not recognize more than 3x32 GB RDIMMs. I retested both BIOS versions on May 30, 2016 and was able to boot with 128 GB with no issues.

That being said, with this much memory installed, I'm seeing a strange issue with NVIDIA-based cards on bootup. Sometimes after a restart, it stops for maybe a minute at code 99 at the UEFI splash-screen and then recovers and continues the boot into GRUB and into Linux/Windows normally. This is the case for at least a CSM-based install of Ubuntu and Windows, although I believe booting Ubuntu from UEFI media tends to cause the same issue. It usually is fairly repeatable after restart, and is not influenced by putting the PC to sleep before the restart. It seems to go away after a hard shutdown and subsequent power up, where the delay resurfaces on next restart.

This is similar to an issue I had with an X99 WS-E board from ASUS that was unstable when put to sleep, woken up, and restarted. The ASUS board, however, did hang with a QCODE and didn't recover.

I'll do a bit more testing over the coming days most likely, but just thought I'd post a few findings so far on this.

Edit: using a Xeon E5-1650 V3 for this, and I also have 8x16GB UDIMMs on hand... going to test which setup (UDIMM/RDIMM) is more stable when it comes to sleep/restart, as I can't afford to have the PC hang on boot when either of those happens.

Edit 2: see next post, as code 99 slow boot is resolved w/ some UEFI settings changes.


Posted By: vacaloca
Date Posted: 15 May 2016 at 11:13am
A bit of an update...

I was able to resolve the Dr. Debug code 99 slow BIOS boot after restart in BIOS version 1.80 by enabling a few BIOS options that are disabled by default:
a) Set PCI-E ASPM Support to Enabled
b) Set PCH PCI-E ASPM Support to Enabled
c) Set PCH DMI ASPM Support to Enabled

I also set the options for memory test on boot/fast boot, memory power savings mode, and maximum aggregate memory performance to 'Disabled'. I'm not sure whether these made any difference, but the combination of these changes resulted in consistently stable boot/restart/restart-after-sleep behavior with either 4x or 5x 32 GB Crucial 2133 RDIMMs (128 or 160 GB of RAM) or 8x16 GB 2133 UDIMMs (128 GB of RAM). Tested with Ubuntu GNOME 16.04 and Windows 7 x64, and NVIDIA 361/365 drivers (GTX 750 Ti).

(Edit: I have since set maximum aggregate memory performance to 'Enabled' with no further issues.)

I ended up keeping the RDIMMs and returning the UDIMMs, as the ECC functionality is useful and being able to go beyond 128 GB on this board is appealing.


Posted By: vacaloca
Date Posted: 21 May 2016 at 3:25am
Another slight update... managed to sysprep my current installation of Windows 8.1 x64 (BIOS mode) from an X79 system over to ASRock's X99 WS-E after doing the following in PowerShell:

# Remove all installed Windows Store (AppX) packages for the current user
Get-AppxPackage | Remove-AppxPackage
# De-provision the built-in apps so they are not reinstalled for new users
Get-AppXProvisionedPackage -online | Remove-AppxProvisionedPackage -online

Also, in Registry Editor, delete the HKLM\System\Setup\Upgrade key, as well as the Upgrade DWORD value under HKLM\System\Setup.
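
(For anyone scripting this step, here is a rough PowerShell equivalent of those Registry Editor actions -- a minimal sketch assuming the Upgrade subkey and an Upgrade value both live under HKLM\SYSTEM\Setup, so verify the paths on your own system first:)

# Delete the leftover Upgrade subkey (assumed location)
Remove-Item -Path "HKLM:\SYSTEM\Setup\Upgrade" -Recurse -ErrorAction SilentlyContinue
# Delete the Upgrade DWORD value under HKLM\SYSTEM\Setup (assumed value name)
Remove-ItemProperty -Path "HKLM:\SYSTEM\Setup" -Name "Upgrade" -ErrorAction SilentlyContinue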

Finally, point it to an Unattend.xml with this text: 

<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
    <settings pass="generalize">
        <component name="Microsoft-Windows-PnpSysprep" processorArchitecture="amd64" publicKeyToken="31bf3856ad364e35" language="neutral" versionScope="nonSxS" xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <PersistAllDeviceInstalls>true</PersistAllDeviceInstalls>
        </component>
    </settings>
</unattend>

and run sysprep with:
sysprep /oobe /generalize /shutdown /unattend:Unattend.xml

The first step to get sysprep running was to get rid of any system-upgrade registry keys and indications. Next, I had to keep the drivers (hence PersistAllDeviceInstalls), because the NVIDIA drivers were not playing nice and sysprep was failing. Finally, the PowerShell commands remove all Windows 8.1 apps and de-provision them, because otherwise sysprep will also fail.

One last thing that took a while to debug was that DPC latency was pretty horrid when I tried to play a YouTube video at 1080p in Chrome. The System process was shooting up to 20 or 30%, causing slowdowns and 100% CPU usage, and Chrome was using 20-30% CPU to play the video... almost as if hardware acceleration was broken, yet chrome://gpu showed it was enabled.

I ended up using a spare drive to install clean copies of Win 8.1 x64 and Win 10 x64 (both in BIOS mode) and the same problem persisted. I removed the 32GB ECC DIMMs and inserted a single 16GB UDIMM, and the problem immediately went away... Chrome went back to using 2-6% CPU to play video and the System process never went past 0-2% CPU usage. I then added the ECC DIMMs back one by one until I had 5x32GB (the amount I intend to keep), and the problem was still gone, both in the new clean install and in my transplanted sysprepped image.

So needless to say it's possible that either there was some weird BIOS corruption or that the DIMMs were not seated correctly to begin with. Figured I'd add all this info in case anyone else (or myself again, ha!) ever needs it.

System now has 1x GTX Titan X and 1x Quadro K6000 functioning well... will add the remaining GTX Titan Black once I clean up the cabling and add a few other components here and there (Blu-ray drive, hotswap SSD caddy, etc)

The RAM was tested up to 140 GB while processing some huge electromagnetic data sets; very happy with this new setup after all the tweaking to figure out the platform's quirks.


Posted By: vacaloca
Date Posted: 22 May 2016 at 11:20am
One final update on this for now. Changed over from the X79 system tonight and installed the last video card (GTX Titan Black). Checked the on-board audio, the USB ports, and connectivity with my remaining peripherals, and everything seems just fine. Before I installed the last video card, I connected both Molex connectors on the motherboard for extra power.

CPU-Z memory tab and HWMonitor screenshots:
http://imgur.com/a/zsimA


Posted By: vacaloca
Date Posted: 26 May 2016 at 3:14am
One last thing I noticed: on two separate occasions (it occurs very seldom), the system 'wakes up' from sleep but there is no monitor output, and with the Dr. Debug LED set to stay on all the time, it does not light up with E3 after resume. I saw this happen a few times initially, before I changed any settings. Also, when this happens, the power button ceases to work, and a hard shutdown and power-on does not solve the issue. Only removing power and the CMOS battery for a few seconds to clear the BIOS settings gets the system booting correctly again.

So far, the fix seems to have been either setting processor state C6 to disabled and state C3 to enabled OR enabling Maximum Aggregate Memory performance in UEFI settings. All other settings were re-set as mentioned earlier in the thread.

Also, for anyone curious or wanting to duplicate this setup, Jet currently has CT32G4RFD4213 32GB DIMMs for a decent price. A 15% coupon on Jet currently lets you buy up to 3 in a single order. Staples and Quill also have coupons that can be applied if they come back in stock. For Staples, the best price can be had via a $25-off-$75 coupon on an order of a single DIMM. Quill (a Staples affiliate) has them at the same price as Staples, and signing up for their e-mail list gives a $20-off coupon that works on the item.

I took advantage of an eBay offer for a $150 Staples gift card for $130 and purchased another 32GB DIMM for a net cost of $136.74 (after the coupon, the discounted gift card, and a Staples price match to its own website after the fact). I installed it on May 27th, and there have been no issues so far with 192 GB populated.


Posted By: vacaloca
Date Posted: 03 Jun 2016 at 1:25am
As I updated in an earlier post, I found the reason for the excessive DPC latency... a bad RDIMM that was generating quite a lot of correctable errors -- this presents itself as overly high CPU usage and the 1080p YouTube test being VERY slow. In the Windows Event Viewer under System, the errors show up as warnings from source WHEA-Logger:

"A corrected hardware error has occurred.

Component: Memory
Error Source: Corrected Machine Check"
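
For anyone who wants to quantify the error rate without scrolling through Event Viewer, here is a minimal PowerShell sketch; the provider name and the one-hour window are assumptions for illustration:

# Count WHEA-Logger warnings (corrected machine checks) logged in the last hour
$events = Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-WHEA-Logger'
    StartTime    = (Get-Date).AddHours(-1)
} -ErrorAction SilentlyContinue
"WHEA events in the last hour: $(@($events).Count)"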

Apparently Windows is capable of adding bad pages to its BCD configuration when errors occur, but that functionality does not seem to work on this workstation board, given that Windows continues to log these errors at a rate of 700+/minute on that bad RDIMM and no bad pages appear in the BCD store at all.
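
(For reference, the BCD bad-memory store can be inspected and edited manually with bcdedit from an elevated prompt. A minimal sketch -- the page frame numbers below are placeholders for illustration, not values from this system:)

# Show the current bad-memory list (empty here, despite the WHEA errors)
bcdedit /enum "{badmemory}"
# Manually exclude specific page frame numbers (placeholder PFNs)
bcdedit /set "{badmemory}" badmemorylist 0x7f 0x1a2b
# Tell Windows not to use pages on the bad-memory list
bcdedit /set "{current}" badmemoryaccess no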

The sad part is I have at least 2 bad RDIMMs, although only 1 of them was bad enough to add the excessive latency. I'm currently figuring out which other one is affected so I can return it before the Jet return period expires. It is definitely not a slot issue, as I swapped the 4 installed RDIMMs: first the ones on channels C1/D1, then the ones on channels B1/A1. When I swapped B1 <-> A1, the latency manifested itself. I re-seated the modules, and got the same issue. I swapped the module in A1 with another module, and the latency went away.

Go figure that ECC RAM will first slow down your system with processor cycles spent correcting errors, as opposed to UDIMMs, where corruption can cause a machine check exception or similar.

I also realized I might be able to determine whether an RDIMM is bad more quickly than by running Memtest86 6.3/7.0 beta 1... just use a single RDIMM in slot A1 and see if the latency issue is present. If it is, mark that RDIMM as bad. Repeat for the rest of the sticks, and test the ones with no latency issues with a final run of Memtest86. (Edit: I later learned this was actually not an effective test, as the latency issue is only present with one of the sticks.)

Of course I could just run Memtest86 on each DIMM for a few passes, but that might take a while, even with multiprocessor support.

Ah the joys of debugging a bad RAM stick ;)


Posted By: animaciek
Date Posted: 04 Jun 2016 at 5:39am
Thank you for the very detailed analysis.
I'm planning to buy an X99 Extreme4 and use an E5-1650 v3 with 4x Samsung M393A4K40BB0-CPB00.
If this turns out to be successful, I will get another set of 4x M393A4K40BB0-CPB00 to make use of 256GB of RAM.


Posted By: clubfoot
Date Posted: 04 Jun 2016 at 8:12am
Very impressive system and diagnosis, vacaloca. What make and model of SSD hot-swap caddy do you use? I would like to get one for my system.



Posted By: vacaloca
Date Posted: 04 Jun 2016 at 9:34pm
Originally posted by clubfoot:

Very impressive system and diagnosis vacaloca. What make and model SSD hotswap caddy do you use...I would like to get one for my system?

I got this one: http://www.newegg.com/Product/Product.aspx?Item=N82E16817994146
It's a bit pricey though. You're supposed to screw the SSDs in place to the metal hot swap brackets, but I'm currently using it without the screws and it works fine. Has a slim ODD slot, as I wanted to just use a single slot for ODD and SSDs. I mostly just use it for cloning the OS drive every now and then to have a hot spare in case of failure.

I'm sure you can find other similar ones on NewEgg that might better fit what you're looking for.

The old one I had was something I got at TigerDirect for $10 after rebate a while ago, but not available anymore:
http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=7829711&sku=U12-42483


Posted By: clubfoot
Date Posted: 05 Jun 2016 at 7:53am
Thanks, I found this one from ICY:
http://www.newegg.ca/Product/Product.aspx?Item=N82E16817994143




Posted By: vacaloca
Date Posted: 14 Jun 2016 at 2:17am
Yet another update: it turns out, after testing multiple RDIMMs, that the same Channel 3 / slot 0 error in Memtest86 7.0.0 beta is indicative of a bad motherboard DIMM slot (D1), as I can (now, sadly) replicate it within seconds of a run, but only for that particular slot. Swapping in DIMMs that previously tested correctly in tri-channel mode triggers the fault in that slot, no matter which DIMM.

Thankfully, I was able to request a return from the retailer I purchased it from and will get the new board in a few days to retest.

One interesting note... when any DIMM is installed in D1, Memtest's memory latency number gets drastically worse: from ~29 ns in tri-channel mode to ~102 ns with the bad D1 slot populated. I suspect this will fix itself with the new motherboard.


Posted By: vacaloca
Date Posted: 24 Jun 2016 at 6:38am
Another update, quick for now. Basically, the replacement board had (what seems like) a bad PLX chip: when using gpuburn to test 2 GPUs at the same time in Linux, I got NVRM Xid 32 errors ("Invalid or corrupted push buffer stream") on 1 slot and the GPU falling off the bus in /var/log/kern.log, followed by a crash. Testing 1 GPU at a time produced no issues; however, when I used at least two slots it definitely had these issues.

In Windows, while running the Unigine Heaven benchmark, the monitor outputs went into power save when it failed and no recovery was possible. That being said, the memory slots worked just fine, passing Memtest with flying colors.

After a few more days and NewEgg providing an advance RMA, I finally have a board that is behaving correctly -- memory slots and PCI-E slots both seem good this time after Memtest and gpuburn / heaven runs.

The only remaining issue is a 'sleep of death' problem, present only with a Zotac GTX 1080 under Windows 8.1 x64. Strangely enough, the issue isn't present in Linux. I have an open ticket with NVIDIA about this... hopefully they can replicate it. I can only assume it is a driver issue, since Linux recovers fine after sleep. The previous GTX Titan X card is unaffected by the sleep issue with the same driver, which also points to some issue with the way the card is talking to the Windows driver, presumably.


Posted By: vacaloca
Date Posted: 27 Jun 2016 at 12:40pm
The 'sleep of death' issue with the 1080 was resolved by reinstalling the OS -- I will slowly restore commonly used programs. One thing to note is that DPC latency is higher with the 1080, perhaps due to immature drivers. Also, returning from sleep with the video outputs connected to the 1080 is slower than with them connected to the GTX Titan Black, for example... so it seems there are still some driver quirks. NVIDIA Level 2 technical support is looking at the longer sleep wake-up time; I will post if they offer a resolution.

Edit: NVIDIA mentions these are known issues that will be fixed in some future driver update.
Edit 2: Issue of a slow monitor return from sleep with video outputs connected to GTX 1080 is fixed in Windows OS w/ 368.69 drivers.
Edit 3: High DPC latency is fixed in 368.95 hotfix drivers

Interestingly enough, Ubuntu 16.04 w/ latest NVIDIA drivers does not suffer from the longer sleep wakeup time.

One other issue that happened under Ubuntu and affected warm boots turned out to be the cause of the earlier slow bootups with Dr. Debug codes 99 and b4. It has nothing to do with the ASPM settings at all, but with a rare USB2/3 transition that results in a race condition on bootup:
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git/patch/include/linux/usb.h?id=feb26ac31a2a5cb88d86680d9a94916a6343e9e6

This is fixed in kernel 4.4.0-25.44 according to:
https://www.mail-archive.com/kernel-packages@lists.launchpad.net/msg183683.html

I personally just installed kernel 4.6.3 -- the latest mainline stable as of 6/27/2016:
http://kernel.ubuntu.com/~kernel-ppa/mainline/v4.6.3-yakkety/

For steps on how to install:
http://ubuntuhandbook.org/index.php/2016/05/install-linux-kernel-4-6-ubuntu-16-04/


Posted By: vacaloca
Date Posted: 29 Jun 2016 at 4:07am
This was from a while back -- I've since returned some of the sticks, since the original problem was the motherboard DIMM slot and not the RDIMMs, but the board does indeed support 256 GB. =) See pictures linked below:

http://imgur.com/a/zwwv6


Posted By: vacaloca
Date Posted: 29 May 2018 at 7:48pm
Almost 2 years later, this platform has worked out just fine. Upgraded to Windows 10 x64 a few months ago given that NVIDIA only released Windows 7 / Windows 10 drivers for their TITAN V card.

I have been running w/ 256 GB once again -- purchased the remaining DIMMs before the RAM price hike.

Last weekend I upgraded the BIOS to 3.60, and after clearing the CMOS and changing the slots of 2 of the 3 NVIDIA video cards, I was able to get it to boot using the card that was connected to the displays rather than the TITAN V. Either Windows 10 x64 is a lot more stable than Windows 8.1 x64, or this 3.60 BIOS resolved the stability issues I had experienced earlier with the previous 3.x versions.

One thing to note: when the BIOS was upgraded, the NICs and video cards showed up under 'Other devices' in Device Manager. I had to delete any devices that showed up there in order for Windows to re-create and use the devices correctly after the BIOS upgrade.


