Forum Home > Technical Support > Intel Motherboards

ASRock X99 WS-E memory compatibility

vacaloca (Newbie, joined 05 May 2016)
Posted: 29 May 2018 at 7:48pm
Almost 2 years later, this platform has worked out just fine. Upgraded to Windows 10 x64 a few months ago given that NVIDIA only released Windows 7 / Windows 10 drivers for their TITAN V card.

I have been running w/ 256 GB once again -- purchased the remaining DIMMs before the RAM price hike.

Last weekend I upgraded the BIOS to 3.60, and after clearing the CMOS and swapping the slots of two of the three NVIDIA video cards, I was able to get it to boot using the card connected to the displays rather than the TITAN V. Either Windows 10 x64 is a lot more stable than Windows 8.1 x64, or the 3.60 BIOS resolved the stability issues I had experienced with the previous 3.x versions.

One thing to note: after the BIOS upgrade, the NICs and video cards showed up under 'Other Devices' in Device Manager. I had to delete every device listed there so that Windows would re-create and use the devices correctly.
vacaloca (Newbie, joined 05 May 2016)
Posted: 29 Jun 2016 at 4:07am
This was from a while back -- I've since returned some of the sticks, since the original problem was a motherboard DIMM slot and not the RDIMMs, but the board does indeed support 256 GB. =) See pictures linked below:

vacaloca (Newbie, joined 05 May 2016)
Posted: 27 Jun 2016 at 12:40pm
The 'sleep of death' issue with the 1080 was resolved by reinstalling the OS -- I will slowly restore commonly used programs. One thing to note is that DPC latency is higher with the 1080; immature drivers are perhaps the cause. Also, returning from sleep with video outputs connected to the 1080 is slower than with outputs connected to a GTX Titan Black, for example, so it seems there are still some driver quirks. NVIDIA Level 2 technical support is looking at the longer sleep wakeup time; I will post if they offer a resolution.

Edit: NVIDIA mentions these are known issues that will be fixed in a future driver update.
Edit 2: The slow monitor return from sleep with video outputs connected to the GTX 1080 is fixed in Windows with the 368.69 drivers.
Edit 3: The high DPC latency is fixed in the 368.95 hotfix drivers.

Interestingly enough, Ubuntu 16.04 w/ latest NVIDIA drivers does not suffer from the longer sleep wakeup time.

One other issue that surfaced under Ubuntu and affected warm boots turned out to be the cause of the earlier slow bootups with Dr. Debug codes 99 and b4. It has nothing to do with ASPM settings at all, but with a rare USB2-to-USB3 transition that results in a race condition on bootup:

This is fixed in kernel 4.4.0-25.44 according to:

I personally just installed kernel 4.6.3 -- the latest mainline stable as of 6/27/2016:

For steps on how to install:
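Separately from the install steps themselves, a quick sanity check after booting the new kernel is to compare the running version against the one carrying the fix. A minimal sketch using `sort -V`; the hard-coded "4.6.3" stands in for the output of `uname -r` so the example is self-contained:

```shell
#!/bin/sh
# Check whether a kernel version includes the xHCI race-condition fix
# (fixed in 4.4.0-25.44 per the Ubuntu bug report mentioned above).
fixed="4.4.0-25.44"
running="4.6.3"   # on a real system: running="$(uname -r)"

# sort -V orders version strings; if the fixed version sorts first (or
# equal), the running kernel is at least as new as the fix.
if [ "$(printf '%s\n%s\n' "$fixed" "$running" | sort -V | head -n 1)" = "$fixed" ]; then
    echo "kernel $running includes the xHCI fix"
else
    echo "kernel $running predates the fix -- consider upgrading"
fi
```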


Edited by vacaloca - 22 Jul 2016 at 10:30pm
vacaloca (Newbie, joined 05 May 2016)
Posted: 24 Jun 2016 at 6:38am
Another quick update for now. The replacement board had (what seems like) a bad PLX chip: when using gpuburn to test two GPUs at the same time in Linux, I got NVRM Xid 32 errors ("Invalid or corrupted push buffer stream") on one slot, the GPU falling off the bus in /var/log/kern.log, and a subsequent crash. Testing one GPU at a time produced no issues; whenever I used at least two slots, the errors appeared.
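Those Xid events are easy to spot by grepping the kernel log. A sketch with a synthetic log line (the file name and exact message format here are illustrative; on a real system the messages land in /var/log/kern.log or `dmesg`):

```shell
#!/bin/sh
# Synthetic kern.log excerpt for illustration only.
cat > sample-kern.log <<'EOF'
NVRM: Xid (PCI:0000:02:00): 32, Invalid or corrupted push buffer stream
EOF

# Count Xid errors; a non-zero count during a multi-GPU stress run, with
# single-GPU runs clean, points at the PCI-E path rather than the GPU.
grep -c 'NVRM: Xid' sample-kern.log
```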

In Windows, while running Unigine Heaven benchmark, the monitor outputs went to power save when it failed and no recovery was possible. That being said, the memory slots worked just fine, passing Memtest with flying colors.

After a few more days and NewEgg providing an advance RMA, I finally have a board that is behaving correctly -- memory slots and PCI-E slots both seem good this time after Memtest and gpuburn / heaven runs.

The only remaining issue is a 'sleep of death' problem, present only with a Zotac GTX 1080 under Windows 8.1 x64. Strangely enough, the issue isn't present in Linux. I have an open ticket with NVIDIA about this... hopefully they can replicate it. I can only assume it is a driver issue, since Linux recovers from sleep fine, and the previous GTX Titan X card is unaffected with the same driver -- presumably some issue with the way the card talks to the Windows driver.
vacaloca (Newbie, joined 05 May 2016)
Posted: 14 Jun 2016 at 2:17am
Yet another update: after testing multiple RDIMMs, it turns out the same Channel 3 / slot 0 error in Memtest86 7.0.0 beta is indicative of a bad motherboard DIMM slot (D1), as I can (now sadly) replicate it within seconds of a run, and only in that particular slot. Swapping in DIMMs that previously tested clean in tri-channel mode triggers the fault in that slot, no matter which DIMM.

Thankfully, I was able to request a return from the retailer I purchased it from and will get the new board in a few days to retest.

One interesting note: when populating the bad D1 slot with any DIMM, Memtest's memory latency number gets severely worse, from ~29 ns in tri-channel mode up to ~102 ns. I suspect this will fix itself with the new motherboard.


Edited by vacaloca - 14 Jun 2016 at 9:43pm
clubfoot (Newbie, Canada, joined 28 Mar 2016)
Posted: 05 Jun 2016 at 7:53am
Thanks! I found this one from ICY:
http://www.newegg.ca/Product/Product.aspx?Item=N82E16817994143
vacaloca (Newbie, joined 05 May 2016)
Posted: 04 Jun 2016 at 9:34pm
Originally posted by clubfoot:

Very impressive system and diagnosis vacaloca. What make and model SSD hotswap caddy do you use...I would like to get one for my system?

It's a bit pricey, though. You're supposed to screw the SSDs in place in the metal hot-swap brackets, but I'm currently using it without the screws and it works fine. It has a slim ODD slot, as I wanted to use a single bay for both the ODD and the SSDs. I mostly use it for cloning the OS drive every now and then, to have a hot spare in case of failure.

I'm sure you can find other similar ones on NewEgg that might better fit what you're looking for.

The old one I had was something I got at TigerDirect for $10 after rebate a while ago, but not available anymore:
http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=7829711&sku=U12-42483
clubfoot (Newbie, Canada, joined 28 Mar 2016)
Posted: 04 Jun 2016 at 8:12am
Very impressive system and diagnosis vacaloca. What make and model SSD hotswap caddy do you use...I would like to get one for my system?
animaciek (Newbie, joined 04 Jun 2016)
Posted: 04 Jun 2016 at 5:39am
Thank you for the very detailed analysis.
I'm planning to buy an X99 Extreme4 and use an E5-1650 v3 with 4 x Samsung M393A4K40BB0-CPB00.
If this turns out to be successful, I will get another set of 4 M393A4K40BB0-CPB00 to make use of 256 GB of RAM.
vacaloca (Newbie, joined 05 May 2016)
Posted: 03 Jun 2016 at 1:25am
As I mentioned in an earlier post, I found the reason for the excessive DPC latency: a bad RDIMM that logged quite a lot of correctable errors. This presents itself as overly high CPU usage and the 1080p YouTube test being VERY slow. In the Windows Event Viewer under System, the errors show up as warnings from source WHEA-Logger:

"A corrected hardware error has occurred.

Component: Memory
Error Source: Corrected Machine Check"

Apparently Windows is capable of adding bad pages to its BCD configuration on error, but that functionality seemingly does not work on this workstation board: Windows continues to log these errors at a rate of 700+/minute on the bad RDIMM, and no bad pages appear in the BCD store at all.
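To put a number on the error rate, the WHEA-Logger events can be counted per minute. A sketch over a synthetic list of event timestamps; the file name and one-timestamp-per-line format are made up for illustration (real timestamps could be exported from Event Viewer's System log, filtered on source WHEA-Logger):

```shell
#!/bin/sh
# Synthetic export: one WHEA-Logger event timestamp (HH:MM) per line.
cat > whea-times.txt <<'EOF'
19:41
19:41
19:41
19:42
19:42
EOF

# Events per minute; on the actual bad RDIMM the rate was 700+/minute,
# i.e. the CPU constantly burning cycles on ECC corrections.
sort whea-times.txt | uniq -c
```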

The sad part is that I have at least two bad RDIMMs, although only one of them was bad enough to add the excessive latency. I am currently figuring out which other one is affected, so I can return it before the Jet return period expires. It is definitely not a slot issue: I swapped the four installed RDIMMs -- first the ones on channels C1 and D1, then the ones on channels B1 and A1. When I swapped B1<->A1, the latency manifested itself. I re-seated the modules: same issue. I swapped out the module in A1 with another module, and the latency went away.

Go figure that ECC RAM will first slow down your system with processor cycles spent correcting errors, as opposed to UDIMMs, where corruption can cause a machine check exception or similar.

I also realized I might be able to determine whether an RDIMM is bad more quickly than with Memtest86 6.3/7.0 beta1: just use a single RDIMM in slot A1 and see if the latency issue is present. If it is, mark the RDIMM as bad; repeat for the rest of the sticks, and run a final pass of Memtest86 on the ones with no latency issues. (Edit: I later learned this is not an effective test, as the latency issue is only present with one of the sticks.)

Of course I could just run Memtest86 on each DIMM for a few passes, but that might take a while, even with multiprocessor support.

Ah the joys of debugging a bad RAM stick ;)


Edited by vacaloca - 05 Jun 2016 at 2:20am
Forum Software by Web Wiz Forums® version 12.04
Copyright ©2001-2021 Web Wiz Ltd.
