
Enabling NVMe RAID Mode Disables PCIE3 Slot + More

organshifter
Newbie

Joined: 07 May 2018
Status: Offline
Points: 5
    Posted: 07 May 2018 at 2:28pm
Before UEFI P4.60, NVMe RAID mode wasn't available for my motherboard, the ASRock Fatal1ty X370 Professional Gaming. Once the feature became available (March 2018), I was eager to try it out. It didn't go well.

The problem is that, with UEFI P4.60 and NVMe RAID mode enabled under Advanced/AMD PBS, RAIDXpert2 doesn't detect the NVMe M.2 drives in the UEFI environment. Each drive continues to be listed under Advanced/Storage Configuration, which should not be the case once NVMe RAID mode is active. The UEFI Driver Version for BIOS P4.60 is 8.x.x, which is old and which I suspect is the issue. Leaving NVMe RAID mode enabled on UEFI P4.60 and booting into Windows 10 Pro x64 gives different results: RAIDXpert2 detects the drives, located in sockets M2_1 (PCIE) and M2_2 (PCIE), just fine.

Another major issue with UEFI P4.60 is that enabling NVMe RAID mode disables the PCIE3 expansion slot. I have an M.2 NVMe to PCIe x4 adapter equipped with a third Samsung SSD 960 EVO, which is installed in the PCIE3 slot. It works perfectly with NVMe RAID mode disabled. The UEFI detects all three drives without issue, as does Windows 10 Pro x64. Once NVMe RAID mode is enabled, the adapter is no longer detected at all. It does not matter whether the PCIe x16 Switch (J10) option is set to 1x16 or 4x4. Removing the adapter and installing a second graphics card yields the same result: the graphics card is not detected anywhere on the system while NVMe RAID mode is enabled. The PCIE3 slot is completely disabled.

It is also worth noting that, with NVMe RAID mode disabled and while booted into Windows 10 Pro x64, I'm able to install the AMD-RAID Bottom Device driver (version 9.2.0.41, 3/19/2018) for all three M.2 NVMe drives. Once I reboot, enter the UEFI, and enable NVMe RAID mode, only the two drives connected to sockets M2_1 (PCIE) and M2_2 (PCIE) show up in RAIDXpert2. Again, the M.2 NVMe drive installed on the adapter is not available because PCIE3 is disabled. These are the issues I have been facing since the beginning of March 2018.

A week ago (05/01/2018), I updated the UEFI to P4.70, and then to L4.64 shortly after. To my delight, the UEFI Driver Version has been updated to 9.2.0-00009 on both versions. With NVMe RAID mode enabled, RAIDXpert2 finally detects my two M.2 NVMe drives connected to sockets M2_1 (PCIE) and M2_2 (PCIE) from within the UEFI environment. I am now able to configure the two drives in a RAID array as I please. This is definitely a step in the right direction.

However, the one major issue from UEFI P4.60 still remains in versions P4.70 and L4.64: enabling NVMe RAID mode still disables the PCIE3 expansion slot. So I can't utilize a second GPU, a PCIe adapter, or anything else that requires the use of that slot.

This really takes the wind out of my sails, as I was mainly hoping the adapter with the third NVMe drive would finally be detected by RAIDXpert2 or, at the very least, just continue functioning along with the other two drives, especially considering that it uses the exact same AMD-RAID driver mentioned previously. The key word is "hoping", as this is just wishful thinking on my part. If detecting drives connected through an M.2 to PCIe adapter is not a function of the RAID controller, that is totally understandable. Still, that doesn't mean PCIE3 should be disabled altogether.

So, as it stands, UEFI P4.70 and L4.64 will allow NVMe RAID mode to function properly. What's more, my operating system is installed on two 250GB SanDisk Ultra 3D NAND SATA III drives in RAID 0 through RAIDXpert2. That array is functioning flawlessly, and continues to do so even if I create a RAID array with the two M.2 NVMe drives installed in sockets M2_1 (PCIE) and M2_2 (PCIE); RAIDXpert2 manages both arrays without issue. Achieving that setup, however, requires enabling NVMe RAID mode, which leaves the third drive on the M.2 NVMe to PCIe adapter disabled along with PCIE3. So, because I want to make use of all three 250GB Samsung 960 EVO's, I have them configured as individual drives using Samsung's NVMe controller driver (v2.3.0.1709). While I am able to software RAID the three of them, I was really hoping to RAID them through RAIDXpert2 instead. At this point, I would settle for PCIE3 simply remaining active while NVMe RAID mode is enabled.
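
As a side note, when I say "software RAID" I mean a plain Windows stripe; a Storage Spaces pool is one way to set that up. Below is only a rough sketch, driven from Python purely for convenience: the 'EvoPool'/'EvoStripe' names are placeholders, and the drive filter is an assumption you'd want to verify against your own Get-PhysicalDisk output before running anything like it.

```python
import subprocess

# Rough sketch only: pools the three 960 EVOs and stripes them into a single
# Storage Spaces "Simple" (RAID 0 style) virtual disk. The FriendlyName filter
# and the 'EvoPool'/'EvoStripe' names are placeholders -- check them against
# your own Get-PhysicalDisk output first, because creating the pool claims
# the disks for Storage Spaces.
ps = (
    "$evos = Get-PhysicalDisk -CanPool $True | "
    "Where-Object FriendlyName -like 'Samsung SSD 960*'; "
    "$evos | Format-Table FriendlyName, SerialNumber, Size -AutoSize; "
    "New-StoragePool -FriendlyName 'EvoPool' "
    "-StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $evos; "
    "New-VirtualDisk -StoragePoolFriendlyName 'EvoPool' -FriendlyName 'EvoStripe' "
    "-ResiliencySettingName Simple -NumberOfColumns 3 -UseMaximumSize"
)
subprocess.run(["powershell", "-NoProfile", "-Command", ps], check=True)
```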

Regarding the PCIe x16 Switch (J10) option that becomes available after enabling NVMe RAID mode under Advanced/AMD PBS: bifurcation is not set up properly. As I said before, it gives me the ability to choose between 1x16 and 4x4. The problem is that it doesn't allow me to specify which PCIe slot the bifurcation should apply to. For example, expansion slot PCIE2 is populated with an x16 graphics card. Inserting any device into PCIE3 automatically reduces both slots (PCIE2 & PCIE3) from x16 to x8, which is normal. However, the PCIe x16 Switch (J10) option only applies to PCIE2, so selecting 4x4 reduces the primary PCIe x16 slot to x4, and that is a big problem. My only choice is to select 1x16. There desperately needs to be an option to choose which slot(s) these bifurcation changes are applied to. By the way, if I install my GPU into PCIE3, the adapter into PCIE2, and set the bifurcation to 4x4, the system will not POST.

The bifurcation options are especially important to me, as I have an ASRock Ultra Quad M.2 Card that I can't use properly because there is no way to select PCIE3 as the slot to bifurcate to 4x4. Since that slot will only ever run at x8, I would install just two of the M.2 NVMe drives onto the ASRock Ultra Quad M.2 Card, allowing each drive to utilize four lanes, if I ever get the chance. Currently, even with two M.2 drives installed on the adapter, only one is detected. No 4x4, no joy.

Another issue is with the way additional storage drives are listed. With NVMe RAID mode disabled, and PCIE3 fully functional, the Samsung 960 EVO drive installed onto the M.2 NVMe to PCIe x4 adapter is never listed under Advanced/Storage Configuration. Being that it is a storage device, it needs to be listed with the other storage solutions. Currently, the only way to see the drive within the UEFI environment is to navigate to Boot/Hard Drive BBS Priorities. It is listed with the other two NVMe drives, and is available to boot from.

This has all been tested with CSM set to UEFI only, as well as with CSM disabled altogether. Is anyone else experiencing any of these issues? Can anyone recreate this and confirm that PCIE3 gets disabled when NVMe RAID mode is enabled? Is this something likely to be fixed in a UEFI update?
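
If anyone wants to compare, the quickest check I can think of is to dump the PCI device list from Windows with NVMe RAID mode disabled, flip it on, dump it again, and diff the two. Here's a rough sketch of what I mean; it's just Python shelling out to PowerShell's Get-PnpDevice, so nothing in it is specific to this board:

```python
import subprocess

# Dump every PnP device sitting on the PCI bus, sorted so two runs can be
# diffed (NVMe RAID mode disabled vs. enabled). Read-only query.
ps = ("Get-PnpDevice | Where-Object InstanceId -like 'PCI\\*' | "
      "Sort-Object InstanceId | "
      "Format-Table Status, FriendlyName, InstanceId -AutoSize")
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps],
    capture_output=True, text=True, errors="replace", check=True,
).stdout
print(out)
```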

I purchased and assembled this system in March 2017. It has been rock solid, but this NVMe RAID and PCIE3 issue has not been a pleasant experience, especially considering how long I've waited to get to this point.

System Specs:
ASRock Fatal1ty X370 Professional Gaming 
AMD Ryzen 7 1800X 
G.SKILL Ripjaws 32GB DDR4 3200 PC4 25600
3x 250GB Samsung SSD 960 EVO NVMe M.2
2x 250GB SanDisk Ultra 3D NAND SATA III SSD RAID 0
2x 8TB Seagate Archive v2 RAID 0 (installed in an ICYRaid enclosure)
Gigabyte AORUS Radeon RX580 XTR GDDR5 8GB
Rosewill 1600 Watt HERCULES1600S
Windows 10 Pro x64 v1709 Build 16299.371
JohnM
Groupie

Joined: 20 Feb 2018
Location: UK
Status: Offline
Points: 267

Posted: 08 May 2018 at 5:43am
That was a lot to read and take in, especially as I'm not intimately familiar with your motherboard. Am I right in understanding that what you would actually like is an x8+x4+x4 mode, with eight lanes being routed to the primary graphics card slot, four to the secondary and four to the second M.2 socket? Where does that second M.2 socket currently get its PCIe lanes from? It must be the X370 (and hence ver. 2.0), since the CPU has none left. It seems as though what you want is not possible. But I'm curious: in 4x4 mode, where do you believe the lanes are routed - which four destinations each receive four lanes?

Perhaps your best chance of success would be with the normal 2x8 bifurcation mode that happens automatically when you have cards in both the primary and secondary graphics card slots. In the latter you would need an x8 adapter card to two M.2 sockets, if such a thing exists. I've only found cheap x4 to single M.2 socket adapters and expensive x16 to four M.2 socket adapters.
ASRock Fatal1ty AB350 ITX P4.90, AMD Ryzen 5 2400G, 2x8GB Corsair CMK16GX4M2A2666C16, 250GB Samsung 960EVO, 500GB Samsung 850EVO, 4TB WD Blue, Windows 10 Pro 64, Corsair SF450, Cooler Master Elite 110
organshifter
Newbie

Joined: 07 May 2018
Status: Offline
Points: 5

Posted: 09 May 2018 at 2:38pm
Originally posted by JohnM:

That was a lot to read and take in, especially as I'm not intimately familiar with your motherboard. Am I right in understanding that what you would actually like is a x8+x4+x4 mode, with eight lanes being routed to the primary graphics card slot, four to the secondary and four to the second M.2 socket? Where does that second M.2 socket currently get its PCIe lanes from? It must be the X370 (and hence ver. 2.0) since the CPU has none left. It seems as though what you want is not possible. But I'm curious, in 4x4 mode where do you believe the lanes are routed - which four destinations each receive four lanes?

Perhaps your best chance of success would be with the normal 2x8 bifurcation mode that happens automatically when you have cards in both the primary and secondary graphics card slots. In the latter you need a x8 adapter card to two M.2 sockets, if such a thing exists. I've only found cheap x4 to single M.2 socket adapters and expensive x16 to four M.2 socket adapters.
Hello, John. Thanks for responding.

Ultimately, I would like to enable NVMe RAID mode without PCIE3 becoming non-functional as a result, all while having the ability to bifurcate that slot to 4x4. At this point, while bifurcation of PCIE3 is something I would enjoy, having the slot remain functional is most important.

Initially, I had an RX 580 installed in PCIE2 (the primary graphics card slot) and a second RX 580 installed in PCIE3. Each slot functioned at PCI-E 3.0 x8. The first M.2 socket (M2_1 (PCIE)) is Ultra, operates at PCI-E 3.0 x4, and is populated with a 250GB Samsung 960 EVO. That's twenty PCI-E 3.0 lanes functioning without issue, four of which are dedicated to M2_1 (PCIE), leaving sixteen available to the user. This setup still works perfectly on the latest UEFI P4.70 or L4.64.

Here's my problem: when I remove the second RX 580 from PCIE3 and install a SilverStone PCI-E X4 Card in its place, the slot continues to take up eight lanes. Since the card only requires four, a 4x4 bifurcation option is desperately needed (there's a quick lane tally right after the list below):
  • Graphics Card: PCIE2 using PCI-E 3.0 x8
  • Silverstone Card: PCIE3 using PCI-E 3.0 x4 (reserving x8)
  • M2_1 (PCIE): PCI-E 3.0 x4
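
To make the lane math concrete, here's a quick tally of how I understand the allocation, both as it works today and as it would work with a proper 4x4 option on PCIE3. These numbers are just my own observations on this board, not anything from ASRock's documentation:

```python
# Quick tally of how the CPU's usable PCI-E 3.0 lanes end up allocated on this
# board, as I understand it. Observed behaviour only, not ASRock documentation.
working_today = {                        # NVMe RAID mode disabled
    "PCIE2 - RX 580":                 8,
    "PCIE3 - SilverStone x4 adapter": 8,   # slot holds 8, the card only uses 4
    "M2_1 (PCIE) - 960 EVO":          4,
}
what_i_want = {                          # per-slot 4x4 bifurcation on PCIE3
    "PCIE2 - RX 580":                    8,
    "PCIE3 - Ultra Quad, 2x 960 EVO":    8,   # split 4 + 4 between the drives
    "M2_1 (PCIE) - 960 EVO":             4,
}
for label, layout in (("working today", working_today), ("what I'm after", what_i_want)):
    print(f"{label}: {sum(layout.values())} CPU lanes -> {layout}")
```
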
I recently purchased an ASRock Ultra Quad M.2 Card, which is PCI-E 3.0 x16 and capable of powering four PCI-E 3.0 x4 drives at maximum speed. In my case, and the reason bifurcation is important, I want to install two Samsung 960 EVO's onto the card and bifurcate PCIE3 to 4x4, so that its eight lanes are split between the two 960 EVO's, giving each x4. That would allow me to RAID those two drives along with the one located in M2_1 (PCIE), all utilizing PCI-E 3.0 lanes. That is my end goal. The problem is, whenever I enable NVMe RAID mode under Advanced\AMD PBS, PCIE3 no longer functions. Nothing in the slot is detected within the UEFI or Windows 10 x64. I see no reason for this to happen, as it just completely takes eight PCI-E 3.0 lanes away from me.

Additionally, once NVMe RAID mode is enabled under Advanced\AMD PBS, a PCIe x16 Switch (J10) option becomes available. It gives me bifurcation choices of 1x16 or 4x4, and these choices affect PCIE2, which is where my primary GPU is located. When I choose 4x4, my graphics card runs at PCI-E 3.0 x4, which is far from ideal. When I choose 1x16, my graphics card still only runs at PCI-E 3.0 x8, which is odd considering that PCIE3 is non-functional at that point. So, I guess the system knows a device is there; I'm just not getting access to it.

I also want to note that, with NVMe RAID mode disabled, the SilverStone PCI-E X4 Card, which only supports one M.2 drive, works perfectly fine in PCIE3. The ASRock Ultra Quad M.2 Card also works in this scenario. However, it only recognizes one of the two M.2 drives on the card as the slot is not bifurcated. In any event, both Samsung 960 EVO's using PCI-E 3.0 lanes are detected, and work as they should.

The second M.2 socket (M2_2 (PCIE)), which operates at PCI-E 2.0 x4, is also detected and works as intended. I believe you are correct in stating that the socket is powered by the X370 chipset. That said, it doesn't appear to be the cause of my PCIE3 or PCI-E 3.0 issues at all.

Pics 1 & 2 show how my drives are listed when NVMe RAID mode is disabled. The M.2 drive on the adapter card doesn't show up here, but it should, as it is a storage device.


Pic 3 shows Advanced\AMD PBS options before enabling NVMe RAID mode. At the top of the picture, you'll notice PCIe x16/2x8 Switch. That disappears after enabling NVMe RAID mode.

Pic 4 shows Advanced\AMD PBS options after enabling NVMe RAID mode (PCIe x16 Switch (J10) appears).

Pic 5 shows all three Samsung 960 EVO's available under Boot\Hard Drive BBS Priorities. As a reminder, this is the only place to see all three drives listed as storage options within the UEFI. There's no way to tell one drive from another. Booting into Windows 10 x64 detects all three drives, and I can then view where each is installed, as well as identify each by serial number.
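
For anyone wanting to do the same, something like the quick dump below is enough to pull each disk's model, serial number, and bus type from Windows. It's only a sketch (Python calling PowerShell's Get-PhysicalDisk); nothing in it is specific to this board.

```python
import subprocess

# List physical disks with model, serial number, and bus type so each
# 960 EVO can be matched to its socket or adapter. Read-only query.
ps = ("Get-PhysicalDisk | Sort-Object DeviceId | "
      "Format-Table DeviceId, FriendlyName, SerialNumber, BusType, Size -AutoSize")
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps],
    capture_output=True, text=True, errors="replace", check=True,
).stdout
print(out)
```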

The following two pictures are from TweakTown's review of the ASRock Ultra Quad M.2 Card, using an ASRock motherboard. The first pic shows how the user is given the option to specifically bifurcate a particular PCIE slot. This is what I need through a UEFI update.

The second pic shows how RAIDXpert2 detects the drives on the card after bifurcation is complete. They're utilizing four Samsung 960 Pro drives. I only want to use two 960 EVO's, as stated previously.



Edited by organshifter - 09 May 2018 at 2:53pm
wardog
Moderator Group

Joined: 15 Jul 2015
Status: Offline
Points: 6447

Posted: 09 May 2018 at 2:46pm
As far as I'm aware, currently the only chipsets capable of bifurcating are the X299 and X399.
organshifter
Newbie

Joined: 07 May 2018
Status: Offline
Points: 5

Posted: 09 May 2018 at 4:07pm
Originally posted by wardog:

As far as I'm aware, currently the only chipsets capable of bifurcating are the X299 and X399.

Thanks for your reply, Wardog.
 
I'm actually able to bifurcate, just not for the PCIE slot I'd hoped for. I received a response from ASRock support regarding the matter, and I'm going to try his suggestion.



I agree with what he's saying about the M2_1 and M2_2 sockets, because I've tried it, and the performance was great, but not spectacular (which I assumed would be the case).

I've also tried his second suggestion, but only partly. Here's what I did: after realizing that I could only bifurcate PCIE2, I installed the adapter card into that slot and the GPU into PCIE3. I entered the UEFI, set the bifurcation to 4x4, rebooted, and was prepared to finally get the results I wanted. No joy; the system wouldn't POST. I only received error beeps and a debug code on the motherboard's display. To no surprise, it was VGA related.

The only difference, and it's a significant one, is that he wants me to install my GPU into PCIE5 instead. This could eliminate the PCI-E 3.0 issue entirely, as PCIE5 is ver. 2.0. I keep asking myself: is the performance drop worth it? Guess I'll find out soon enough.

Maybe PCIE2 & M2_1 are the ticket! The saga continues...I'll report back.