Triple M.2 RAID setup

Brighttail (Newbie)
Joined: 15 Aug 2017
Location: Canada
Posted: 15 Aug 2017 at 6:32am
Hey there,

New user here, and I'm trying to understand RAID arrays that use PCI-e lanes from the CPU versus the chipset.

On my X99 board, I run two 960 Pros in RAID 0 using Windows software RAID. Obviously I'm using the CPU's PCI-e lanes, and I can get sequential reads of 7000 MB/s.

I have seen folks on Z170/Z270 motherboards creating RAID 0 arrays on the chipset lanes, but they are only getting sequential reads of around 3500 MB/s.

Why is this?  Are the CPU lanes able to 'double up' while the Chipset lanes cannot?

I ask because some of the motherboards I'm looking at disable certain lanes when using a 28-lane CPU, and thus I may only be able to use chipset lanes to create a RAID 0 array. If that's the case, then I'm guessing it would be SLOWER than the RAID 0 array I currently have, since my current array runs on CPU PCI-e lanes rather than chipset lanes.
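
For anyone who wants the rough numbers behind that guess, here is a back-of-the-envelope sketch in Python; the per-lane and per-drive figures are my own ballpark assumptions, not measured values or official specs.

# Back-of-the-envelope only; per-lane and per-drive figures are assumptions.
PCIE3_LANE_MBPS = 985         # approx. usable bandwidth of one PCIe 3.0 lane
DRIVE_SEQ_READ_MBPS = 3500    # approx. sequential read of one 960 Pro class drive

dmi_ceiling = 4 * PCIE3_LANE_MBPS         # chipset RAID shares one DMI 3.0 (x4-equivalent) link
cpu_lane_raid0 = 2 * DRIVE_SEQ_READ_MBPS  # two drives on CPU lanes, no shared link

print(f"Chipset (DMI) ceiling: ~{dmi_ceiling} MB/s")     # ~3940 MB/s, hence the ~3500 MB/s results
print(f"CPU-lane RAID 0 (2x):  ~{cpu_lane_raid0} MB/s")  # ~7000 MB/s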

I agree, it would be great if you could use a CPU-lane M.2 and a chipset-lane M.2 to create a RAID array across both. That would solve a lot of issues with a 28-lane CPU. Hell, for me it wouldn't even have to be bootable, just a non-OS drive!

Am I correct in this? I would love to do a triple M.2 RAID, but it sounds like it would be slower as a boot drive, since it has to go through the chipset, than a non-OS array using the CPU lanes.

Finally, concerning VROC, I don't think that even WITH a key you'll be able to use the 960 EVO. From what I understand, VROC is only for Intel drives UNLESS you are using a Xeon processor. The key is only needed if you want to do RAID 1, and another key is needed for RAID 5/10. It really does suck that Intel won't allow non-Intel drives to use VROC unless you have a Xeon. :(


Edited by Brighttail - 15 Aug 2017 at 7:08am

parsec (Moderator)
Joined: 04 May 2015
Location: USA
Posted: 22 Jul 2017 at 10:52pm
Originally posted by marko.gotovac:

Hey Parsec,

So what you're saying is that it's possible that manufacturer A might have 2 of its 3 M.2 slots driven by the chipset lanes, whereas a hypothetical manufacturer B might have all 3 of them driven by chipset lanes?

I did a bit more digging, and it looks like, because the DMI is limited to x4 bandwidth, the maximum number of M.2 drives I should have connected to the chipset is a single drive.

Assuming one of the M.2 slots (yet to be confirmed by an ongoing support ticket with Gigabyte) is driven by the CPU lanes and the remaining two go through the chipset, would this sort of config work (i.e., not be limited by Intel's DMI)?

1. In the m.2 (chipset lane): Have a single M.2 drive running in full x4 mode.
*** This will be my windows bootable drive
2. In the other m.2 (CPU lane): Plug in a 960 evo.
3. With an AiC PCI-e adapter card, plug in the other 960 evo. 

I should then be able to put items 2. and 3. into a RAID0?

Thanks Parsec.


First, we have the earlier generation Z170 and Z270 boards, whose chipsets provide up to 24 PCIe 3.0 lanes of their own (behind the DMI 3.0 link), connected to three full-bandwidth PCIe 3.0 x4 M.2 slots. I and others have three M.2 NVMe SSDs in RAID 0 volumes on those boards. The trade-off with those chipsets is that using each M.2 slot costs us two Intel SATA III ports. That trade-off does not occur with X299, since it has more resources than the two non-HEDT chipsets. But we have had three NVMe SSDs in RAID arrays for the two Intel chipset generations prior to X299, so it is nothing new.

Yes, a manufacturer may choose to allocate the chipset lanes to different interfaces. My ASRock Z270 Gaming K6 board is a good example of that. It only has two PCIe 3.0 x4 M.2 slots, but has a PCIe 3.0 x16 slot (only x4 electrically) connected to the chipset. With an M.2 to PCIe slot adapter card, I have three M.2 SSDs in RAID 0 on that board.

eric_x currently has two M.2 NVMe SSDs in RAID 0 in two M.2 slots on his X299 board. I see no reason why the Gigabyte board is any different. Your statement about "DMI being limited at x4 bandwidth" does not make sense. Why the X299 boards seem to only have two PCIe 3.0 x4 M.2 slots instead of three is a good question. Note that on X299, none of its SATA ports are lost when NVMe SSDs are connected to the M.2 slots, and only one per M.2 slot is lost when using M.2 SATA SSDs. Possibly only two M.2 slots are provided to preserve resources for the SATA ports, which would be my bet.

Regarding RAID support for NVMe SSDs, some history and technical reality. Intel introduced the first and only support for NVMe SSDs in RAID in some of their 100 series chipsets with the IRST version 14 driver. NVMe SSDs use an NVMe driver, and Intel was able to add NVMe support to their IRST version 14 RAID driver. But the IRST RAID driver has always only controlled the chipset resources, SATA and now NVMe in RAID, not the PCIe resources of the CPU. We can connect an NVMe SSD to the processor's PCIe lanes, but it must have an NVMe driver installed. Even an NVMe SSD not in a RAID array, when connected to the chipset, must have an NVMe driver.

So we have never been able to have an NVMe RAID array, where the multiple NVMe SSDs were connected to a mix of the chipset and processor resources, or when connected only to the processor's resources. The IRST RAID driver only works with the chipset resources.

Now we have X299, which has the new VROC feature, Virtual RAID On CPU. Note that VROC uses the different RSTe (Rapid Storage enterprise) RAID driver. There is an option in the UEFI of the ASRock X299 board, to enable the use of RSTe. The description of VROC on the ASRock X299 board overview pages is:

Virtual RAID On CPU (VROC) is the latest technology that allows you to build RAID with PCIe Lanes coming from the CPU.

The point of all this is, I am skeptical that a RAID array of NVMe SSDs can be done when the SSDs are connected to a mix of the chipset and processor resources. That is, M.2 slots connected to the chipset, and PCIe slots connected to the processor. That would be a great feature, but I don't see any description of it anywhere in the overview or specs of X299 boards. I know Gigabyte said that is possible, but I'll believe it when I see it.

We cannot create a non-bootable RAID array with IRST within Windows with a mix of M.2 and PCIe slot connections. That is strictly a Windows virtual software RAID. As we have seen, the IRST program will only allow NVMe SSDs that are able to be remapped to be used in a RAID array.

eric_x (Newbie)
Joined: 15 Jul 2017
Posted: 22 Jul 2017 at 8:14pm
Assuming the other two are on CPU lanes, that should work. If you don't want to boot from the CPU RAID array, you can actually make that configuration today with RST from inside Windows, if I'm not mistaken.

Technically you are correct that the x4 bandwidth could be a bottleneck, but since a single drive does not saturate the connection, you would likely only see a difference in specific benchmarks. Two M.2 drives would still work on the chipset lanes.

Thanks for the info, Parsec. I'll report back on what VROC actually does once I figure out where to get a key. I did finally finish a backup using Macrium Reflect, which worked a lot better than Windows Backup when trying to save to my NAS. The updated BIOS has some visual changes:

1. M.2_1 is listed as unsupported for remapping, which is what I expect now, but it's nice to have clarity here.

2. The 960 EVO still shows as unsupported for VROC without a key.

3. The bug still exists where pressing Escape while looking at a VROC drive brings you to a prompt with no text, but from the directory it looks like it resets something.

I may try turning off the Windows caching again and see if I still get crashes.


Edited by eric_x - 22 Jul 2017 at 8:15pm

marko.gotovac (Newbie)
Joined: 21 Jul 2017
Posted: 21 Jul 2017 at 10:40pm
Hey Parsec,

So what you're saying is that it's possible that manufacturer A might have 2 of its 3 M.2 slots driven by the chipset lanes, whereas a hypothetical manufacturer B might have all 3 of them driven by chipset lanes?

I did a bit more digging, and it looks like, because the DMI is limited to x4 bandwidth, the maximum number of M.2 drives I should have connected to the chipset is a single drive.

Assuming one of the M.2 slots (yet to be confirmed by an ongoing support ticket with Gigabyte) is driven by the CPU lanes and the remaining two go through the chipset, would this sort of config work (i.e., not be limited by Intel's DMI)?

1. In the m.2 (chipset lane): Have a single M.2 drive running in full x4 mode.
*** This will be my windows bootable drive
2. In the other m.2 (CPU lane): Plug in a 960 evo.
3. With an AiC PCI-e adapter card, plug in the other 960 evo. 

I should then be able to put items 2. and 3. into a RAID0?

Thanks Parsec.

parsec (Moderator)
Joined: 04 May 2015
Location: USA
Posted: 21 Jul 2017 at 9:53pm
Originally posted by marko.gotovac:


@Eric, new registered user here.

I'm relatively new to PC building so would greatly appreciate some clarification as to some questions I have:

1a. From my research, speaking about the 10-core 7900X, this CPU provides 44 lanes. These lanes are all spread out across the 5 x16 slots, as seen in my Gigabyte X299 Gaming 7 mobo manual:

https://scontent.fybz1-1.fna.fbcdn.net/v/t1.0-9/20155742_10155549067923669_8406735989774525926_n.jpg?oh=6fd6a1bbc0776c62b6c8e9c99139e9c7&oe=59F756C0

Is this correct? 

(16 + 4 + 16 + 4 + 8) = 48? Isn't it supposed to equal 44?

1b. The chipset has 24 lanes. Correct?
1c. These chipset lanes are responsible for driving different components connected to the chipset, such as the Ethernet connection, the 8 SATA connections, and the THREE M.2 slots?
1d. If so, how many of the 24 lanes are dedicated to the 3 M.2 slots? From my understanding, in order not to bottleneck the potential bandwidth of an M.2 drive, each one requires at least 4 PCI-e lanes. Is that correct?

I've attached a photo of a page from the mobo describing what happens when utilizing the 3 m.2 slots:

https://scontent.fybz1-1.fna.fbcdn.net/v/t1.0-9/20245516_10155549070448669_8167031257432050270_n.jpg?oh=2829db001b070f9e09450d24a118d916&oe=59F315A6

1e. I've got my hands on two 960 EVOs and was hoping to get a RAID 0 going. Will I be bottlenecked by a lack of chipset PCI-e lanes? What is the maximum number of M.2 drives I can install in these three m.2 slots before I begin to experience a degradation in performance? And if there is some degradation, is my only alternative to buy a M.2 to PCI-e adapter card and plug it into one of the x4 slots being fed by the CPU lanes?

I apologize if these questions have been answered again and again, but I've been going from forum to forum, and all of these threads involving X299 are making my brain hurt. 

Any help would be appreciated!

Marko from Canada



First, understand that the way a motherboard manufacturer allocates the various resources available on a platform can differ between manufacturers. So what ASRock does may not be the same as what Gigabyte does on their X299 boards.

As the second picture shows, there is a sharing of the chipset resources (PCIe lanes) between the M.2 slots and the Intel SATA III ports. That is normal for the last several Intel chipsets. You may use either a specific M.2 slot, or the SATA III ports that are shared with that M.2 slot, but not both at the same time.

You won't be bottlenecked using two 960 EVOs in two M.2 slots, or even three. All will be PCIe 3.0 x4, as long as that many PCIe 3.0 lanes have been allocated to each M.2 slot. Given the resource-sharing situation, you either have full bandwidth on the M.2 slots and lose the ability to use the shared SATA III ports, or vice versa.

But we also apparently have PCIe 3.0 lanes from the X299 chipset allocated to one of the PCIe slots on your Gigabyte board. That would account for the 48 vs 44 lane count. So you will need to further verify which PCIe slot uses the X299 chipset's PCIe 3.0 lanes, and how that affects things like the M.2 slots, IF it does so. Normally the PCIe lanes used by a slot are only actually used when a card is inserted in the slot.
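
A quick sketch of that lane accounting (the slot widths are taken from the manual photo above; which slot is chipset-fed is my assumption, not something Gigabyte has confirmed):

# Slot widths as listed in the manual photo; the chipset-fed slot is a guess.
slot_widths = [16, 4, 16, 4, 8]   # electrical widths of the five PCIe slots
cpu_lanes = 44                    # i9-7900X

total_slot_lanes = sum(slot_widths)         # 48
chipset_fed = total_slot_lanes - cpu_lanes  # 4 -> one x4 slot must hang off the X299 chipset

print(f"Slots advertise {total_slot_lanes} lanes, the CPU provides {cpu_lanes},")
print(f"so about {chipset_fed} lanes (one x4 slot) must come from the chipset.")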

marko.gotovac (Newbie)
Joined: 21 Jul 2017
Posted: 21 Jul 2017 at 12:20pm

@Eric, new registered user here.

I'm relatively new to PC building so would greatly appreciate some clarification as to some questions I have:

1a. From my research, speaking about the 10-core 7900X, this CPU provides 44 lanes. These lanes are all spread out across the 5 x16 slots, as seen in my Gigabyte X299 Gaming 7 mobo manual:

https://scontent.fybz1-1.fna.fbcdn.net/v/t1.0-9/20155742_10155549067923669_8406735989774525926_n.jpg?oh=6fd6a1bbc0776c62b6c8e9c99139e9c7&oe=59F756C0

Is this correct? 

(16 + 4 + 16 + 4 + 8) = 48? Isn't it supposed to equal 44?

1b. The chipset has 24 lanes. Correct?
1c. These chipset lanes are responsible for driving different components connected to the chipset, such as the Ethernet connection, the 8 SATA connections, and the THREE M.2 slots?
1d. If so, how many of the 24 lanes are dedicated to the 3 M.2 slots? From my understanding, in order not to bottleneck the potential bandwidth of an M.2 drive, each one requires at least 4 PCI-e lanes. Is that correct?

I've attached a photo of a page from the mobo describing what happens when utilizing the 3 m.2 slots:

https://scontent.fybz1-1.fna.fbcdn.net/v/t1.0-9/20245516_10155549070448669_8167031257432050270_n.jpg?oh=2829db001b070f9e09450d24a118d916&oe=59F315A6

1e. I've got my hands on two 960 EVOs and was hoping to get a RAID 0 going. Will I be bottlenecked by a lack of chipset PCI-e lanes? What is the maximum number of M.2 drives I can install in these three m.2 slots before I begin to experience a degradation in performance? And if there is some degradation, is my only alternative to buy a M.2 to PCI-e adapter card and plug it into one of the x4 slots being fed by the CPU lanes?

I apologize if these questions have been answered again and again, but I've been going from forum to forum, and all of these threads involving X299 are making my brain hurt. 

Any help would be appreciated!

Marko from Canada



Edited by marko.gotovac - 21 Jul 2017 at 12:24pm

parsec (Moderator)
Joined: 04 May 2015
Location: USA
Posted: 20 Jul 2017 at 12:22pm
Interesting about VROC and the three M.2 slots working together. I assume anything with VROC would use the RSTe driver. The VROC acronym (Virtual RAID On CPU) contradicts itself if the two M.2 slots that must be connected to the X299 chipset can be used with VROC. Using the M.2 slots does not use up any of the CPU's PCIe lanes, so if it is possible, it is more magic from Intel, as the PCIe Remapping in a sense is.

If you could not use IRST and RSTe at the same time, then you could not even have SATA drives in RAID 0, which would not be good. It will be interesting to see how the IRST and RSTe RAID drivers coexist.

I forgot to explain something you asked about earlier: why your third 960 EVO does not appear in the IRST Windows program as a detected drive. That is normal for the IRST RAID driver and the IRST Windows program on the Z170 and Z270 platforms. Only NVMe SSDs that have been "remapped" with the PCIe Remapping options appear in the IRST Windows program. Since the M2_1 slot (right) does not have a Remapping option, an NVMe SSD connected to it will not be shown in the IRST Windows program or the UEFI IRST utility.

We are so accustomed to SATA drives being "detected" in the UEFI/BIOS that we expect NVMe SSDs to behave the same way. NVMe is unrelated to SATA, and its controller is not in the board's chipset as the SATA controller is; it is in the NVMe SSD itself. On X99 boards, the only places NVMe SSDs are shown are the System Browser tool and an added NVMe Configuration screen (if available) that simply lists the NVMe SSD. A more extreme case is my Ryzen X370 board, which has no NVMe RAID support. It has an M.2 slot and will list an NVMe SSD in that slot. But if I use an M.2 to PCIe slot adapter card for the NVMe SSD, as I have done as a test, it is shown nowhere in the UEFI except the Boot Order. That 960 EVO works fine as the OS drive in the adapter card; I'm using that PC right now. The Windows 10 installation program recognized it fine.

I can give you some information about this spec:

Please note that if you install CPU with 44 lanes or 28 lanes, RSTe does not support PCH PCIe NVME and VROC (Intel® Virtual RAID on CPU) is supported.

Socket 2066/X HEDT processors are actually from two Intel CPU generations, Skylake and Kaby Lake.

The Kaby Lake 2066 X processors are the i5-7640X and i7-7740X, both only provide 16 PCIe 3.0 lanes.

The Skylake 2066 X processors are the i7-7800X, i7-7820X, and i9-7900X. Both i7s have 28 PCIe 3.0 lanes, and the i9 has 44 PCIe 3.0 lanes.

So that spec is saying, as far as I can tell, that the Skylake 2066 X processors do not support using the RSTe RAID (driver) with the PCH (X299 chipset) NVMe interface, the two (or three?) M.2 slots.

But the Skylake 2066 X processors support using the RSTe RAID (driver) with VROC.

We basically know that we need to use the Intel IRST RAID driver for the PCH NVMe interface (M.2) RAID, which this spec states in a roundabout way. As I thought, it seems the RSTe RAID driver is only used with VROC. VROC does use the PCIe slots for drives in RAID. I hope all the M.2 slots are usable with VROC, but I'm a bit skeptical about that.

What extra RAID support (RSTe, VROC) the two Kaby Lake 2066 X processors have, if any, I don't know. Probably none, since they are cheaper processors. Only IRST RAID with the X299 chipset's two M.2 slots can be used with the Kaby Lake processors, no VROC. Since VROC uses the processor's PCIe lanes, and the Kaby Lake processors only have 16 PCIe lanes (which in the past have always been allocated as 16 in one slot, or 8 and 8 in two slots), how can you use a video card if all the PCIe lanes are in use in two slots? Actually, the 16 PCIe lane processor spec for your board is one slot at x8 and the other at x4, if that is correct.
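
Roughly, the 16-lane budget problem looks like this (the x8 + x4 + x4 split is only an illustration; the actual slot bifurcation depends on the board):

# Illustrative lane budget for a 16-lane Kaby Lake-X CPU; the split below is assumed.
kaby_lake_x_lanes = 16
gpu_slot = 8            # video card dropped to x8
vroc_nvme = 2 * 4       # two NVMe SSDs on CPU lanes for a VROC array

leftover = kaby_lake_x_lanes - gpu_slot - vroc_nvme
print(f"Lanes left over: {leftover}")   # 0 -> nothing spare once a GPU and two SSDs are in place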

Intel already has NVMe SSDs that use a PCIe slot interface, the consumer 750 model, and their many Data Center DC series professional SSDs.

As you can see, I am a PC storage drive enthusiast, so all this interests me. I'd enjoy having an X299 board mainly to play with the storage interfaces, with the appropriate CPU of course.

eric_x (Newbie)
Joined: 15 Jul 2017
Posted: 20 Jul 2017 at 4:52am
I have been away from my computer and Windows Backup kept failing, so I haven't had a chance to see what the new BIOS does. I did hear back from Gigabyte support; their motherboard has the same setup, but they seem to think VROC will be able to combine chipset and CPU drives. Again, we will have to see what is supported when it actually comes out. Since it's the same lane setup, the same settings will likely work on the ASRock triple M.2 X299 boards.

Q: Does the Aorus 7/9 support 3 way RAID 0 with the onboard m.2 slots? I have 3 x Samsung 960 EVOs.

A: Currently only 2x M.2 RAID can be supported. The M2M_32G connector must work with an Intel VROC Upgrade Key to support RAID configuration.

Q: With the upgrade key can it RAID with the two onboard slots or do I need a PCIe add on card? Do you know when the VROC key will be available?

A: There are 3 x M.2 slots for X299 AORUS Gaming 7. You can use M2Q_32G and M2P_32G to setup RAID currently. After Intel VROC installed, you can setup RAID using M2M_32G, M2P_32G, and M2Q_32G together.

For Intel VROC Upgrade Key releasing date and detail information, you can follow most of major computer forums or news. Once it has releasing date, you will be able to find out immediately.


Edited by eric_x - 20 Jul 2017 at 4:53am

eric_x (Newbie)
Joined: 15 Jul 2017
Posted: 18 Jul 2017 at 12:57am
Thanks, I didn't see that. I think I will try the Windows 10 image backup in case anything happens; I already accidentally erased the array once in testing. I did notice a bug in the UEFI where, if you were in the VROC control and pressed Escape, it would bring you to a dialog with no text, so maybe they fixed that with this update.

parsec (Moderator)
Joined: 04 May 2015
Location: USA
Posted: 17 Jul 2017 at 11:50pm
Post Options Post Options   Thanks (0) Thanks(0)   Quote parsec Quote  Post ReplyReply Direct Link To This Post Posted: 17 Jul 2017 at 11:50pm
I noticed today that ASRock has a new UEFI/BIOS version for your board. Or should I say the first update from the initial factory version. But please read all of this post before you update:

http://www.asrock.com/MB/Intel/Fatal1ty%20X299%20Professional%20Gaming%20i9/index.asp#BIOS

Of interest to you may be the update to the Intel RAID ROM, according to the description. What that will provide for your board remains to be seen, but I would suggest applying the new UEFI version.

I don't know about your experience using ASRock boards, but I strongly suggest using the Instant Flash update method, labeled "BIOS" on the download page. Instant Flash is in the Tools section of the UEFI, or at least should be.

This and all UEFI/BIOS updates will reset all of the UEFI options to their defaults (with some exceptions), and remove any saved UEFI profiles you may have.

Early on with the Z170 boards, users that had NVMe SSD IRST RAID arrays discovered that after a UEFI update, even if the PC just completed POST when NOT in RAID mode and without the PCIe Remapping options enabled, their RAID arrays would be corrupted and lost! That did not happen with SATA drive RAID arrays (no PCIe Remapping settings), so no one expected that behavior.

I don't know who made the mistake here, Intel or ASRock, but it was not a good thing for NVMe SSD RAID users. Of course that was the first time IRST RAID worked with NVMe SSDs, with the then new IRST 14 version. At first we would remove our NVMe SSDs after a UEFI update, reset to RAID and Remapping, and install the SSDs again.

ASRock now does NOT reset the "SATA" mode from RAID to AHCI or reset the PCIe Remapping options during a UEFI update in their newer UEFI versions, after learning about this issue. I imagine they remembered to do that with your board, but just an FYI.