
Triple M.2 RAID setup

Fraizer (Newbie - Joined: 25 Dec 2018)
Posted: 25 Dec 2018 at 9:13pm

Sorry, the image doesn't appear:

Fraizer (Newbie - Joined: 25 Dec 2018)
Posted: 25 Dec 2018 at 9:09pm

Hello all

I just bought two NVMe Samsung 970 Pro 1 TB drives to make a RAID 0 on the Z390 motherboard chipset, with the latest drivers. I just updated the Intel ME Firmware to the 1.5MB version (v12.0.10.1127).

I thought that by doing this RAID 0 I would get better speed... but it is worse than a single 970 Pro without RAID...

Like your benchmarks, and as you can see here too, they did a test of the 970 Pro (512 GB version):

https://www.guru3d.com/articles-pages/sa...-review,15.html

Except for the Seq Q32T1 write, I get very poor speeds compared to the non-RAID 970 Pro...

My settings and benchmark (sorry for the French):

https://files.homepagemodules.de/b602300/f28t4197p66791n2_mFCEsYBH.png


I ran many tests, both on a cold computer and after it had been running for an hour. It is a fresh Windows 10 Pro install with all the latest drivers and the latest BIOS.

thank you

parsec (Moderator - Joined: 04 May 2015 - Location: USA)
Posted: 12 Oct 2017 at 3:16am

Originally posted by Brighttail:

The 16k vs 128k is a valid suggestion. I'll bring that up to him.



The difference in benchmark results between the default 16K stripe and the 64K or 128K stripe sizes was quite clear for everyone who tried both. The larger stripe sizes consistently gave higher large-file sequential read and write speeds.

Intel apparently has the default RAID 0 stripe size set to 16K for SSDs specifically, in order to preserve the small file, 4K random read speed. There is about a 10% reduction in the 4K random read speed in RAID 0 compared to a single SSD, for some reason.

You can easily select the RAID 0 stripe size when creating the array, but it is easy to miss that option if you don't have experience in the creation process. The bad news is, we cannot change the stripe size of an existing RAID array without creating it over again.
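
As a rough illustration of why the stripe size matters, here is a minimal sketch (a throwaway helper of my own, not anything from the IRST software): it just maps a logical read onto the RAID 0 members, showing that a 4K read lands on one drive no matter what, while a large sequential read is chopped into far fewer, larger per-drive requests with a 128K stripe than with the 16K default.

```python
# Minimal sketch (not tied to IRST or any real RAID tool): map a logical
# read onto RAID 0 members to show why the stripe size matters.

def raid0_segments(offset, length, stripe_bytes, members):
    """Return (member, bytes) pairs for a logical read of `length` bytes."""
    segments = []
    pos, remaining = offset, length
    while remaining > 0:
        member = (pos // stripe_bytes) % members        # which drive holds this stripe
        chunk = min(stripe_bytes - (pos % stripe_bytes), remaining)
        segments.append((member, chunk))
        pos += chunk
        remaining -= chunk
    return segments

KB = 1024
for stripe in (16 * KB, 128 * KB):
    seq = raid0_segments(0, 1024 * KB, stripe, members=2)   # 1 MB sequential read
    rnd = raid0_segments(0, 4 * KB, stripe, members=2)      # one 4K random read
    drives_hit = len({member for member, _ in rnd})
    print(f"{stripe // KB:>3}K stripe: a 1 MB read becomes {len(seq)} per-drive "
          f"requests, a 4K read touches {drives_hit} drive")
```

With the 16K default, every large transfer is split into eight times as many per-drive requests as with 128K, while small 4K reads land on a single drive either way, which matches the benchmark behavior described above.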

Brighttail (Newbie - Joined: 15 Aug 2017 - Location: Canada)
Posted: 12 Oct 2017 at 12:04am

The 16k vs 128k is a valid suggestion. I'll bring that up to him.

parsec (Moderator - Joined: 04 May 2015 - Location: USA)
Posted: 13 Sep 2017 at 11:05am

Originally posted by Brighttail:

So I have a friend who got the Asus Apex 6. Interestingly enough, he tried a RAID 0 array with both of his M.2s on the chipset DIMM.2 slot. His two 960 EVOs in RAID 0 through the chipset were slower than a single one using the chipset PCIe lanes.

Amazingly enough, he was able to create a RAID 0 array using one M.2 on a CPU PCIe lane and one M.2 on a chipset PCIe lane. We had talked about it, and it looks like you can now create a RAID 0 array using PCIe lanes from both the CPU and the PCH. :) While not quite double, he was able to get 5500 MB/s sequential reads and 2500 MB/s writes.

Making a RAID 0 array using only the CPU PCIe lanes, he was able to get 6400 MB/s sequential reads and 3150 MB/s sequential writes.

Another interesting note: it looks like there may be an issue with RAID 0 and the chipset lanes, at least with the Asus board. Hopefully they will fix that with a BIOS update, but I have heard of it happening with ASRock and MSI too.


The 960 EVOs in RAID 0 being slower than a single 960 EVO is obviously due to the RAID 0 array being created with the default 16K stripe size. We went through that when the Z170 boards were first released. A few other forum members and I experimented with the then-new RAID support for NVMe SSDs and noticed the same thing. We found that the 64K and 128K stripe sizes provided much better benchmark results.

That's not to say that 960 EVOs in RAID 0 with the larger stripe sizes are twice as "fast" as a single 960 EVO; they aren't. We've always suspected that the DMI3 interface of the chipset is a bottleneck. That assumes everything else involved here (the PCIe remapping and the IRST software) is perfect, so there are other variables in play.
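
To put rough numbers on that suspicion (back-of-the-envelope figures I'm assuming here, not measurements): DMI 3.0 is electrically a PCIe 3.0 x4 link, so everything hanging off the chipset shares roughly 3.9 GB/s of raw bandwidth, which is close to what a single one of these SSDs can already deliver on its own.

```python
# Back-of-the-envelope DMI3 ceiling (assumed figures, not measurements).
lanes = 4                     # DMI 3.0 is electrically a PCIe 3.0 x4 link
gt_per_lane = 8.0             # PCIe 3.0 signalling rate, GT/s per lane
encoding = 128 / 130          # 128b/130b line encoding
link_mb_s = lanes * gt_per_lane * encoding * 1000 / 8   # ~3938 MB/s raw

single_ssd_mb_s = 3200        # rough sequential read spec of one such NVMe SSD
print(f"DMI3 raw ceiling        : ~{link_mb_s:.0f} MB/s (shared by everything on the PCH)")
print(f"Two-drive RAID 0 demand : ~{2 * single_ssd_mb_s} MB/s")
# So a chipset RAID 0 tops out near a single drive's speed, no matter how
# many SSDs are in the array, before protocol overhead is even counted.
```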

About the faster benchmark results using VROC RAID: if that is true (and I'm not saying it isn't, since the X299 platform is the first one that can create RAID arrays with the Intel RAID software using NVMe SSDs in the PCIe 3.0 slots), then that is great, and it would highlight the limitation of the chipset's DMI3 interface. It also uses different Intel RAID software, RSTe. I'd like to see that myself.

The RAID array with a combination of SSDs in a PCIe lane from the CPU, and the chipset makes me wonder if that was using the Intel RAID software or the Windows RAID capability. If that was using the Intel RAID software, that is another new feature that has not been talked about at all.

One reality that always seems to be overlooked is that booting/loading Windows or any OS is not simply reading one huge file or several large files. Mostly small to medium-sized files are read when loading an OS, while the sequential speed tests in benchmarks use file sizes from 250KB to 1MB+, depending upon the benchmark being used. The point is that super fast sequential read speeds do not enhance the booting/loading of an OS. Even with the much improved 4K random and 4K high-queue-depth performance that a single NVMe SSD provides over SATA III SSDs, the reality is that NVMe SSDs do not provide substantially quicker OS boot times. Something else is the bottleneck, possibly the file system itself, which was not designed for flash memory (NAND storage, SSD) drives. If we were working with many very large single files, or unzipping compressed files, these RAID 0 arrays would be faster, but for many typical tasks they aren't noticeably faster.
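
If you want to see how lopsided the file-size mix of an OS install really is, a quick tally along these lines makes the point (just a sketch; the directory and the size buckets are arbitrary choices of mine):

```python
# Sketch: count files by size under an OS directory to show how much of an
# install is small files. The path and the size buckets are arbitrary choices.
import os
from collections import Counter

def bucket(size_bytes):
    if size_bytes < 16 * 1024:
        return "< 16 KB"
    if size_bytes < 128 * 1024:
        return "16-128 KB"
    if size_bytes < 1024 * 1024:
        return "128 KB - 1 MB"
    return ">= 1 MB"

counts = Counter()
for dirpath, _dirs, files in os.walk(r"C:\Windows\System32"):
    for name in files:
        try:
            counts[bucket(os.path.getsize(os.path.join(dirpath, name)))] += 1
        except OSError:
            pass                          # skip files we cannot stat

total = sum(counts.values()) or 1
for label in ("< 16 KB", "16-128 KB", "128 KB - 1 MB", ">= 1 MB"):
    print(f"{label:>13}: {100 * counts[label] / total:5.1f}% of files")
```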
 

Brighttail (Newbie - Joined: 15 Aug 2017 - Location: Canada)
Posted: 13 Sep 2017 at 9:37am

So I have a friend who got the Asus Apex 6. Interestingly enough, he tried a RAID 0 array with both of his M.2s on the chipset DIMM.2 slot. His two 960 EVOs in RAID 0 through the chipset were slower than a single one using the chipset PCIe lanes.

Amazingly enough, he was able to create a RAID 0 array using one M.2 on a CPU PCIe lane and one M.2 on a chipset PCIe lane. We had talked about it, and it looks like you can now create a RAID 0 array using PCIe lanes from both the CPU and the PCH. :) While not quite double, he was able to get 5500 MB/s sequential reads and 2500 MB/s writes.

Making a RAID 0 array using only the CPU PCIe lanes, he was able to get 6400 MB/s sequential reads and 3150 MB/s sequential writes.

Another interesting note: it looks like there may be an issue with RAID 0 and the chipset lanes, at least with the Asus board. Hopefully they will fix that with a BIOS update, but I have heard of it happening with ASRock and MSI too.

Brighttail (Newbie - Joined: 15 Aug 2017 - Location: Canada)
Posted: 15 Aug 2017 at 12:43pm

Originally posted by parsec:

If my previous post did not end your fears, I've got nothing else that will make a difference.

I'm sorry I"m still new to all this.  The biggest question cam from friends who used their z270 to create a bootable RAID drive and was limited to the 3500MB/s.  I see what you are doing from a bootable RAID device and it is impressive, so i'm not sure if they just tested wrong or have no idea what they are doing.

It is nice to see that the chipset can provide some decent numbers. I technically have four Samsung drives: three 960s and one 950. My plan is to give the 950 to the wife and use the other three with a 28-lane CPU, probably on an Asus APEX VI motherboard. All the drives would then be on its special DIMM.2 slots, and I already have some Dominator Platinum Airflow fans that will cool the RAM and the DIMM.2 all at once; it doesn't look half bad, even with the RGB off.

My concern was spending over 1300 CAD on a motherboard/CPU and not being able to create an OS drive and a decent storage RAID drive. I know I could use PCIe slots, but I spent a lot of time figuring out a way to mount my GPU vertically. It looks awesome, but the drawback is that it only allows me to use ONE PCIe slot at the top, and even then I had to find a special PCIe-to-M.2 board. But I did it, and despite not using hard tubing, I think it looks kind of cool, especially the special edition Dominator Platinum RAM. I could do hard tubing, but I like being able to do minor maintenance without having to drain the system. The boot drive sits on the PCIe board in the first PCIe slot. My second M.2 is in the M.2 slot on the board, and the third is on a janky half-sized board in PCIe slot 2. It works, but it ain't pretty. :) I think the APEX or even the RAMPAGE Extreme VI might change that.

Thanks for the information. I may pull the trigger soon, and if I do, I can always return the parts if there is an issue, provided I buy from the right spot.




Edited by Brighttail - 15 Aug 2017 at 12:45pm

parsec (Moderator - Joined: 04 May 2015 - Location: USA)
Posted: 15 Aug 2017 at 11:49am

If my previous post did not end your fears, I've got nothing else that will make a difference.


Brighttail (Newbie - Joined: 15 Aug 2017 - Location: Canada)
Posted: 15 Aug 2017 at 11:09am

Originally posted by parsec:

Ever since we first saw the apparent limit of the RAID 0 arrays of PCIe NVMe SSDs using the chipset DMI3/PCIe 3.0 lanes (24 of them), I began doubting the ability of the benchmark programs to measure them correctly. Particularly when you use the same SSDs, using the identical interface, in a Windows software RAID array, and the benchmark results are different. That simply does not make sense. With Windows in control of their software RAID, who knows what happens.

So I did some experimentation with one of the benchmark programs that allows you to configure the tests somewhat, which produced a much different result. For example, this is a three SSD RAID 0 array using IRST software on my ASRock Z270 Gaming K6 board, run through the chipset of course:


What benchmark test did you use to test your 960 Pros in Windows software RAID 0? I'd like to see the results for everything besides sequential. I've never created one. High sequential read speeds don't impress me, and are useless for booting an OS. 4K random read speed, and 4K at high queue depth (not that it ever gets deep, not with an NVMe SSD, or even SATA) is what counts for running an OS.

I still don't trust storage benchmark programs, they were designed to work with SATA drives, and need an overhaul. Plus most if not all of the benchmarks usually posted are run with the PCH ASPM power saving options enabled (by default) in the UEFI/BIOS, new options for Z170 and Z270. CPU power saving options also affect storage benchmark results. All of the above add latency.

As a long time user of RAID 0 arrays with SSDs as the OS drive, I can assure you that the booting speed difference between one 960 Pro and three in a RAID 0 array will be exactly zero. Actually, the single 960 Pro will be faster. The real world result I've experienced many times.

RAID takes time to initialize, and NVMe SSDs, each with their own NVMe controller to initialize, overall takes longer than a single NVMe SSD. Any difference in actual booting speed is offset by the RAID and multiple NVMe initialization.

A mother board runs POST on and initializes one SATA controller in the chipset. That must also occur for IRST RAID, which acts as the NVMe driver for NVMe SSDs in an IRST RAID 0 array of NVMe SSDs. Yet another factor that adds to the startup time. VROC will add something similar, since it uses the Intel RSTe RAID software for its RAID arrays. Who knows, maybe VROC can perform better, although the PCIe lanes also now have ASPM power saving features. But X79 board users hated RSTe with SATA drives, so much so that Intel later added IRST software support to X79.

Many first time users of NVMe SSD as the OS/boot drive are usually disappointed, since their boot time is not improved. It becomes longer, again offset by the additional step of POST running on the NVMe controller, and the additional execution of the NVMe OptionROM. It really all makes sense if you understand what is involved.



Hi there, and thanks for the reply. First off, let me be clear and say I'm not wanting to use any type of RAID for my OS drive. As you said, that is silly. I do some video editing and I move large files back and forth from my NAS constantly, so that is why I am using RAID 0. It is really a drive to take the videos, bring them onto my computer upstairs quickly, edit them, and move them back. I have some redundancy built in depending on how much I edit, but suffice it to say I'm not using this for just normal everyday computing. I don't want to buy a RAID card either.

As for benchmarks, I have used ATTO, CrystalMark, the Samsung Magician one, and a couple of others. CrystalMark gives me this:





This is off an X99 that uses CPU PCIe lanes. I'm wondering, if I get an X299 and build this RAID with my M.2s on a DIMM.2 slot that goes through the chipset, will I get these speeds? My fear is that the RAID drive will be limited to around 3500 MB/s. Of course I would love to use VROC with my Samsung drives, but that isn't going to happen, as I don't have a Xeon processor and Intel is being stingy with the technology. :(


Edited by Brighttail - 15 Aug 2017 at 11:25am

parsec (Moderator - Joined: 04 May 2015 - Location: USA)
Posted: 15 Aug 2017 at 10:13am

Originally posted by Brighttail:

Hey there,

New user here, and I'm trying to understand RAID arrays using PCIe lanes from the CPU versus the chipset.

On my X99, I run two 960 Pros in RAID 0 using Windows software. Obviously I'm using the CPU PCIe lanes, and I can get sequential reads of 7000 MB/s.

I have seen folks using the Z170/Z270 motherboards and creating RAID 0 arrays using the chipset lanes, but they are only getting sequential reads of 3500 MB/s.

Why is this? Are the CPU lanes able to 'double up' while the chipset lanes cannot?

I ask because some of the motherboards I'm looking at disable certain lanes when using a 28-lane CPU, and thus I may only be able to use chipset lanes to create a RAID 0 drive. If this is the case, then I'm guessing it would be SLOWER than the RAID 0 drive I currently have, since that one uses CPU PCIe lanes instead of chipset lanes.

I agree, it would be a great thing if you could use a CPU-lane M.2 and a chipset-lane M.2 to create a RAID drive across both. That would solve a lot of issues with a 28-lane CPU. Hell, for me it wouldn't even have to be bootable, just a non-OS drive!

Am I correct in this? I would love to do a triple M.2 RAID, but it sounds like it would be slower as a boot drive, since it has to go through the chipset, than a non-OS drive using the CPU lanes.

Finally, concerning VROC, I don't think that even WITH a key you'll be able to use the 960 EVO. From what I understand, VROC is only for Intel drives UNLESS you are using a Xeon processor. The key is only needed if you want to do RAID 1, and another key for RAID 5/10. It really does suck that Intel won't allow non-Intel drives to use VROC unless you have a Xeon. :(


Ever since we first saw the apparent limit of the RAID 0 arrays of PCIe NVMe SSDs using the chipset DMI3/PCIe 3.0 lanes (24 of them), I began doubting the ability of the benchmark programs to measure them correctly. Particularly when you use the same SSDs, using the identical interface, in a Windows software RAID array, and the benchmark results are different. That simply does not make sense. With Windows in control of their software RAID, who knows what happens.

So I did some experimentation with one of the benchmark programs that allows you to configure the tests somewhat, which produced a much different result. For example, this is a three SSD RAID 0 array using IRST software on my ASRock Z270 Gaming K6 board, run through the chipset of course:



Can you see the simple configuration change that was made in the picture above?

Here are two 960 EVOs in RAID 0, same board:



Note that both of those arrays are the C:/OS drive, so they are servicing OS requests at the same time as the benchmark runs. We do see the usual decrease in 4K read performance in RAID 0, which is not a good compromise for an OS drive.

What benchmark test did you use to test your 960 Pros in Windows software RAID 0? I'd like to see the results for everything besides sequential. I've never created one. High sequential read speeds don't impress me, and are useless for booting an OS. 4K random read speed, and 4K at high queue depth (not that it ever gets deep, not with an NVMe SSD, or even SATA) is what counts for running an OS.
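
Some rough Little's-law style arithmetic shows why the low queue depth matters (the latency figure below is an assumption for illustration, and real drives don't hold latency perfectly flat as the queue deepens): 4K throughput is roughly the queue depth divided by the per-I/O latency, so at QD1 even an SSD rated for 3500 MB/s sequential only delivers a few tens of MB/s of 4K reads.

```python
# Rough arithmetic: 4K random read throughput ~= queue_depth / latency.
# The ~90 us latency is an assumed illustration figure; real drives also
# don't keep latency flat as the queue deepens.
def four_k_mb_s(queue_depth, latency_us, block_kb=4):
    iops = queue_depth / (latency_us / 1_000_000)    # I/Os completed per second
    return iops * block_kb / 1024                    # convert to MB/s

for qd in (1, 4, 32):
    print(f"QD{qd:<2} at ~90 us per I/O: ~{four_k_mb_s(qd, 90):.0f} MB/s of 4K reads")
# QD1 lands in the tens of MB/s even on an NVMe SSD with multi-GB/s
# sequential numbers, which is why 4K/low-QD results track how an OS
# drive actually feels.
```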

I still don't trust storage benchmark programs, they were designed to work with SATA drives, and need an overhaul. Plus most if not all of the benchmarks usually posted are run with the PCH ASPM power saving options enabled (by default) in the UEFI/BIOS, new options for Z170 and Z270. CPU power saving options also affect storage benchmark results. All of the above add latency.

As a long time user of RAID 0 arrays with SSDs as the OS drive, I can assure you that the booting speed difference between one 960 Pro and three in a RAID 0 array will be exactly zero. Actually, the single 960 Pro will be faster. The real world result I've experienced many times.

RAID takes time to initialize, and NVMe SSDs, each with their own NVMe controller to initialize, overall takes longer than a single NVMe SSD. Any difference in actual booting speed is offset by the RAID and multiple NVMe initialization.

A mother board runs POST on and initializes one SATA controller in the chipset. That must also occur for IRST RAID, which acts as the NVMe driver for NVMe SSDs in an IRST RAID 0 array of NVMe SSDs. Yet another factor that adds to the startup time. VROC will add something similar, since it uses the Intel RSTe RAID software for its RAID arrays. Who knows, maybe VROC can perform better, although the PCIe lanes also now have ASPM power saving features. But X79 board users hated RSTe with SATA drives, so much so that Intel later added IRST software support to X79.

Many first time users of NVMe SSD as the OS/boot drive are usually disappointed, since their boot time is not improved. It becomes longer, again offset by the additional step of POST running on the NVMe controller, and the additional execution of the NVMe OptionROM. It really all makes sense if you understand what is involved.
