Triple M.2 RAID setup
Brighttail
Newbie Joined: 15 Aug 2017 Location: Canada Status: Offline Points: 5
Hi there, and thanks for the reply. First off, let me be clear: I'm not wanting to use any type of RAID for my OS drive. As you said, that is silly. I do some video editing and I constantly move large files back and forth from my NAS, which is why I'm using RAID 0. It's really a drive to take the videos, bring them onto my computer upstairs quickly, edit them, and move them back. I have some redundancy built in depending on how much I edit, but suffice to say I'm not using this for just normal everyday computing. I don't want to buy a RAID card either.

As for benchmarks, I have used ATTO, CrystalDiskMark, the Samsung Magician one, and a couple of others. CrystalDiskMark gives me this: This is off an X99 board that uses CPU PCIe lanes. I'm wondering: if I get an X299 board and build this RAID with my M.2s on a DIMM.2 module that goes through the chipset, will I get these speeds? My fear is that the RAID drive will be limited to around 3500 MB/s. Of course I would love to use VROC with my Samsung drives, but that isn't going to happen, as I don't have a Xeon processor and Intel is being stingy with the technology. :(

Edited by Brighttail - 15 Aug 2017 at 11:25am
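For what it's worth, that ~3500 MB/s figure lines up with the raw capacity of the chipset's DMI 3.0 uplink, which is electrically a PCIe 3.0 x4 link. A back-of-envelope sketch using the standard PCIe 3.0 numbers (real-world throughput lands a bit lower due to protocol overhead):

```python
# Back-of-envelope: why chipset-attached NVMe tops out around 3.5-3.9 GB/s.
# DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link.
LANES = 4
GT_PER_SEC = 8e9          # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130      # 128b/130b line encoding

bytes_per_lane = GT_PER_SEC * ENCODING / 8   # bits -> bytes per second
dmi3_gbs = LANES * bytes_per_lane / 1e9      # GB/s for the whole link

print(f"DMI 3.0 raw ceiling: {dmi3_gbs:.2f} GB/s")  # ~3.94 GB/s
# Packet and protocol overhead eat several percent more, so sustained
# sequential transfers through the chipset land near 3.5 GB/s -- no
# matter how many drives sit behind it.
```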
parsec
Moderator Group Joined: 04 May 2015 Location: USA Status: Offline Points: 4996
If my previous post did not end your fears, I've got nothing else that will make a difference.
Brighttail
Newbie Joined: 15 Aug 2017 Location: Canada Status: Offline Points: 5
[QUOTE=parsec]If my previous post did not end your fears, I've got nothing else that will make a difference.[/QUOTE]

I'm sorry, I'm still new to all this. The biggest question came from friends who used their Z270 to create a bootable RAID drive and were limited to 3500 MB/s. I see what you are doing with a bootable RAID device and it is impressive, so I'm not sure if they just tested wrong or have no idea what they are doing. It is nice to see that the chipset can provide some decent numbers.

I technically have four Samsung drives: three 960s and one 950. My plan is to give the 950 to the wife and use the other three with a 28-lane CPU, probably on an Asus APEX VI motherboard. All the drives would then be on its special DIMM.2 slots, and I already have some Dominator Platinum Airflow fans that will cool the RAM and the DIMM.2 all at once. It doesn't look half bad, even with the RGB off.

My concern was spending over 1300 CAD on a motherboard/CPU and not being able to create an OS drive and a decent storage RAID drive. I know I could use PCIe slots, but I spent a lot of time figuring out a way to mount my GPU vertically. It looks awesome, but the drawback is that it only lets me use one PCIe slot at the top, and even then I had to find a special PCIe-to-M.2 board. I did it, and despite not using hard tubing, I think it looks kind of cool, especially the special edition Dominator Platinum RAM. I could do hard tubing, but I like being able to do minor maintenance without having to drain the system.

The boot drive sits on the PCIe board in the first PCIe slot. My second M.2 is in the M.2 slot on the board, and the third is on a janky half-sized board in PCIe slot 2. It works, but it ain't pretty. :) I think the APEX or even the RAMPAGE Extreme VI might change that.

Thanks for the information. I may pull the trigger soon, and if I do, I can always return them if there is an issue, provided I buy from the right spot.

Edited by Brighttail - 15 Aug 2017 at 12:45pm
Brighttail
Newbie Joined: 15 Aug 2017 Location: Canada Status: Offline Points: 5
So I have a friend who got the Asus Apex VI. Interestingly enough, he tried a RAID 0 array with both of his M.2s on the chipset DIMM.2 slot. His two 960 EVOs in RAID 0 using the chipset PCIe lanes were SLOWER than a single drive on those same lanes.

Amazingly enough, he was able to create a RAID 0 array using one M.2 on a CPU PCIe lane and one M.2 on a chipset PCIe lane. We had talked about it, and it looks like you can now create a RAID 0 array using PCIe lanes from both the CPU and the PCH. :) While not quite double, he was able to get 5500 MB/s sequential reads and 2500 MB/s writes. Making a RAID 0 array using only the CPU PCIe lanes, he was able to get 6400 MB/s sequential reads and 3150 MB/s sequential writes.

Another interesting note: it looks like there may be an issue with RAID 0 on the chipset lanes, at least with the Asus board. Hopefully they will fix that with a BIOS update, but I have heard of it happening with ASRock and MSI boards too.
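Those numbers fit a simple model: RAID 0 throughput scales with drive count until it hits the bandwidth cap of whatever link the member drives sit behind. A rough sketch (the ~3200 MB/s single-drive read figure is an assumed typical 960 EVO result, not a number from this thread):

```python
def raid0_throughput(per_drive_mbps, n_drives, link_cap_mbps):
    """Ideal RAID 0 throughput: n times one drive, clipped by the shared link."""
    return min(per_drive_mbps * n_drives, link_cap_mbps)

SINGLE_DRIVE_READ = 3200   # MB/s, assumed typical 960 EVO sequential read
DMI3_CAP = 3900            # MB/s, chipset uplink shared by both drives
CPU_CAP = 2 * 3940         # MB/s, two independent PCIe 3.0 x4 links

# Two drives behind the chipset: clipped at the DMI ceiling (and IRST/DMI
# overhead can drag real results below even a single drive).
print(raid0_throughput(SINGLE_DRIVE_READ, 2, DMI3_CAP))   # 3900
# Two drives on CPU lanes: close to ideal 2x scaling, in the same
# ballpark as the 6400 MB/s measurement above.
print(raid0_throughput(SINGLE_DRIVE_READ, 2, CPU_CAP))    # 6400
```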
parsec
Moderator Group Joined: 04 May 2015 Location: USA Status: Offline Points: 4996
The 960 EVOs in RAID 0 being slower than a single 960 EVO is obviously due to the RAID 0 array being created with the default 16K stripe size. We went through that when the Z170 boards were first released. A few other forum members and I experimented with the then-new RAID support for NVMe SSDs and noticed the same thing. We found that the 64K and 128K stripe sizes provided much better benchmark results. Not that 960 EVOs in RAID 0 with the other stripe sizes were twice as "fast" as a single 960 EVO; they aren't. We've always suspected that the DMI3 interface of the chipset is a bottleneck. That assumes the other things involved here (PCIe remapping and the IRST software) are perfect, so there are other variables involved.

About the faster benchmark results using the VROC RAID: if that is true (and I'm not saying it isn't, because the X299 platform is the first one able to create RAID arrays with the Intel RAID software using NVMe SSDs in the PCIe 3.0 slots), then that is great, and it would highlight the limitation of the chipset's DMI3 interface. It also uses different Intel RAID software, RSTe. I'd like to see that myself.

The RAID array with a combination of one SSD on a PCIe lane from the CPU and one from the chipset makes me wonder whether that was using the Intel RAID software or the Windows RAID capability. If it was the Intel RAID software, that is another new feature that has not been talked about at all.

One reality that seems to always be overlooked is that booting/loading Windows or any OS is not simply reading one huge file or several large files. Mainly many small to medium size files are read when loading an OS. The sequential speed tests in benchmarks use file sizes ranging from 250KB to 1MB+, depending upon the benchmark being used. The point is that super fast sequential read speeds do not enhance the booting/loading of an OS.

Even with the much improved 4K random and 4K high queue depth performance that a single NVMe SSD provides over SATA III SSDs, the reality is that NVMe SSDs do not provide substantially quicker OS boot times. Something else is the bottleneck, possibly the file system itself, which was not designed for flash memory (NAND storage, SSDs) drives. If we were working with many very large single files, or unzipping compressed files, these RAID 0 arrays would be faster, but for many typical tasks they aren't noticeably faster.
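The point about boot workloads can be made concrete with a toy model: when the OS reads thousands of small files, fixed per-I/O latency dominates, so a huge jump in sequential bandwidth barely moves total time. All the numbers below are illustrative assumptions, not measurements:

```python
def read_time_s(n_files, file_bytes, seq_mb_s, latency_s):
    """Toy model: each file costs one fixed I/O latency plus transfer time."""
    per_file = latency_s + (file_bytes / 1e6) / seq_mb_s
    return n_files * per_file

# Assumed workload: 20k small files of 4 KiB each, 100 us per I/O.
N, SIZE, LATENCY = 20_000, 4096, 100e-6

t_single = read_time_s(N, SIZE, 550, LATENCY)    # SATA-class sequential speed
t_raid   = read_time_s(N, SIZE, 6400, LATENCY)   # NVMe RAID 0 sequential speed

print(f"{t_single:.2f}s vs {t_raid:.2f}s")  # ~2.15s vs ~2.00s
# An ~11x sequential-bandwidth advantage buys only ~7% on this workload,
# because the fixed per-I/O latency dominates small reads.
```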
Brighttail
Newbie Joined: 15 Aug 2017 Location: Canada Status: Offline Points: 5
The 16K vs 128K stripe size is a valid suggestion. I'll bring that up to him.
parsec
Moderator Group Joined: 04 May 2015 Location: USA Status: Offline Points: 4996
The difference in benchmark results between the default 16K stripe and the 64K or 128K stripe sizes was quite clear for everyone who tried both. The larger stripe sizes consistently gave higher large file sequential read and write speeds. Intel apparently has the default RAID 0 stripe size set to 16K for SSDs specifically in order to preserve the small file, 4K random read speed. There is about a 10% reduction in the 4K random read speed in RAID 0 compared to a single SSD, for some reason. You can easily select the RAID 0 stripe size when creating the array, but it is easy to miss that option if you don't have experience with the creation process. The bad news is that we cannot change the stripe size of an existing RAID array without recreating it from scratch.
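To illustrate why the stripe size matters, here is a small model of how RAID 0 maps a single I/O request onto member drives. This is a sketch of the general striping scheme, not of IRST internals:

```python
def drives_touched(offset, length, stripe_size, n_drives):
    """Return the set of RAID 0 member drives one request lands on."""
    first = offset // stripe_size
    last = (offset + length - 1) // stripe_size
    return {stripe % n_drives for stripe in range(first, last + 1)}

KiB = 1024
# A 4K random read fits inside one stripe -> one drive, no parallelism,
# which is why 4K QD1 numbers barely change (or dip ~10%) in RAID 0.
print(drives_touched(0, 4 * KiB, 128 * KiB, 2))      # {0}
# A 1 MB sequential read spans 8 stripes -> both drives work in parallel.
print(drives_touched(0, 1024 * KiB, 128 * KiB, 2))   # {0, 1}
# With the default 16K stripe, that same 1 MB read is chopped into 64
# pieces: both drives still participate, but each handles far more,
# smaller requests, which hurts large-transfer throughput.
print(len(range(0, 1024 * KiB, 16 * KiB)))           # 64
```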
Fraizer
Newbie Joined: 25 Dec 2018 Status: Offline Points: 2
Hello all,

I just bought two NVMe Samsung 970 Pro 1 TB drives to make a RAID 0 array on a Z390 motherboard chipset with the latest drivers; I just updated the Intel ME firmware to the 1.5 MB version (v12.0.10.1127). I thought that by making this RAID 0 I would get better speed, but it is worse than a single 970 Pro without RAID, like your bench, and like you can see here where they test the 970 Pro (512 GB version): https://www.guru3d.com/articles-pages/sa...-review,15.html

Except for the sequential write Q32T1, I get very bad speeds compared to a non-RAID 970 Pro. My settings and bench (sorry for the French): https://files.homepagemodules.de/b602300/f28t4197p66791n2_mFCEsYBH.png

I ran many tests, both on a cold computer and after it had been running for an hour. It is a fresh Windows 10 Pro install with all the latest drivers and the latest BIOS.

Thank you
Fraizer
Newbie Joined: 25 Dec 2018 Status: Offline Points: 2
Sorry, the image doesn't appear: