RAID Setup Menu
parsec | Moderator Group | Joined: 04 May 2015 | Location: USA | Status: Offline | Points: 4996
Sorry to say, we can't do everything we'd like on every PC platform, particularly with very new types of hardware like NVMe SSDs. We cannot ignore the details of the hardware specifications, such as the number of PCIe lanes available or the features the chipset provides and supports. If we don't closely check every detail of a board's specifications, and the specs of the processors it can use, we are at fault if the hardware does not provide the features we want.

The features a board supports are not controlled by the motherboard manufacturer. All features are provided by the CPU and chipset used on the board, plus any required software from the CPU and chipset manufacturer; that is Intel in this case. A motherboard manufacturer cannot change the features and limitations of the CPU and chipset, and motherboard manufacturers do not write the OS drivers or Option ROMs included in a board's firmware; that is done strictly by the CPU and chipset manufacturer. Assuming anything about these things is a mistake, and hating ASRock and Newegg over it does not make sense.

The ONLY chance, which I highly doubt is even possible, is if Intel, and ONLY Intel, found a way to provide IRST RAID 0 for PCIe NVMe SSDs on the X99 chipset. Given the differences between the X99 and Z170 chipsets, that will never happen IMO, so I suggest you don't hope or plan on it. The Z170 and the other Intel 100 series chipsets that support RAID are the first and ONLY platforms that can use PCIe NVMe SSDs in IRST RAID.

NVMe is a new and different storage protocol from SATA, using its own driver and storage controllers that are part of the SSD itself; SATA controllers are part of the board's chipset, not part of the drive. How Intel managed to get their IRST RAID driver to work with NVMe SSDs is a small miracle, but there is much more involved than that software alone. Don't expect a different motherboard manufacturer to provide RAID with PCIe NVMe SSDs on their X99 boards; they can't.
If the hardware/chipset does not provide that feature, it cannot be added later. Sorry to say, software RAID in Windows is normally not an option either. I have NO IDEA if the following would work, but you could try cloning an existing OS installation onto a Windows software RAID array. If that worked, though, we'd know about it by now, and I've never heard that it does.

Have you ever used a RAID 0 array of SATA SSDs? I have many times, and I've had a RAID 0 array of 950 Pros. Why people think RAID 0 will be a magical, super-speed result, I don't understand; in reality, it isn't. IF you are working with very large files and folders, 100 MB to 1 GB or more, all the time, then a RAID 0 array can read and write them in half the time of a single drive. Otherwise, for booting an OS, there is no difference in speed. Plus, the way the Intel IRST driver works with PCIe NVMe SSDs now, you will NOT get twice the large-file sequential read speed. The best we saw with two 950 Pros was about 3,200 MB/s. I don't expect the new 960 SSDs to do any better, again limited by the RAID software for some reason.
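The scaling shortfall described above can be sanity-checked with simple arithmetic. This is an illustrative sketch, not a benchmark: the single-drive figure is an assumption, and only the 3,200 MB/s array result comes from the thread.

```python
# Rough model of RAID 0 sequential-read scaling (illustrative figures only).
# Ideal RAID 0 scaling is linear in the number of drives; the observed
# two-drive array peaked well below that.

def ideal_raid0_throughput(single_drive_mbps: float, n_drives: int) -> float:
    """Ideal (linear) scaling: every drive streams its stripes in parallel."""
    return single_drive_mbps * n_drives

def scaling_efficiency(observed_mbps: float, single_drive_mbps: float,
                       n_drives: int) -> float:
    """Fraction of the ideal throughput actually achieved."""
    return observed_mbps / ideal_raid0_throughput(single_drive_mbps, n_drives)

single = 2500.0    # assumed single 950 Pro sequential read, MB/s
observed = 3200.0  # best two-drive RAID 0 result quoted above

print(ideal_raid0_throughput(single, 2))                  # 5000.0 MB/s ideal
print(round(scaling_efficiency(observed, single, 2), 2))  # 0.64 achieved
```

Under these assumed numbers the array delivers only about two-thirds of the theoretical doubling, which matches the "you will NOT get twice the speed" observation.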
Xaltar | Moderator Group | Joined: 16 May 2015 | Location: Europe | Status: Offline | Points: 25633
Software RAID cannot be bootable, period. The array is only populated once the OS loads, so the drives are seen as individual drives during boot. I spent a lot of time and effort trying to get a software RAID array to be bootable some years back, and had someone rather rudely inform me of this fact.

The only way an X99 board will support NVMe RAID is if a 3rd party controller is added to the board directly by the manufacturer, or as an add-in card. I have yet to see any such controllers or add-in cards on the market. I am not saying they don't exist, I just have not seen any (searched to no avail). I don't doubt there will likely be add-in cards with this functionality at some later date. Then you may well be able to indulge your RAID 0 NVMe itch, but for now it is not only rather pointless but also only possible on 100 series based boards. Take a look online for RAID 0 NVMe performance results; you will quickly see that the performance is not worth the cost of the two (or more) drives. Parsec is our resident storage guru, and if he says the performance is not worth it, I believe him.
eComposer | Newbie | Joined: 30 Dec 2016 | Location: LA | Status: Offline | Points: 26
Thanks Parsec. Interestingly, I had followed all the steps listed above (except I used a 64 KB stripe per Tweaktown, and did not clear the CMOS). I'm not sure what is going wrong; it just won't offer the RAID 0 array as a destination to install Windows to...

From what you're saying, though, RAID 0 has performance penalties compared to just using a single SSD. I'm mixing music with multiple channels (often 50+, depending on using "stems"), each with multiple plugins running (including many high-end emulations and samples, effects, amp simulators, console emulations, etc.), and I/O is key to avoiding dropouts, distortion, and other issues detracting from the real-time audio output. Would you suggest just using a single PCIe 3.0 x4 SSD as the boot drive instead of RAID 0, since I/O is the objective here to support the best mixing conditions I can achieve? (Essentially looking for the best streaming config available.) I thought RAID 0 using the M.2 PCIe 3.0 x4 slots leveraging NVMe on the ASRock Z170 Extreme7+ was supposed to deliver the best I/O, but your comment seems to indicate the reverse (that is, non-RAID over RAID 0).

FYI: I'm using the ASRock Thunderbolt 2 card with a high-end Thunderbolt audio interface to minimize latency (especially when recording multiple tracks in real time and monitoring versus mixing). RAID 0 or not RAID? This would be helpful to know, since I've held off a full update to build a new config from scratch, aiming to reinstall everything on RAID 0 (with frequent backup capabilities set up to guard against RAID 0 failure). Maybe avoiding RAID 0 is the better option then?
eComposer | Newbie | Joined: 30 Dec 2016 | Location: LA | Status: Offline | Points: 26
I'm finding multiple technical analyses showing that RAID 0 should deliver considerable performance advantages over non-RAID, as outlined in the following publication:

Check out the whole article and the attached charts on that site; there seem to me to be distinct advantages to RAID 0 over a single equivalent SSD, unless I'm not understanding this correctly. Note: the I/O to the CPU, and in my case the external audio interface, will be a factor, not to mention RAM performance. But given we're just talking about the SSD equation, I'm not following why a single SSD would deliver better I/O than RAID 0 in the configs we're discussing for my scenario. Interested to see more on this, as this forum seems to be turning all that I've read and understood on its head. :)
eComposer | Newbie | Joined: 30 Dec 2016 | Location: LA | Status: Offline | Points: 26
Hmmm, this is one of several interesting conclusions drawn by Tweaktown:

While this isn't an exact test of the 960 Evo PCIe 3.0 x4 SSD, it's certainly an indication of potential performance, with RAID 0 outperforming a single drive. I'm still not clear on the other choke points for I/O, though, such as overall bus limitations, CPU capacity, RAM, and how the variety of audio software performs in the complex situations that real-time mixing throws up, in terms of performance needs and actual real-world outcomes. Still, I have a strong suspicion that combining Thunderbolt throughput (to a Thunderbolt audio interface) with fast M.2 PCIe RAID arrays and an overclocked CPU and RAM on the Z170 Extreme7+ is going to outperform most other PC-based approaches at this point in time, specifically for what I'm doing: mixing multiple audio channels via DAWs and all the processing associated with audio plugins (compressors, EQ, console emulations, amp simulations, etc.). It's just taking time to work this out, so any perspectives would be most welcome! :)
parsec | Moderator Group | Joined: 04 May 2015 | Location: USA | Status: Offline | Points: 4996
That's strange that the RAID 0 array can't be seen by the Win 10 installer. I went through the procedure again, and I don't think I left anything out. The RAID 0 array won't be shown as a drive until after the IRST "F6" driver is installed. Can you do a "Refresh" in the main Custom installation screen, as in look for drives again? I did not mention that you must format the RAID 0 array after the F6 driver is loaded. All you do is click the New button, and the installer will format the RAID array correctly, as GPT with all required partitions. Also, do not remove the USB drive with the F6 driver from the PC until the Windows installation is complete.

It's been a while since I've used a RAID 0 array of 950 Pros, but I know it works. The 960 should be no different from a 950 in RAID for a Windows installation. When the Z170 Extreme7+ board was first released, we had a thread in this forum about creating and using 950 Pros in RAID 0 arrays; several forum members and I worked out the details ourselves. One guy had three 950 Pros in RAID 0. At that time, with the very first IRST driver (14.0...) that supported NVMe SSDs in RAID on Z170 boards, the difference between the benchmark results of two vs three SSDs was minimal, at best 500 MB/s faster for sequential read speed. That guy was disappointed, but we never could improve the results. Also, anything less than a 64K stripe size would produce terrible benchmark results with 950 Pros in RAID 0. At that time, with IRST version 14, we all agreed the 128K stripe size was the best. If that has changed with newer versions of IRST, great.

Personally, I always configure a full UEFI booting installation, meaning CSM set to Disabled. The only problem with that is your video source must be GOP compatible (GOP is a UEFI booting protocol). Older video cards (pre-NVIDIA 700 series) may not be GOP compatible without a VBIOS update; no idea about ATI/AMD video cards. EVGA 600 series cards needed a VBIOS update to be GOP compatible, but it worked.
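For anyone wondering why stripe size matters, here is a minimal sketch of the standard RAID 0 address mapping, with stripes dealt round-robin across the member drives. The 128 KiB stripe size and two-drive count are taken from the discussion above; the function name and layout details are illustrative, not Intel's actual implementation.

```python
# Sketch of RAID 0 address mapping: the array's address space is divided
# into fixed-size stripes dealt round-robin across the member drives.
# Stripe size and drive count follow the thread's settings (128 KiB, 2 drives).

STRIPE_SIZE = 128 * 1024  # 128 KiB
N_DRIVES = 2

def locate(offset: int) -> tuple[int, int]:
    """Map a byte offset in the array to (drive index, offset on that drive)."""
    stripe = offset // STRIPE_SIZE
    drive = stripe % N_DRIVES
    # Each drive holds every N_DRIVES-th stripe, packed contiguously.
    offset_on_drive = (stripe // N_DRIVES) * STRIPE_SIZE + offset % STRIPE_SIZE
    return drive, offset_on_drive

# A 1 MiB sequential read touches 8 stripes, alternating between both drives;
# this is why large transfers can approach twice the single-drive speed,
# while any read smaller than one stripe lands on a single drive and gains
# nothing from the array.
touched = {locate(i * STRIPE_SIZE)[0] for i in range(8)}
print(touched)  # both drives participate
```

This also hints at why very small stripe sizes hurt: every request gets chopped into many tiny per-drive transfers, adding overhead without adding parallelism for typical I/O sizes.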
Intel integrated graphics has been GOP compatible since Sandy Bridge.

Regarding the articles about how fast and great RAID 0 arrays of NVMe SSDs are: by all means, be my guest and use them! The only way to really know what they are like is to have and use them. I'll make one comment about the articles, the graphs in particular. Yes, you can see the clear difference in the benchmark tests with the RAID 0 arrays, with their multi-hundreds of thousands of IOPS. But check the axis of the graphs labeled Queue Depth (QD). QD is the number of outstanding IO requests waiting to be serviced by the drive or RAID array. NVMe SSDs have even better high-QD performance, and better 4K random read performance, than SATA SSDs. It is well known that in home PC usage, since even a single SSD is so fast, the number of outstanding IO requests is rarely, if ever, as high as four (a QD of 4, or QD = 4), and that statistic was gathered with SATA SSDs. Notice in the test graphs that the maximum is QD=32 for IOPS, and QD=16 for latency. Unless you are hosting a website on your PC, or running database queries against millions of records, you'll never be doing IO at even QD=4. In short, yes, the performance potential is there, but most people never use it. I can't predict what benefits you will get from your usage case, but do you think you will ever use the ~200,000 IOPS (IO operations per second) of a single NVMe SSD? Do we need the 400,000+ IOPS of the RAID 0 array?

I'm also very certain that a RAID 0 array of NVMe SSDs will not boot Windows faster than a single identical NVMe SSD; the same is true of SATA SSDs in RAID 0. From a cold start/boot, run Task Manager and click the Startup tab, then check the Last BIOS Time at the top right.
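The connection between queue depth and those headline IOPS figures follows from Little's Law (queue depth = IOPS x average latency). A short sketch, where the 100-microsecond per-request latency is an assumed round number for illustration, not a measured spec for any particular SSD:

```python
# Little's Law applied to SSD benchmarks:
#   queue_depth = IOPS * average_latency
# so the IOPS a drive can sustain scales with how many requests the
# workload keeps in flight. Latency here is an illustrative assumption.

def iops_at_queue_depth(queue_depth: int, avg_latency_s: float) -> float:
    """IOPS sustainable at a given queue depth and per-request latency."""
    return queue_depth / avg_latency_s

latency = 100e-6  # assume 100 microseconds per 4K random read

print(iops_at_queue_depth(1, latency))   # ~10,000 IOPS at QD=1
print(iops_at_queue_depth(32, latency))  # ~320,000 IOPS at QD=32
```

The six-figure IOPS numbers in review graphs only appear at the deep queues on the right edge of the chart; a desktop workload idling at QD=1 or QD=2 never generates enough outstanding requests to see them.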
eComposer | Newbie | Joined: 30 Dec 2016 | Location: LA | Status: Offline | Points: 26
Interesting. Startup speed is really not an issue for me; marginal improvements really don't matter. The key objective is seamless mixing with heavy sound production workloads. The killer there is that music is sequential, and basically failure depends on the "slowest ship", particularly in recording, although mixing around 100 separate tracks, each with multiple plugins, eats up CPU, RAM, and especially I/O from storage. (Examples: VU meter emulations, compressors, console emulations, EQ emulations, amp sims/samples, multiple applications like Kontakt, BFD 3, Cinesamples, CineStrings, and all the other sample apps, plus multiple channel bus setups and a whole cadre of master channel processors, all running at the same time.)

Hard drives were the Achilles heel of mixing large pieces in real time; SSDs helped ameliorate this, and the capability to utilize Thunderbolt in the Intel space helped a lot too. Hence it may well be that 400,000+ IOPS will contribute to avoiding dropouts, clicks, distortion, and specific plugins failing or crashing... The challenge is that ALL the key areas (CPU, RAM, storage, and audio interface I/O) cannot fail at any point with a delay in real time. That kills the flow, and can ruin live performances that are being recorded. As you can imagine, the variables are very wide-ranging, so my aim is to minimize latency and optimize I/O in all aspects, aiming for seamless playback/mixing/mastering/recording. Hope that makes sense.

FYI: still no joy with the RAID 0. Something is still impeding it, and I honestly don't know what it is yet... Greatly appreciate your input; good to know the history and get a feel for what others have discovered so far. I do want to try the RAID, then load it with my most demanding projects; that will tell us a lot about performance.
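It's worth putting numbers on the streaming side of this workload. A back-of-the-envelope sketch, where the track count, sample rate, and bit depth are assumptions for illustration (100 uncompressed mono stems at 24-bit / 96 kHz), not a description of any particular session:

```python
# Rough estimate of the sequential bandwidth 100 audio tracks actually need.
# All parameters are illustrative assumptions: 24-bit / 96 kHz mono stems.

def track_bandwidth_bytes(sample_rate_hz: int, bit_depth: int) -> int:
    """Streaming rate of one uncompressed mono track, in bytes per second."""
    return sample_rate_hz * bit_depth // 8

tracks = 100
per_track = track_bandwidth_bytes(96_000, 24)   # bytes/s for one track
total_mb_s = tracks * per_track / 1_000_000     # MB/s for the whole mix

print(per_track)   # 288000
print(total_mb_s)  # 28.8
```

Under these assumptions the entire 100-track mix streams at under 30 MB/s, a small fraction of even a single NVMe drive's sequential throughput. That supports the point made earlier in the thread: for this workload the likelier risk is a latency spike or a stall somewhere in the chain, not raw storage bandwidth.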
Postscript: personally, I think this is a very exciting time, where you can effectively have sound production powers that not long ago would have cost hundreds of thousands, if not millions, of dollars in physical hardware, now achievable for a tiny fraction of those costs. :) I'm sure this translates to other areas at a similar level too. ASRock have done some amazing development in this space, and I'm glad I've been using their motherboards for many years now!
parsec | Moderator Group | Joined: 04 May 2015 | Location: USA | Status: Offline | Points: 4996
So you can create the RAID 0 array with the Intel utility in the UEFI/BIOS, and you can see it in the Intel utility if you restart the PC, but nothing at all in the Windows installation process? I forget, are you using Win 7?
I can't believe Samsung changed something with the 960 series... so I get to buy two 960s to check this?!

Another reason I am (or was) less than thrilled about RAID 0 arrays of NVMe SSDs is that they tended to be... delicate, as I termed it. What I mean is, if you had one created with Windows installed on it, and you simply cleared the UEFI/BIOS, the RAID 0 array would fail on the next restart of the PC. That would never happen with SATA drive RAID arrays. It seems to have been fixed with an update to the UEFI, by leaving the PCIe Remapping options alone during a UEFI/BIOS clear.

If you have a Windows installation on another drive, try creating the RAID 0 array of 960s in the UEFI, and then boot from the other OS. In Windows, check whether Disk Management sees the RAID 0 array and lets you format it.

Don't forget the SATA ports are shared with the M.2 slots. If you have SATA drives connected to the shared ports, the M.2 SSDs won't work correctly.
eComposer | Newbie | Joined: 30 Dec 2016 | Location: LA | Status: Offline | Points: 26
Hello parsec. Good news: finally got the RAID 0 to work. Essentially I stripped the whole PC to components and rebuilt it (except the CPU), also cleared the CMOS, and connected only the 960 Evo SSDs. Then I followed the same procedure you outlined, which I'd done for the past couple of days over and over and over... LOL. No idea what caused the problems, but hopefully taking everything back to square one fixed whatever it was.

Re the operating system, I'm running Windows 10 64-bit. Re the SATA ports and the M.2 slots, I'm fully aware of this and have been careful not to "double book"; my build plan took it into account. Incidentally, I use M.2 slots 1 and 3, given the GPU covers slot 2, and thought it would make sense to minimize heat, given both drives may get hot if I push them hard enough. Heat and noise are major challenges for sound recording. :)

Anyway, thanks for the input. Having all the steps laid out by you was helpful, in that no one else commented about clearing the CMOS, which makes a lot of sense. ASRock should surely provide this wisdom in the documentation; it would have been so much easier having all the steps you laid out provided as a matter of course. Although I suppose it's only the enthusiasts and "professionals" wanting this kind of firepower who would be looking for this info. Still, I'm guessing that as time moves forward, this kind of approach will become increasingly popular once users wake up to the benefits, and as the price point for these kinds of SSDs becomes more accessible.
parsec | Moderator Group | Joined: 04 May 2015 | Location: USA | Status: Offline | Points: 4996
I'm glad you have it working now. I have a feeling either clearing the UEFI/BIOS and then doing the UEFI option configuration, or setting the options and applying them via Save and Exit, made the difference.

Regarding documenting the procedure for creating the RAID 0 array of NVMe SSDs, did you happen to see this: http://asrock.pc.cdn.bitgravity.com/Manual/RAID/X99%20Taichi/English.pdf

While it might not have every last detail about the Windows installation, it has all of the RAID 0 configuration information. IMO, Samsung and Microsoft, the manufacturers of the SSD and the OS software, should provide information about installing Windows on their products in various scenarios. I don't think it is fair to place all the responsibility for that on the motherboard manufacturer.