Forum Home > Technical Support > AMD Motherboards

RAID 0 NVME Read Performance Same as Non-RAID

Caxton (Newbie)
Posted: 26 Apr 2025 at 8:26am
The read performance issue, i.e. reads topping out at single-drive speed, has been reproduced across every slot combination tested: M2_1 + M2_2, M2_1 + M2_3, M2_2 + M2_3, and M2_3 + M2_4. Write performance, by contrast, only meets expectations when using M2_1 (CPU bus) + M2_2 (chipset bus); in every other combination, writes also drop to single-drive speed.

To prevent SSD throttling, active cooling (fan/heatsink) was used on all slots except M2_3 + M2_4, which sit too close together to fit active coolers; ASRock's passive motherboard heatsink was used in that instance. The included thermistors were used in every test scenario to monitor temperatures in real time, confirming all tests ran well within design specs, i.e. without throttling. The room was also held at a constant 68 degrees Fahrenheit throughout testing.

The failure of read performance to scale in any RAID 0 combination indicates that the controller is not effectively parallelizing read operations across the two SSDs, regardless of bus (CPU or chipset) or slot (M2_1, M2_2, and so on).

All hardware/software are new, built exclusively for this rig.

The BIOS is running version 3.20.
The RAID 0 array comprises two Gen 4 NVMe SSDs, set up in the BIOS.
Windows 11 was installed using Microsoft's public Windows 11 Installation Media, following the instructions included with AMD RAID Driver v9.3.3.00117, i.e. loading rcbottom.inf, rcraid.inf, and rccfg.inf during advanced setup so the single 8TB volume is recognized during first-time setup. Windows OOBE completed without issue.

Post OOBE the following core apps/drivers were installed (in order):
AMD Chipset Drivers Revision Number 7.02.13.148
AMD RAID Installer Revision Number 6.10.09.200
AMD Software: Adrenalin Edition Adrenalin 25.3.1
ASRock Motherboard Utility ver:4.1.12
MediaTek Bluetooth driver ver:1.1037.0.424
MediaTek Wireless Lan driver ver:5.3.0.1825
Nahimic3 utility ver:1.10.2_APO4
Realtek high definition audio driver ver:6.4.0.2395_UAD_WHQL
Realtek Lan driver ver:10.071.0425.2024
APP Shop ver:2.0.0.6
ASRock Polychrome RGB ver:2.0.191

Hardware:
Motherboard: ASRock X870E Taichi
CPU: AMD Ryzen 7 9800X3D
CPU Cooler: Cooler Master 360 Atmos Closed-Loop AIO Liquid Cooler
Memory: TeamGroup T-Force Delta DDR5 32GB Kit (2x16GB) 7200MHz x2 = 64GB
Power Supply: Corsair HX1200i
Graphics: Using the CPU's iGPU for testing; no add-in GPU for now.
Storage: Samsung 990 Pro 4TB NVMe Gen 4 SSD x2 (8TB)

Here are example results from three CrystalDiskMark v8.0 test runs highlighting the issue:

Read Seq (MB/s, 1MiB) | Write Seq (MB/s, 1MiB) | Bus Config                            | Comments                  | Test Config
7143.922              | 7001.150               | M2_2 + M2_3, PCIe Gen 4x4             | RAID 0: chipset-only bus  | NVMe settings/profile
7342.190              | 13748.016              | M2_1 + M2_2, PCIe Gen 5x4 > Gen 4x4   | RAID 0: CPU + chipset bus | NVMe settings/profile
7148.922              | 6921.065               | M2_3 + M2_4, PCIe Gen 4x4             | RAID 0: chipset-only bus  | NVMe settings/profile
Caxton (Newbie)
Posted: 26 Apr 2025 at 8:28am
Sorry, the BBCode used to format the table did not render. Here is the data in CSV format for clarity; the text can be saved as a *.csv file and opened in your favorite spreadsheet app.

Read Seq MB/s - 1MiB ,Write Seq MB/s - 1MiB ,Bus Config,Comments,Test Config
7143.922,7001.15,M2_2 and M2_3 and PCIe Gen 4x4,RAID 0: Chipset only bus,NVMe settings/profile
7342.19,13748.016,M2_1 and M2_2 and PCIe Gen 5x4 > Gen 4x4,RAID 0: CPU and Chipset bus,NVMe settings/profile
7148.922,6921.065,M2_3 and M2_4 and PCIe Gen 4x4,RAID 0: Chipset only bus,NVMe settings/profile
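As a quick sanity check on the CSV above, the sequential-read column can be averaged with awk (the filename raid_results.csv is just an example name, a sketch rather than part of my actual procedure):

```shell
# Save the CSV posted above, then average column 1 (sequential read) with awk.
cat > raid_results.csv <<'EOF'
Read Seq MB/s - 1MiB ,Write Seq MB/s - 1MiB ,Bus Config,Comments,Test Config
7143.922,7001.15,M2_2 and M2_3 and PCIe Gen 4x4,RAID 0: Chipset only bus,NVMe settings/profile
7342.19,13748.016,M2_1 and M2_2 and PCIe Gen 5x4 > Gen 4x4,RAID 0: CPU and Chipset bus,NVMe settings/profile
7148.922,6921.065,M2_3 and M2_4 and PCIe Gen 4x4,RAID 0: Chipset only bus,NVMe settings/profile
EOF
# NR > 1 skips the header row; $1 is the read column.
awk -F, 'NR > 1 { sum += $1; n++ } END { printf "average read: %.3f MB/s\n", sum / n }' raid_results.csv
# prints: average read: 7211.678 MB/s
```

That average sits squarely at single-drive speed, which is the problem in a nutshell.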
M440 (Senior Member)
Posted: 26 Apr 2025 at 12:01pm
Is the NVMe drives' firmware up to date?

Is this BIOS/UEFI RAID?

Anyway, I would suspect:
- OS/drivers
- NVMe drive firmware
- motherboard/BIOS/hardware

asrock b650m/hdv.m2, ryzen 7700x@85watt
NDRE28 (Senior Member, Romania)
Posted: 26 Apr 2025 at 1:19pm
Hi!

I don't know why this is so on your system.

I am running 3x Samsung 970 Pro 1TB SSDs in RAID-0, on an ASRock X670E Taichi motherboard, with the same BIOS version & RAID driver as you.

In my case, the sequential speeds are more than double, in benchmarks!

In CrystalDiskMark: Seq.Read=9000+MB/s & Seq.Write=8000+MB/s.
The Random Q1T1 Write=400+MB/s & Random Q1T1 Read=60+MB/s.

Unfortunately, Random Read & Write at Q1T1 is what matters the most!
(That's why I'll go with a DC P5800X Optane drive as my boot drive, as soon as it arrives.)
Caxton (Newbie)
Posted: Yesterday at 1:35am
I will give that a go and reply with results. Great suggestion!
Caxton (Newbie)
Posted: Yesterday at 1:38am
Good point on the random R/W values. It seems this system should behave the same way, judging by the comparison to another similar setup. Thank you for helping validate the test scenario.
Caxton (Newbie)
Posted: Yesterday at 7:58am
Thank you all for helping debug this issue.

It appears the initial suggestion was AI-generated, based on the style and format of the original post that referenced "Can I use a Fedora Live CD to mount...". While it was helpful to some extent, the proposed solution ultimately did not work as expected due to several limitations: Fedora Live CD/USB environments come in various versions, and commands can behave differently depending on the specific combination of version and environment used.

For example, the following command failed when executed in the Fedora 42 Live environment (booted via USB media):

sudo dnf install dmraid

The failure message was:
"Failed to resolve the transaction" and "No match for argument: dmraid"

To provide additional context: Fedora 42's Live CD/USB environment does not include `dmraid` by default, nor does it allow installation via the above command. A subsequent search within the booted environment, executed as follows:

sudo dnf search dmraid
...returned:
"no matches found."

The search was performed after the install command failed, simply to confirm whether the `dmraid` package might already exist in the environment, but it does not.

So, since this approach appears to be a dead end (at least thus far), are there any alternative methods to non-destructively test RAID 0 read performance?
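One candidate I'm considering, sketched below with placeholder names rather than as a definitive procedure: a read-only sequential pass with plain dd, which writes nothing to the device being measured. Note that a generic live environment may not assemble the AMD RAID (rcraid) volume at all, in which case only the individual member NVMe devices show up in lsblk, which still allows per-drive read testing.

```shell
# Hedged sketch: non-destructive sequential read test with dd. The device path
# is a placeholder; substitute whatever lsblk reports. On a real block device,
# add iflag=direct to bypass the page cache (omitted here so this sketch also
# runs against an ordinary file).
target=/tmp/raid_read_demo        # stand-in for the real device, e.g. /dev/nvme0n1
dd if=/dev/zero of="$target" bs=1M count=64 status=none   # creates the demo target only
dd if="$target" of=/dev/null bs=1M status=none && echo "sequential read completed"
rm -f "$target"
```

With `status=progress` instead of `status=none`, dd reports throughput directly, which would give a rough cross-check against the Windows numbers.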

For reference, when testing in the native environment (Windows 11), the issue is consistently with read commands, irrespective of hardware connection configuration. Write commands, however, only reach RAID 0 performance levels when one SSD is connected directly to the CPU bus and the other to the chipset bus. These findings have been consistent across all tested configurations.
NDRE28 (Senior Member, Romania)
Posted: Yesterday at 10:16am
I am running my RAID-0 setup on an AMD AM5 platform.
One of my SSDs is connected directly to the CPU, while the other 2 are connected to the chipset.
OS: Windows 11 Enterprise v24H2.

In my case, everything runs as expected, in terms of performance.
RAID-0 did improve the transfer speeds.
Caxton (Newbie)
Posted: 6 hours 9 minutes ago at 8:09am
Originally posted by NDRE28:

I am running my RAID-0 setup on an AMD AM5 platform.
One of my SSDs is connected directly to the CPU, while the other 2 are connected to the chipset.
OS: Windows 11 Enterprise v24H2.

In my case, everything runs as expected, in terms of performance.
RAID-0 did improve the transfer speeds.


Regarding RAID 0 performance, are you achieving close to 2x read/write speeds compared to a single SSD? This presumes at least one array is a 2x NVMe SSD RAID 0 config. Ideally, performance should approach the theoretical maximum achievable across the bus and scale as additional, identical SSDs are added to a RAID 0 array.

I'm particularly interested in whether you're obtaining optimal results, especially in comparison to my own setup. Interestingly, I previously purchased (and later returned) an 8TB WD NVMe drive that exhibited slower performance than a single Samsung NVMe. Theoretically, it should have been significantly slower than two Samsung NVMe drives configured in RAID 0, yet the actual performance difference was negligible.

At present, the results seem inconclusive. I'd love to hear your insights on this.

My primary goal is to build a RAID 0 array using two Samsung 990 Pro 4TB NVMe SSDs, resulting in a single 8TB volume optimized for read/write performance in line with the hardware capabilities, specifically nearly doubling bandwidth for reads and writes. Theoretically, this setup should achieve read speeds of approximately 14,900 MB/s and write speeds of about 13,800 MB/s, on par with a single non-RAID Gen 5x4 NVMe drive.
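For reference, the arithmetic behind those targets, sketched with shell arithmetic. The single-drive inputs are the 990 Pro 4TB's commonly cited rated sequential figures (my assumption here), and ideal two-way striping at best doubles them:

```shell
# Back-of-envelope RAID 0 ceiling: ideal two-way striping doubles the
# single-drive sequential rate. Rated 990 Pro 4TB figures are assumed inputs.
single_read=7450     # MB/s, rated sequential read (assumed)
single_write=6900    # MB/s, rated sequential write (assumed)
echo "theoretical RAID 0 read:  $((2 * single_read)) MB/s"
echo "theoretical RAID 0 write: $((2 * single_write)) MB/s"
# prints 14900 MB/s read, 13800 MB/s write
```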

However, RAID 0 read performance has failed to benefit from parallelization, with read speeds mimicking single-drive performance as measured by both Windows 11's winsat tool and CrystalDiskMark. For instance, across three CrystalDiskMark test runs, each consisting of five repetitions, the average read performance was 7,211 MB/s, irrespective of slot configuration.

Interestingly, write performance closely aligned with real-world RAID 0 expectations, reaching approximately 13,748 MB/s when utilizing the M2_1 and M2_2 slots (PCIe Gen 5x4 and Gen 4x4, respectively). In contrast, all other connection and bus combinations delivered suboptimal write performance, averaging around 6,961 MB/s; this includes all tested physical configurations, such as M2_2 with M2_3 (PCIe Gen 4x4) and M2_3 with M2_4 (PCIe Gen 4x4).
NDRE28 (Senior Member, Romania)
Posted: 3 hours 44 minutes ago at 10:34am
Originally posted by Caxton:

Regarding RAID 0 performance, are you achieving close to 2x read/write speeds compared to a single SSD? This presumes at least one array is in a 2x NVMe/SSD RAID 0 config.


Hi!
I am achieving more than double the speed of a single NVMe drive.
However, please keep in mind that I am using 3 identical drives: Samsung 970 Pro 1TB SSDs (PCIe 3.0 x4).
Obtaining double the speed of PCIe 4.0 drives, through RAID-0, is harder.


Originally posted by Caxton:

My primary goal is to build a RAID 0 array using two Samsung 990 Pro 4TB NVMe SSDs, resulting in a single 8TB volume optimized for read/write performance in line with the hardware capabilities, specifically nearly doubling bandwidth for reads and writes.


In real world usage, one Samsung 9100 Pro 8TB SSD (PCIe 5.0 x4) will be faster than 2x Samsung 990 Pro drives in RAID-0.
That's because RAID setups add some latency, though the sequential transfer speeds go up.
[Please keep in mind that the 8TB version of 9100 Pro hasn't been released yet].


I went to the moon & back with this storage performance thing.
Finally, I came to the conclusion that sequential speeds don't matter so much.
The biggest limitation of these SSDs is their NAND (3-bit per Cell).
Going with a Gen4 Intel Optane, like the DC P5800X, or a Gen4 SLC (1-bit/Cell) drive, like the Solidigm D7-P5810, will lead to better results.

Forum Software by Web Wiz Forums® version 12.04
Copyright ©2001-2021 Web Wiz Ltd.
