
ASRock motherboard destroys Linux software RAID

nh2 (Newbie, joined 15 Nov 2018)
Posted: 15 Nov 2018 at 6:35am
Hello,

I have reason to suspect that ASRock motherboards accidentally wipe out Linux RAID metadata. Details below.

I'm a programmer and just upgraded from a Gigabyte H97-HD3 to the ASRock Z97 Extreme6 motherboard.

I use software RAID 1 on Linux with mdadm, on whole-disk devices (no partitions).
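
For reference, an array like this on whole-disk devices is typically created as follows (device names are just examples):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb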

After I installed the new motherboard and rebooted, I noticed that my software RAID was broken: the superblocks (RAID metadata near the beginning of the disk) of all my RAID disks had been wiped out with zero bytes.

In particular, the disk area between hexadecimal offset 0x1000 (inclusive) and 0x4000 (exclusive) is overwritten with zero bytes.
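
This range matters because mdadm's default metadata version 1.2 stores the superblock 4 KiB (0x1000) from the start of the device. Assuming that metadata version, the damage can be confirmed with hexdump (/dev/sdc as in the gdisk output below):

hexdump -C -s 0x1000 -n 0x3000 /dev/sdc

After an affected boot, this should print nothing but zero bytes.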

This happens on every boot of the machine. I can reproduce it reliably.

I am very sure that it is the motherboard UEFI that performs this zeroing during bootup, before control is passed to the bootloader:

With the previous mainboard, the zeroing does not occur. When a disk is not attached during boot but is attached while Linux is already running, the zeroing does not occur. When I boot into the UEFI Setup utility with a disk attached, then immediately remove it and attach it to another PC for inspection, the zeroing has already occurred.

It is important to know that a disk configured to be part of an mdadm RAID array can look like a disk with a broken GPT. In particular, running `gdisk -l` on a functioning mdadm RAID array disk produces output like this:


GPT fdisk (gdisk) version 1.0.1

Caution! After loading partitions, the CRC doesn't check out!
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!

Warning! One or more CRCs don't match. You should repair the disk!

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: damaged

****************************************************************************
Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
verification and recovery are STRONGLY recommended.
****************************************************************************
Disk /dev/sdc: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 5D940099-EC12-42B0-9DF9-CDAE167EE6EE
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 7814037101 sectors (3.6 TiB)

Number Start (sector)    End (sector) Size       Code Name


Note that this is NOT an error: we don't expect a GPT partition table on the disk, because it is used as a whole-disk device in an mdadm RAID, which has nothing to do with GPT or partitioning.
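
The reason gdisk finds a backup table to load at all is likely that the backup GPT lives in the last sectors of the disk, where creating the mdadm array did not overwrite it. A stale backup GPT header would still carry the "EFI PART" signature in the disk's last sector, which can be checked like this (sector count taken from the gdisk output above):

dd if=/dev/sdc bs=512 skip=$((7814037168 - 1)) count=1 2>/dev/null | hexdump -C | head -n 4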

However, I suspect that the device looking like it has a damaged GPT triggers some undocumented "recovery" features in ASRock mainboards.

I suspect this in particular because, after booting through the ASRock UEFI, the disk suddenly has a "correct" GPT; gdisk reports:


GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 5D940099-EC12-42B0-9DF9-CDAE167EE6EE
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 7814037101 sectors (3.6 TiB)

Number Start (sector)    End (sector) Size       Code Name


I suspect that the following happens:

The motherboard's UEFI finds something on the disk that looks like a damaged GPT, and it "fixes" the GPT, not knowing that it is in fact destroying valuable data. It does this even before the UEFI Setup utility is entered (perhaps so that the UEFI GUI can then provide features like displaying disk contents).
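
For completeness, the missing superblock can be confirmed directly; on a wiped disk, mdadm reports something like this (exact wording may vary between versions):

mdadm --examine /dev/sdc
mdadm: No md superblock detected on /dev/sdc.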

Can you confirm or deny whether the ASRock Z97 Extreme6 motherboard firmware has such a feature to modify disk contents to "repair" broken-looking GPT disks?

If yes, can you confirm which other ASRock motherboards have this feature, and whether it is possible to disable this behaviour?

Thank you.
hrkrx (Newbie, joined 04 May 2019)
Posted: 04 May 2019 at 2:32am
I also have an ASRock motherboard and can confirm this issue. In addition, here is a link to a temporary workaround:

https://forum.openmediavault.org/index.php/Thread/11625-RAID5-Missing-superblocks-after-restart/
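
In case the link goes stale: as far as I understand it (please verify against the linked thread and your own setup), the workaround amounts to re-creating the array in place with the exact original parameters, which rewrites the zeroed superblocks without touching the data:

# DANGEROUS: only valid if the data itself is intact and the original
# level, device order and metadata version are reused exactly.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.2 --assume-clean /dev/sda /dev/sdb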

oneiroo (Newbie, joined 27 Oct 2019)
Posted: 27 Oct 2019 at 10:16pm
I have an ASRock X570 Steel Legend and probably the same issue. I configured RAID 1 with 2 x 4 TB drives (HGST Deskstar and WD Ultrastar), and every reboot destroys my array; I also use whole disks for the array.

I will switch to partition-based RAID 1 soon to confirm, but I lost a lot of time finding this information.
oneiroo (Newbie, joined 27 Oct 2019)
Posted: 28 Oct 2019 at 2:14pm
Originally posted by oneiroo:

I have an ASRock X570 Steel Legend and probably the same issue. I configured RAID 1 with 2 x 4 TB drives (HGST Deskstar and WD Ultrastar), and every reboot destroys my array; I also use whole disks for the array.

I will switch to partition-based RAID 1 soon to confirm, but I lost a lot of time finding this information.


Can confirm: after switching the RAID 1 from whole disks to partitions, the issue is resolved.
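
For anyone else making the switch, the partition-based setup looks roughly like this (example device names; creating the new array destroys the old contents, so back up first):

sgdisk --new=1:0:0 --typecode=1:FD00 /dev/sda   # one Linux RAID partition spanning the disk
sgdisk --new=1:0:0 --typecode=1:FD00 /dev/sdb
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

With a valid GPT on each disk, the firmware apparently finds nothing to "repair" and leaves the superblocks alone.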
ffasm (Newbie, joined 29 May 2020)
Posted: 29 May 2020 at 6:21am
Can confirm.

ASRock X570 Pro4
I moved disks over from my old PC, and it destroyed 2 of the 6 disks in my RAID 6.

My heart is still pounding; I'll be having nightmares about this. Thank you.
ffasm (Newbie, joined 29 May 2020)
Posted: 29 May 2020 at 6:50am
I think the cause is that a backup copy of the GPT is stored at the end of the disk, which for some reason is not overwritten when the RAID is created.
Solution: run gdisk, type 'x' for the expert menu, then 'z' to wipe out the GPT.
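
The same wipe can be done non-interactively with sgdisk. Note (worth double-checking): do this before creating the array, because zapping an existing member could overwrite the area holding the superblock itself.

sgdisk --zap-all /dev/sdX   # wipes the MBR plus the primary and backup GPT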