Thanks for your time and sorry about the confusing statements.
I hope I'm not going to make it more confusing by trying to explain, but here goes. Sorry about the length.
Drives are tested for SMART, surface, and temperature issues before reuse. The drives used for the RAID array on the 970M Pro3 build originally came from an array created on the same PC, but with the old Asus-based HP motherboard (an old custom LE series, I believe). Surprisingly, the 970M Pro3 actually recognized the old array when it was first connected and booted up with no problem; the OS just complained about missing drivers. However, since I was creating a clean image with a different OS, I used the AMD RAID controller on the 970M Pro3 to create and full-format a new 970M Pro3-based array.

In my post, that new array is referred to as both the 'current' and the 'original' RAID array for the project build. It is the array that appears to be causing the system to halt at the UEFI screen, after a 4-day runtime without issues that included multiple user-initiated shutdowns and reboots. During the post-CMOS-reset testing done yesterday, this array is called the 'current array' before the additional test drive was added, and the 'original array' afterwards, when it was used along with the additional test drive, to distinguish between the two test array sets. Not sure how to word it any better: same array, with and without an additional non-member drive.
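For reference, this is roughly how I run the pre-reuse checks. It's a minimal sketch, assuming smartmontools (smartctl) is available from a Linux live environment; the device names and the attributes I filter for are placeholders, not my exact script.

import subprocess

# Hypothetical device names; substitute whatever the drives enumerate as.
DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

for dev in DRIVES:
    # Overall SMART health verdict (PASSED/FAILED).
    health = subprocess.run(["smartctl", "-H", dev],
                            capture_output=True, text=True)
    print(dev, "health:")
    print(health.stdout)

    # Pull temperature and reallocated-sector counts from the attribute table.
    attrs = subprocess.run(["smartctl", "-A", dev],
                           capture_output=True, text=True)
    for line in attrs.stdout.splitlines():
        if "Temperature" in line or "Reallocated" in line:
            print(dev, line.strip())

The surface check is just a long SMART self-test started separately (smartctl -t long on each device) and reviewed once it finishes.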
Recap of tests performed after the successful CMOS reset:

Using the current/original array member set:
- First test: all drives except the DVDR disconnected. System would boot to the DVD.
- Second test: reconnected the original RAID array, verified functional by the RAID utility. System halted before UEFI.
- Third set of tests: disconnected each member of the array one at a time and verified in the RAID utility that the array degraded. System halted before UEFI.
- Fourth set of tests: tried various combinations using alternate SATA ports, with the same results as the second and third test sets.

Using the original array member set plus a random additional drive:
- Fifth test: added an additional drive to a random SATA port. The original array member drives were no longer seen in the RAID utility (interesting, didn't expect that). System halted before UEFI.
- Sixth test: disconnected the original array member drives one at a time until the system booted to the Windows recovery screen.
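One thing I still want to try before any rebuild, given that the members vanished from the RAID utility in the fifth test, is peeking at the on-disk RAID metadata from a Linux live USB. A rough sketch of that is below; it assumes dmraid and mdadm are installed, and whether either tool actually recognizes the AMD controller's metadata format is an open question on my end.

import subprocess

def show(cmd):
    # Run a command and dump whatever it prints, just for eyeballing.
    print("$", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout or result.stderr)

# List any BIOS/fakeRAID sets and member disks that dmraid can find.
show(["dmraid", "-r"])
show(["dmraid", "-s"])

# mdadm can also dump metadata formats it understands (e.g. DDF/IMSM).
for dev in ["/dev/sda", "/dev/sdb"]:  # placeholder member device names
    show(["mdadm", "--examine", dev])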
I wish it were a simple RAID incompatibility between the old and new boards; that would make sense. But this array was created and formatted using the 970M Pro3's RAID utility and then failed in some strange way after 4 days of use, causing the system to halt on boot. I've never seen anything like this happen before. Usually you would see an array member drive going bad, but the desktop would still run (while complaining) until you replaced the bad drive. In the past the AMD controllers have always done well recovering from a degraded array.
It would be nice to know what happened and whether it could be avoided. Could corruption in the UEFI lead to this?
The next test is to re-create the array and see if the same thing happens, but I would very much like to save some, if not all, of the work that was done so I don't have to start from scratch.
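My plan before re-creating anything is to take raw images of each member drive to an external disk, so the data is still there to poke at afterwards. A minimal sketch, assuming a Linux live environment with the backup disk mounted at /mnt/backup (that path and the device names are just examples):

import shutil

MEMBERS = ["/dev/sda", "/dev/sdb"]   # hypothetical array member devices
CHUNK = 4 * 1024 * 1024              # copy in 4 MiB chunks

for i, dev in enumerate(MEMBERS):
    target = f"/mnt/backup/member{i}.img"
    # Raw byte-for-byte copy of the whole device to an image file.
    with open(dev, "rb") as src, open(target, "wb") as dst:
        shutil.copyfileobj(src, dst, length=CHUNK)
    print(f"Imaged {dev} -> {target}")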