
Windows 11 24H2 now being pushed onto my system

Skybuck (Senior Member)
Joined: 18 Apr 2023
Posted: 20 Jul 2025 at 6:06am
Delayed it for now... much dreaded...

Anybody running Windows 11 24H2 on an ASRock B650E Steel Legend WiFi motherboard?

How has it been so far? Anything strange to report? ;)

I would also love to hear from any Ryzen users ;):

AMD Ryzen 9 7950X3D or Ryzen 7 7800X3D users...

Skybuck (Senior Member)
Posted: Yesterday at 3:37am
Today, 22/07/2025, here is how it went:

1. Windows 11 24H2 tried to install itself onto the system.
2. I noticed it used one of the hard disks, the one hosting the temp folder, which slowed it down significantly as it was probably extracting files. I have given Microsoft the advice to use a RAM disk instead of the temp folder. (The update took a few minutes, not too bad.)
3. After installing it said 100% complete and the system rebooted.
4. On reboot the screen remained black.
5. I tried powering the monitor and receiver off and on; it did not help, the screen remained black.
6. I decided to shut down the computer.
7. After the shutdown I noticed the Prospect 700R indicated on its display: unplug SATA cable. I know from experience this indicates some strange power issue, maybe residual power in the cables, the motherboard or the BIOS.
8. After unplugging the power cables this power issue went away.
9. Unfortunately this boot failure led Microsoft Windows 11 to believe there was a software problem with Windows 11, so it rolled back the Windows 11 update very fast. It showed: "Undoing changes made to your PC".
10. After this rollback I could log in again.
11. I immediately went to Windows Update to try again. I wanted to re-download Windows 11 24H2, but this was unsuccessful because the cable modem was still rebooting from the power cycle. The Windows 11 24H2 update download seemed to hang.
12. So I decided to pause the update and/or reboot the computer. Both were done, more or less.
13. After the reboot, Windows Update offered a different update:

Here is my short log of today:
"
Windows 11, version 24h2

Failed to install on 22/07/2025 - 0xc1900101

Trying again for a second time...

It didn't try again; instead it's now trying to install:

2025-7 Cumulative Update Preview for Windows 11 Version 23H2 for x64-based Systems (KB5062663)
(^ I clicked on it to allow it to install)
"

This update installed to 100% quickly.

14. I complained about this on Microsoft Feedback, stating that I believe my Windows 11 is now unnecessarily sidetracked onto this update instead. Maybe in the coming month the Windows 11 24H2 update will be re-offered.

15. I also tried to run HWInfo64. It hangs while trying to read the memory configuration. This was already an issue before these updates.

I am not sure why this is, but it most likely has something to do with the SMBus/I2C bus or something... I can think of two causes:

15.1 A bad AMD chipset driver installation.
15.2 Maybe a newer ASRock BIOS is needed than the current 1.28 version.

However, I am scared of installing any new BIOS version because it could brick the system.

Not being able to run HWInfo64 is a major heat safety issue/risk; I can't check temperatures.

Anyway, before attempting to install a new BIOS version, I think it's better to try installing Windows 11 (whatever version) to an external disk first, to test whether these issues re-appear. If so, then maybe a BIOS update is needed; otherwise no way.

Bye for now,
Skybuck.






Xaltar (Moderator)
Joined: 16 May 2015, Location: Europe
Posted: 16 hours 30 minutes ago at 4:32pm
Hi Skybuck, thanks for the detailed info. I would start with updating your AMD
all-in-one drivers. I have similar issues every time I grab a Windows update; I
almost always have to update my all-in-one drivers afterwards or end up with odd
symptoms, like HWInfo64 hanging up in your case.
Skybuck (Senior Member)
Posted: 7 hours 55 minutes ago at 1:07am
Hi,

Short version: Good news: the Dominator RAM set/LEDs are detected again and working. I've been looking forward to this for some time. I just got them working again; not sure if it will last. Assume it does, and if not I will report back! ;)

Thanks for your reply. I have not yet uninstalled HWInfo64, but I might try to do so in the future; I have also not run it again yet.

What I did do today, on 23 July 2025, is indeed a partial update of drivers.

At least the most important drivers were re-downloaded as new versions, old ones removed, with reboots and re-installs in between. The only thing I did not re-install was ASUS Armoury Crate, because it's a pain to re-install, but I did update its individual components at the end as well. Finally I did a computer reset and went into the BIOS.

I chose "discard changes"; somehow this restored the Dominator RAM set detection in the iCUE software.

So I suspect maybe some kind of BIOS setting was messed up... and maybe it lingered on somehow; the computer reset button plus going into the BIOS and discarding changes might have helped. However, I also suspect the software updates have something to do with it, especially the NVIDIA graphics drivers.

Anyway, as reported above: the Corsair Dominator RAM/LED set is detected again. It required some reboots. First it was detected at some point, then Corsair iCUE installed something for the DRAM set and the GREEN lights came back on, instead of the red falling "Christmas lights", though I still couldn't control it yet. After going back into the BIOS, more reboots, etc., it finally came back fully, especially after installing the NVIDIA plugin and MSI plugin in iCUE; before that I had also installed the ASUS plugin and manually installed the Aura Sync plugin.


Here is the list of currently installed driver and software versions:

1. AMD Chipset(v7.03.21.2116).zip

This is an important one; all of its components were installed cleanly, which is important.

2. NVIDIA Graphics Driver (brand new):

577.00-desktop-win10-win11-64bit-international-dch-whql.exe

I would actually start with these two; updating them will probably fix issues on many people's systems.

3. MSI Center (do a full removal first, then re-install; note that it has changed: for the fan settings to be applied you must install some kind of cooling/fan center component within it):

MSI-Center-2.0.56.0.zip

4. iCUE:

Main installer/app: version 5.31.112
Plugin: CorsairPluginForAuraSync-2.2.106.zip
(Search for Aura Sync on their website.)

5. Armoury Crate shows these versions:

Armoury Crate: 6.2.11.0
Aura Service: 3.0.8.52
Game SDK Service: 1.0.5.0
Asus Dynamic Lighting Plug-In: HAL 6.5.0.0
ROG-STRIX-RTX4070TI
Asus framework service: 4.2.4.4
Asus Core SDK: 2.01.44
Asus framework service VGA Plug-in: 1.0.4
Asus Main SDK Plugin: 138.0.3351.95
Aura Kit: 20221202
HAL: 0.0.7.9
HTML: 3.01.10
Device SDK: 1.00.22

Finally:

I set the Corsair LEDs to "Lighting Link".

And chose some temperature sources:

1. One for the Ryzen package
2. One for a mobo temp
3. Another one for a mobo temp
4. And one for the memory DRAM 4 temp...
^ Not going to check the exact settings, don't wanna jinx it lol :)

Another nice thing is:

1. No BIOS update was necessary, phew, still on the well-working 1.28.
2. No re-installation of / experimentation with Windows 11 needed.
3. Apparently all hardware is still working well, phew! ;) =D

Finally:

4. I was right: most likely some strange software issue with the NVIDIA driver, or a strange BIOS hiccup.

Whatever it may have been it seems fixed for now ! ;) XD

Jippppeee ! =D

Bye for now,
Skybuck Flying.
Skybuck (Senior Member)
Posted: 7 hours 28 minutes ago at 1:34am
HWInfo64 removed and the new version v8.28-5770 installed and working.
(This tool seems to manipulate/interact with the SMBus/I2C, so care must be taken not to accidentally click on some things.)

It can generate a full report of the system... but I see no way of including such a long report in this forum...

So everything is working again.

To be more specific about the lighting settings for iCUE/DRAM:
Lighting Link 1/Quick Lighting Zone: DIMM #1 set to: AMD Ryzen 9 7950X3D Package
Lighting Link 2/Quick Lighting Zone: DIMM #2 set to: ASRock B650E Steel Legend WiFi Temp #1
Lighting Link 3/Quick Lighting Zone: DIMM #3 set to: ASRock B650E Steel Legend WiFi Temp #4
Lighting Link 4/Quick Lighting Zone: DIMM #4 set to: Dominator Platinum RGB DDR5 Temp #4


(At Lighting Link #3 I had Temp #3 from the mobo but decided to switch to Temp #4 because it's slightly hotter. I am not yet sure which parts of the mobo are being measured here, so that's where I am a little bit unsatisfied, but perhaps in time with HWInfo64 I can figure it out further. This is something where mobo makers/sensors/HWInfo64 could do some more work to improve on in the future.)

All temperature colors set to:
Green: 20 degrees Celsius
Yellow: 40 degrees Celsius
Red: 60 degrees Celsius
(It shades them in between.)
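For illustration, here is a minimal sketch (my own, not iCUE's actual code) of how such a green/yellow/red gradient could be blended between those thresholds, assuming simple linear interpolation:

def temp_to_rgb(temp_c: float) -> tuple[int, int, int]:
    """Map a temperature to a colour: green at 20 C, yellow at 40 C, red at 60 C.

    Linear blend between the two nearest thresholds; clamps outside the range.
    This only illustrates the idea, it is not how iCUE implements it internally.
    """
    stops = [(20.0, (0, 255, 0)),    # green
             (40.0, (255, 255, 0)),  # yellow
             (60.0, (255, 0, 0))]    # red
    if temp_c <= stops[0][0]:
        return stops[0][1]
    if temp_c >= stops[-1][0]:
        return stops[-1][1]
    for (t0, c0), (t1, c1) in zip(stops, stops[1:]):
        if t0 <= temp_c <= t1:
            f = (temp_c - t0) / (t1 - t0)
            return tuple(round(a + (b - a) * f) for a, b in zip(c0, c1))

print(temp_to_rgb(24.0))  # ambient 24 C -> (51, 255, 0), green with a hint of yellow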

Current ambient/room temperature is 24.0 degrees Celsius.
DIMM 1: yellow/orangish
DIMM 2: light greenish
DIMM 3: yellow/orangish
DIMM 4: yellowish green

Power supply temperature: 25 degrees Celsius. Power supply fan set to approximately 1000 RPM.

Very silent system at the moment. I do have the outside balcony door open, though, for fresh air, and I kinda like it; might close it soon though.

Inside ASUS Armoury Crate:
(The Dominator set is not present anymore; I guess I delinked it in iCUE or something, which is good. I don't want Armoury Crate controlling it because it doesn't have access to all the sensors...)

ASUS Aura Sync -> Aura Effects -> Smart <- click it, it shows:
By GPU Temperature (select it ;))
Low point: 60 degrees Celsius, high point: 85 degrees Celsius.

So far the GPU has been greenish the entire summer hehe.

This is a cool/nice setup; you can quickly see temperatures at a glance, assuming it works, and it does seem to work ;)

It's an amazing system, it continues to amaze me every day ! ;) XD =D

Later, bye for now,
Skybuck


Skybuck (Senior Member)
Posted: 7 hours 10 minutes ago at 1:52am
Two more things to take note of:

1. The built-in GPU in the Ryzen processor was disabled some months ago; I found the AMD Ryzen graphics drivers buggy for this built-in GPU. However, ASRock support now shows new VGA drivers available for this built-in GPU, and also for other Radeon graphics cards; I am not sure if AMD has done something to make their drivers work better for this built-in GPU. For now I won't test it again because I want butter-smooth PC operation! ;) But I might test it again in the future, and I did download a driver in case I want to install/try it out. To do this I would have to re-enable the built-in GPU in the BIOS and install this new driver:

AMD_VGA(v24.30.66.250610a).zip

Stored under:
G:\Downloads\Drivers\Motherboard\ASRock B650E Steel Legend WiFi\AMD graphics driver (Adrenalin)

There is also another folder where I downloaded these Adrenalin drivers from the AMD site instead of ASRock's site. They contain somewhat older/different (or the same) drivers; I am not sure if ASRock changes anything in them, I think they are more or less the same:

G:\Downloads\Drivers\Processor (Embedded Graphics)

The last version I downloaded and tried some time ago, which turned out to be buggy:
whql-amd-software-adrenalin-edition-24.10.1-win10-win11-oct-rdna.exe
^ downloaded 4 december 2024

I just downloaded a new version, just in case AMD gets bombed in WW3 lol:
whql-amd-software-adrenalin-edition-25.6.1-win10-win11-june5-rdna.exe
^ downloaded today 23 july 2025
^ Have not tried it yet, but might do so in future.

I see there is also a slightly newer version:
Adrenalin 25.6.3 (Optional)
https://drivers.amd.com/drivers/amd-software-adrenalin-edition-25.6.3-win10-win11-june-rdna.exe
^ But it's not WHQL certified, so I am not going to download that one, though sometimes those newer optional ones might work better.
^ Will not download it for now; have not tried it; may skip it if there is a newer one in the future.

2. I also saved a HWInfo64 log and uploaded it to my web drive. It might come in handy in the future if my SuperPC2023 ever dies and I want to know what kind of characteristics/performance it had compared to any new system! ;):

Here is the log file from HWInfo64 for Skybuck's SuperPC 2023:

https://www.skybuck.org/Hardware/SuperPC2023/HWInfo64-SUPERPC2023-23-july-2025.LOG

Bye for now,
Skybuck.
Skybuck (Senior Member)
Posted: 5 hours 6 minutes ago at 3:56am
One more future note so I can let it flow out of my brain lol:

I let Gemini Flash 2.5 analyze the memory situation/settings of my system.

The end conclusion of Gemini Flash 2.5 is more or less the following:

The memory chips are capable of running at 6000 MT/s. The system can do it as well. However, getting it stable might be difficult.

In case I or others with a similar system want to try it in the future:

The AI recommends:

1. Install the latest BIOS to increase the chance of stability.

2. Of course, at least version 1.28 should be installed on this particular system, to prevent the SoC voltage going to the processor from exceeding 1.25 V or 1.30 V. (On YouTube I saw somebody set it to 1.2 V to begin with.)

3. Enable AMD Expo.

The expected performance gains are roughly 20% lower latency and 66% more theoretical bandwidth.

In certain use cases this might be noticeable, for example in virtual machines or maybe when loading AI models.

However, in benchmarks done by others a long time ago, they only showed 1% or 2% performance improvements, probably because of the 3D V-Cache, or they didn't test it right ;)

The risk of damage is low according to the AI because it was all designed for this.

However system instability is still a risk.

For me system stability is very important, so I am perfectly fine with running the memory at 3600 MT/s. That is still a very fast system compared to what I used to run, and for the real-world usage I am getting it is lightning fast for a human, so I have little desire to experiment with this for the time being.

However it does make me a little bit curious.

Another however: if this system were to be damaged, a replacement would not be easy, since newer models seem to be blowing up, and it would be a lot of work to take the system apart or build a new one.

So the benefits do not outweigh the risks.

But it is interesting to note this down for the future in case I do buy a better/new or backup computer and I want to experiment with this system to see what it could really do.

Finally I will throw in some numbers computed by the AI:

Revisiting the Timings (30-30-30-58):

Knowing it's DDR5 running at 3600 MT/s (1796.8 MHz clock), let's calculate the real-world latency in nanoseconds (ns):

With current timings at 3600 MT/sec:

    tCAS-tRCD-tRP-tRAS: 30-30-30-58

    Clock Period: 1 / 1796.8 MHz = 0.5566 nanoseconds per clock cycle.

    tCAS Latency: 30 cycles * 0.5566 ns/cycle = 16.698 ns

    tRCD Latency: 30 cycles * 0.5566 ns/cycle = 16.698 ns

    tRP Latency: 30 cycles * 0.5566 ns/cycle = 16.698 ns

    tRAS Latency: 58 cycles * 0.5566 ns/cycle = 32.28 ns


    Supported Module Timing at 3000.0 MHz (EXPO Profile 0): 40-40-40-77

        This is the key EXPO profile your RAM is designed for. At a true clock speed of 3000 MHz (which is 6000 MT/s), the timings are 40-40-40-77.

        CAS Latency in ns (EXPO): 40 cycles * (1 / 3000 MHz) = 13.33 ns


Comparison of Real-World Latency:

    Current (DDR5-3600 CL30): ~16.7 ns

    EXPO (DDR5-6000 CL40): ~13.3 ns

This confirms that by enabling EXPO, you'd be reducing your primary CAS latency by approximately 20% (16.7 ns to 13.3 ns), in addition to the massive
increase in bandwidth (3600 MT/s to 6000 MT/s).


In summary, by enabling EXPO, you are looking at:

    A substantial increase in memory bandwidth (66% theoretical).

    A noticeable reduction in absolute CAS latency (around 20%).

    Tangible performance gains in CPU-intensive games and applications, especially smoother gameplay due to better 1% lows.

More:

1. Absolute Latency (in nanoseconds):

    Current (3600 MT/s @ 1796.8 MHz clock, 30-30-30):

        CAS Latency: 30 cycles * (1 / 1796.8 MHz) = 16.7 ns

    With EXPO (6000 MT/s @ 3000 MHz clock, likely CL40):

        CAS Latency: 40 cycles * (1 / 3000 MHz) = 13.3 ns

    You're looking at a ~20% reduction in CAS latency in absolute terms. This means the time it takes for the CPU to receive data from the RAM is
    significantly reduced.

2. Bandwidth (Theoretical):

    Current (DDR5-3600, Dual Channel):

        3600 MT/s * 8 bytes/transfer (64-bit bus) = 28.8 GB/s

    With EXPO (DDR5-6000, Dual Channel):

        6000 MT/s * 8 bytes/transfer = 48 GB/s

    This is a massive 66% increase in theoretical memory bandwidth! This means your system can move data to and from the RAM much faster.
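To sanity-check those figures myself, here is a small sketch that reproduces the latency and bandwidth arithmetic above. The data rates, CL values and 64-bit channel width come from the quoted analysis; I use the nominal 1800 MHz clock instead of the reported 1796.8 MHz, so the latency lands a hair below the quoted 16.698 ns:

def cas_latency_ns(data_rate_mts: float, cas_cycles: int) -> float:
    """Absolute CAS latency in ns: cycles divided by the memory clock."""
    clock_mhz = data_rate_mts / 2          # DDR: clock is half the transfer rate
    return cas_cycles / clock_mhz * 1000   # 1/MHz -> microseconds, *1000 -> ns

def bandwidth_gbs(data_rate_mts: float, channels: int = 2, bus_bytes: int = 8) -> float:
    """Theoretical bandwidth in GB/s for 64-bit (8-byte) channels."""
    return data_rate_mts * bus_bytes * channels / 1000

for label, rate, cl in [("DDR5-3600 CL30", 3600, 30), ("DDR5-6000 CL40 (EXPO)", 6000, 40)]:
    print(f"{label}: CAS ~{cas_latency_ns(rate, cl):.1f} ns, "
          f"{bandwidth_gbs(rate, channels=1):.1f} GB/s per channel, "
          f"{bandwidth_gbs(rate):.1f} GB/s dual channel")

This prints ~16.7 ns / 28.8 GB/s per channel for the current setting and ~13.3 ns / 48 GB/s per channel for EXPO, matching the ~20% latency and 66% bandwidth figures.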



Concerning voltage settings:

    1.35V for DRAM (VDD/VDDQ) is completely normal and safe for DDR5-6000.

    The 1.25V / 1.3V limit you're thinking of applies to the CPU SoC Voltage.

        During the initial Ryzen 7000 launch, some motherboard BIOS versions (before patches) were automatically pushing the CPU SoC voltage too high when EXPO was enabled, sometimes going over 1.4V. This excessive SoC voltage was indeed found to potentially cause damage to the CPU's memory controller (leading to "burn outs").

        AMD quickly addressed this with AGESA updates, and motherboard manufacturers followed suit with new BIOS versions. These updates now cap the CPU SoC voltage at a maximum of 1.3V (with 1.25V often being the recommended "sweet spot" for stability and longevity).

        When you enable EXPO, the motherboard should now automatically apply a safe SoC voltage that adheres to these new limits, typically 1.25V or slightly higher, but staying below 1.3V.

Key Confirmations and Insights:

    Corsair CMT64GX5M2B6000Z40: This confirms the exact part number for your RAM kit, which is indeed a 64GB (2x32GB) DDR5-6000 CL40 kit. The "Z40" at the end of the part number specifically indicates the CL40 latency.

    SK Hynix A-Die: This is excellent news! SK Hynix A-die is currently one of the best DRAM ICs (chips) for DDR5 overclocking and stability, known for its ability to reach high frequencies and relatively tight timings.

    Module Size: 32 GBytes, Number of Ranks: 2 (Dual Rank): You have two 32GB dual-rank modules. This confirms your 128GB total memory is achieved with two modules, which is generally easier to stabilize at higher speeds than four modules, although 32GB dual-rank modules still put more strain on the memory controller than 16GB single-rank modules.

    Memory Speed: 3000.0 MHz (DDR5-6000 / PC5-48000): This is the rated effective data rate for the modules via their EXPO profile.

For now I am going to take what the AI wrote with a grain of salt; I do believe it somewhat, but confirmation by others would be nice, to make sure the AI is not hallucinating :)

(There was more text, but I copied & pasted the most essential parts of it! ;))

Laterz !
Bye,
    Skybuck ! ;) =D







Skybuck (Senior Member)
Posted: 4 hours 56 minutes ago at 4:06am
Hmmmm, I just tried HWInfo64; it complained that iCUE was holding the PCI bus (or something) for too long: a PCI Lock.

I exited iCUE from the system tray and this resolved the issue, but now the memory LEDs fall back to their programmed pattern... so this is a bit of a strange situation...
Skybuck (Senior Member)
Posted: 4 hours 54 minutes ago at 4:08am
^ This happened while trying out the "Sensors" part of HWInfo64...

A PCI Lock warning: abort, retry, cancel, etc...

I tried retry a few times, but it kept showing up. It did report which app was responsible, namely iCUE...

So eventually I shut down iCUE, exited it from the system tray, restarted HWInfo64, and then there were no more complaints.

So at least this allows me to inspect temperatures, voltages, settings, etc.

Afterwards I shut down HWInfo64, restarted iCUE, and everything should be fine again.

Though I do now wonder if maybe iCUE could have an effect on system performance; not sure yet...
Skybuck (Senior Member)
Posted: 4 hours 34 minutes ago at 4:28am
More interesting information from AI:

The NVIDIA GeForce RTX 4070 Ti uses a PCI Express 4.0 x16 interface.

Here's a breakdown of the bandwidth for that configuration:

    PCIe 4.0 x16: This means it's a PCI Express Gen 4 interface using all 16 available lanes (x16).

The theoretical bandwidth for PCIe 4.0 x16 is approximately 32 GB/s (Gigabytes per second).

To be precise:

    Each PCIe 4.0 lane can transfer data at 2 GB/s.

    Since the RTX 4070 Ti uses 16 lanes (x16), the total bandwidth is 16 lanes * 2 GB/s/lane = 32 GB/s.

It's worth noting that while this is the maximum theoretical bandwidth, real-world usage rarely saturates the entire bus. However, for a powerful modern GPU like the RTX 4070 Ti, PCIe 4.0 x16 provides ample bandwidth for gaming and other graphics-intensive tasks.

Yes, the 32 GB/s bandwidth for PCIe 4.0 x16 is the unidirectional bandwidth.
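As a cross-check on the "2 GB/s per lane" figure, here is a quick sketch of where that number comes from: PCIe 4.0 signals at 16 GT/s per lane with 128b/130b encoding. These are standard PCIe spec constants, nothing specific to this board:

def pcie_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Unidirectional theoretical bandwidth in GB/s for a PCIe link."""
    # Per-lane raw signalling rate in GT/s and line encoding efficiency per generation.
    rates = {3: (8.0, 128 / 130), 4: (16.0, 128 / 130), 5: (32.0, 128 / 130)}
    gts, encoding = rates[gen]
    return gts * encoding / 8 * lanes   # GT/s -> GB/s (8 bits per byte)

print(f"PCIe 4.0 x16: {pcie_bandwidth_gbs(4, 16):.1f} GB/s")  # ~31.5 GB/s, i.e. roughly 32 GB/s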

For loading a 10GB AI model, the perceived speed is overwhelmingly dictated by your SSD's read speed and the initial CPU processing overhead, not by the difference between DDR5-3600 and DDR5-6000 system RAM.

So I was wondering if enabling 6000 MT/s would benefit AI, perhaps model loading, but that is determined by other bottlenecks.

For AI model inference/prompting/answering etc.:
This is precisely the scenario where enabling EXPO (going from 3600 MT/s to 6000 MT/s) would have a much more noticeable and direct impact on inference speed.

Inference Speed (Tokens/Second): For models that significantly exceed your GPU's VRAM, you would likely see a noticeable increase in tokens per second (t/s) during inference. The GPU will spend less time waiting for data from system RAM. This is where the 28.8 GB/s vs 48 GB/s difference becomes a genuine bottleneck relief.

So it could be interesting to test "tokens/sec" at 3600 MT/s versus 6000 MT/s; see the rough sketch below.
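If I ever get around to that test, a minimal way to compare would be something like this; generate() is a hypothetical stand-in for whatever local inference call is used (llama.cpp or similar), not a real API:

import time

def tokens_per_second(generate, prompt: str, runs: int = 3) -> float:
    """Average generation throughput over a few runs.

    `generate` is a hypothetical callable that returns the number of tokens it
    produced for the prompt; swap in the real local-inference call here.
    """
    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        n_tokens = generate(prompt)
        rates.append(n_tokens / (time.perf_counter() - start))
    return sum(rates) / len(rates)

# Run once with memory at 3600 MT/s, once with EXPO (6000 MT/s) enabled,
# using the same prompt and model, and compare the two averages.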

However, I suspect the difference will still not make local AI models worth the trouble... and using data-center AI models like Gemini is the way to go for the time being... at least for big models, big context windows, etc.

I am also a little bit interested in small context windows (4K, 16K or 32K, instead of Gemini's 1M) versus local AI models (4K to xK), but Gemini is just too good currently... still, some interesting tests could be done later on, for example on small samples of code conversion.

Long term, if it indeed gives a significant speed-up, it might be worth the trouble, but more likely only in the future, if I really need long-term local processing of code! For now, even using a few minutes or 2 hours of Gemini tokens per 2 days would probably outrun my system in progress and probably quality too, so it is not yet worth running AI on the local system... it also costs some energy... and sucks in more dust... more wear and tear, which can be avoided by using Gemini...

Another idea could be to process tiny, fast tasks locally instead of in the cloud/Gemini, but my software is not advanced enough yet to detect and do that...

Bye for now,
Skybuck.