PCI Slot Order - WRX80 Creator 2.0
SourceChild (Newbie) · Joined: 18 Jul 2023 · Location: US · Status: Offline · Points: 50
Posted: 18 Jul 2023 at 3:04am
Based on the manual, the best configuration for two GPUs is slots 2 and 5. Logistically, these are not my preferred slots. What results, issues, or concerns come with using other PCIe slots?

I don't currently see any notes on the PCIe lane assignments. I'd guess that most, if not all, slots are wired directly to the CPU, but since the first NVMe drive, LAN, TB, and chipset likely consume more than 16 of the remaining lanes, I know that can't be the case. Does anyone have notes on the PCIe lane assignments? Again, why slots 2 and 5 for two GPUs? Or why only slot 2 for a single GPU? I get the hint that slot 2 has the most direct traces to the CPU, but I don't know how to confirm that. And what about slot 1? What is its story?
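One way to check what a slot is actually delivering, at least on Linux, is to read the `LnkCap`/`LnkSta` lines from `lspci -vv`: `LnkCap` is the maximum link the device supports, `LnkSta` is what it negotiated. A minimal parsing sketch (the sample text mimics standard pciutils output; the device address and helper name are illustrative, not from the board):

```python
import re

# Sample "lspci -vv" excerpt for one device (format follows pciutils output;
# the bus address is hypothetical).
SAMPLE = """\
41:00.0 VGA compatible controller: NVIDIA Corporation AD102 [GeForce RTX 4090]
                LnkCap: Port #0, Speed 16GT/s, Width x16
                LnkSta: Speed 16GT/s, Width x16
"""

def link_widths(lspci_text):
    """Return (capable_width, negotiated_width) from LnkCap/LnkSta lines."""
    cap = re.search(r"LnkCap:.*Width x(\d+)", lspci_text)
    sta = re.search(r"LnkSta:.*Width x(\d+)", lspci_text)
    return int(cap.group(1)), int(sta.group(1))

max_w, cur_w = link_widths(SAMPLE)
print(f"capable x{max_w}, running x{cur_w}")  # → capable x16, running x16
```

If the negotiated width comes back lower than the capable width (e.g. x8 on an x16 card), the card is sitting in one of the electrically narrower slots or sharing lanes.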
eccential (Senior Member) · Joined: 10 Oct 2022 · Location: Nevada · Status: Offline · Points: 6530
The only thing I can see on the spec page is that two of the PCIe slots are electrically x8: slots 1, 2, 3, 5, and 7 are x16, while slots 4 and 6 are x8. So unless your GPU only uses 8 lanes (e.g., the RX 66xx series or RTX 3050), you'd want to avoid slots 4 and 6. I see no indication of any difference among the other slots.
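The x8 slots also make the lane math work out. Threadripper PRO on WRX80 exposes 128 PCIe 4.0 lanes from the CPU; a quick budget using the slot widths above (the non-slot consumers below are assumptions for illustration, not from the spec page) shows why not all seven slots can be x16:

```python
CPU_LANES = 128  # Threadripper PRO (WRX80 platform) exposes 128 PCIe 4.0 lanes

# Electrical slot widths as listed on the spec page.
slots = {1: 16, 2: 16, 3: 16, 4: 8, 5: 16, 6: 8, 7: 16}
slot_total = sum(slots.values())  # 96 lanes consumed by the slots

# Assumed non-slot consumers, for illustration only.
other = {
    "onboard M.2 NVMe (x4 each, 2 assumed)": 8,
    "chipset downlink (x8 assumed)": 8,
    "10GbE + Thunderbolt (assumed)": 8,
}
other_total = sum(other.values())

print(f"slots: {slot_total}, other: {other_total}, "
      f"total: {slot_total + other_total} of {CPU_LANES}")
```

With seven x16 slots the slots alone would need 112 lanes, leaving too little for NVMe, networking, and the chipset link; dropping two slots to x8 keeps the budget inside 128.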
SourceChild (Newbie) · Joined: 18 Jul 2023 · Location: US · Status: Offline · Points: 50
Wow, thanks for the quick response, eccential! Good catch on slots 4 and 6. Those slots aren't specifically relevant to me (nor accessible), so I lost track of the x8 connection. This actually helps explain where the available lanes come from for the other onboard features. I'm glad you caught that.

TL;DR: I've sort of painted myself into a corner regarding testing. The main config is two 4090s, already converted to water blocks, so the logistics of testing different slots isn't easy. In a perfect world, the goal is to add three more cards.

If I only intended to use those GPUs, there'd be no issue: I would follow the manual's two-GPU recommendation of slots 2 and 5. However, even though the blocks are only two slots wide, they are irregular in shape, so they severely limit access to the neighboring slots. The additional cards are each single-slot wide, so they should work fine on their own; it's the width of the water blocks on the 4090s that makes fitting everything difficult.
eccential (Senior Member) · Joined: 10 Oct 2022 · Location: Nevada · Status: Offline · Points: 6530
That is one monster computer from my perspective.

Maybe look into skinny water blocks for the 4090s. A quick search turned up: https://www.tomshardware.com/news/nvidia-rtx-4090-single-slot-ultra-compact-liquid-cooled-card
SourceChild (Newbie) · Joined: 18 Jul 2023 · Location: US · Status: Offline · Points: 50
Again, thank you for your thoughtful responses. We're on the same page, but I've done my homework on this already and even had a system up and running.

TL;DR: To share a bit more, this build was already running: a 3975WX, 256GB RAM, and dual 4090s in slots 2 and 5. Due to issues with a DIMM slot and the TB PCIe slot, the board went back for RMA; I've just received the replacement and started this thread before rebuilding. Aside from the TB DisplayPort not working and the memory issue (I simply removed the DIMM from the bad slot and ran 4 DIMMs instead of 8), the system ran well and both GPUs ran great.

Given the configuration of the cards, slot 7 was the only one left open (with clearance) in the initial build. A capture card there ran fine without issue. An NVMe riser card in slot 7 also worked, but I did see issues I can only attribute to trace length between the PCIe slot and the CPU. I mention trace length because I have seen this before on commercial workstation builds using both WRX80 and TRX40: the WRX80 chipset and Threadripper PRO are still Gen4 parts, while many NVMe drives and riser cards are now Gen5 capable. (I'm summarizing briefly.)

So, to the continuing question, and I invite any insight: what experiences have others had (and confirmed) on the WRX80 Creator 2.0 when using PCIe slots other than those recommended for different GPU configurations? I will note that with the two water blocks I'm using, slots 1, 4, 6, and 7 are technically accessible, just not comfortably usable depending on the surface-level components on the respective cards.
Since I'm not using NVLink (these are 4090s, not A6000s or RTX 6000 Ada), and knowing the GPUs' demand on PCIe lanes, I could run the 4090s (even comfortably with these water blocks) in slots 5 and 7, leaving slots 1, 2, and 3 fully available for the three other cards I mentioned: DPU, NVMe riser, and capture card. I'm open to any thoughts, theories, or feedback the community has.
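For what it's worth, the proposed layout can be sanity-checked against the electrical slot widths from the spec page (the card names are just the ones mentioned above; this only flags a GPU landing on an x8 slot, not physical clearance):

```python
# Electrical slot widths per the spec page.
slot_width = {1: 16, 2: 16, 3: 16, 4: 8, 5: 16, 6: 8, 7: 16}

# Proposed layout: GPUs in slots 5 and 7, add-in cards in slots 1-3.
layout = {5: "RTX 4090", 7: "RTX 4090",
          1: "DPU", 2: "NVMe riser", 3: "capture card"}

for slot, card in sorted(layout.items()):
    width = slot_width[slot]
    note = "GPU on x8 slot!" if card == "RTX 4090" and width < 16 else "OK"
    print(f"slot {slot}: {card} @ x{width} ({note})")
```

Under those widths, both 4090s keep full x16 links in slots 5 and 7, and only the x8 slots 4 and 6 go unused.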