So many things.
1. Heat is no problem. I have multiple 120mm and 140mm fans for intake and exhaust airstreams. Most fans are controlled via the board, some via the case's fan control.
2. The case's fan control features 2 channels of up to 15W each, each with 2x120mm fans connected. No problem here.
3. The board's Molex connector WAS plugged in... The manual and ASRock support say this plug might add additional stability if MORE THAN 3 cards (i.e. 4!) are used. I removed it because PCIe #5 is not in use, but I still have the SAME issue.
4. parsec... your explanation of the rail power distribution doesn't help me. As I said, I had each of the four 30A rails connected to one socket of each 780 Ti (power rating max. 350W). I unplugged all peripherals (thanks to the M.2 Ultra) to avoid putting extra load on the rails connected to the primary SLI card, and the secondary card's second port shares the rail that also feeds the additional 8-pin MOBO power socket. The single-rail ratings on the label in fact only indicate when the OVER-CURRENT PROTECTION kicks in per rail. The combined rating of the 30A rails is ~112A, which is more than sufficient for about 800W of maximum demand. One graphics card may draw up to 350W on 12V (~30A per card), but each card has two connectors, which is why IMHO it is not wise to split one rail across both connectors, and so I intentionally used a single rail per connector (see the worked numbers after this list).
Using one rail for both connectors of each card is what the Enermax support suggested as a quick thing to try before RMAing.
To make it short: I have a watt meter, and NO, it is impossible that more than 112A on 12V is being drawn by the MOBO and GPUs, because the PC overall stays below 1kW.
5. The SLI bridge has of course been tested against a different bridge... no change.
6. I repeat: each of the GPUs does fine when I unplug the other, even at pretty much max OC.
7. "Have you ever watched your CPU-temp"... LOL. I installed and configured an 140mm-framed-window fan to suck of heat from the GPU-area (not from the fans, haha, only them coolers) when the CPU is beeing used. I ran stable FPU-burn-in with 95°C, linpack with 72°C and gaming usually is only below 60°C.... so... wait...stop ask me things like if you think Im a retard.
BTW: the CPU water cooler is a Corsair Hydro H110i. :P Oh dear.
I read the motherboard's manual. I read the PSU's label. The Molex at the bottom is of no use in my case, because it feeds power to a PCIe slot that I have nothing plugged into. I know GPUs and CPUs produce heat. Can you start discussing the real problem here with an IT systems electrician, or do you want to keep fooling around? I know exactly that my CPU temp is FINE. WTF.
Don't you think somebody who spends thousands of bucks to build his own PC from individual parts knows what he is doing?
What I want to know from ASRock is the following:
This board features a PEX, plus an additional PCIe switch. My CPU, an i7-4790K, has exactly 16 PCIe Gen3 lanes. The PEX allows multi-casting those 16 lanes to both cards. So far so good. I also use a Samsung XP941 in the M.2 Ultra slot, connected via PCIe Gen3 x4.
How come GPU-Z tells me that both cards can still run at PCIe Gen3 x16 while the XP941 is accessible at PCIe Gen3 x4 at the same time???
What happens when the game is actively accessing its data on the M.2 Ultra and at that moment the game/driver/GPU subsystem exceeds x8 (possibly x12) and really demands the full PCIe Gen3 x16?
Could this magic cause failures and be the reason for the power loss? (See the rough bandwidth numbers below.)
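My own rough take on the bandwidth side, as a sketch only: I'm assuming the PEX's single upstream link to the CPU is x16 and that the XP941 sits on its own Gen3 x4 link as described above; the per-lane throughput is just the nominal Gen3 figure.

```python
# Back-of-the-envelope look at the PEX oversubscription question. Assumption
# (not confirmed for this exact board): the switch gives each GPU a full x16
# downstream link (which is what GPU-Z reports) while its single upstream link
# to the CPU is x16, so the downstream sum can momentarily exceed the upstream.

GEN3_GBPS_PER_LANE = 0.985  # ~8 GT/s with 128b/130b encoding, per direction

def link_bw_gbps(lanes: int) -> float:
    return lanes * GEN3_GBPS_PER_LANE

upstream = link_bw_gbps(16)       # CPU <-> PEX
gpus     = 2 * link_bw_gbps(16)   # PEX <-> both GPUs, x16 each
xp941    = link_bw_gbps(4)        # XP941 on its own Gen3 x4 link

print(f"upstream to CPU : {upstream:5.1f} GB/s")
print(f"both GPUs       : {gpus:5.1f} GB/s (oversubscribed; the switch arbitrates)")
print(f"XP941 M.2 Ultra : {xp941:5.1f} GB/s")

# When demand exceeds the upstream link, PCIe flow control simply throttles
# the traffic; that costs throughput and latency, not power.
```

In other words, if the cards really demanded more than the upstream x16 can deliver, the expected symptom would be lower FPS or stutter, not a shutdown, so by itself this seems an unlikely explanation for the power loss.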