Most ATX-size X370 and X470 motherboards have three PCIe x16 sockets, the top two being wired via a switch that allows either single PCIe v 3.0 x16 or dual PCIe v 3.0 x8 operation, while the bottom one is wired with four PCIe v 2.0 lanes from the chipset. There are usually two or three PCIe v 2.0 x1 slots (and often a second M.2 socket) that take lanes from the bottom x16 slot if they are used. With that in mind, here's what I'd like to do: put a decent graphics card in the top slot and a SAS HBA in the second long slot so they would get x8 each, and put a cheaper, single slot graphics card in the bottom slot. Now, is there a way of making the primary graphics card be the one in the bottom slot, so that it is used for the unRAID console, leaving the more powerful card free to be passed through to a VM? I'm thinking there would need to be a BIOS option to select which slot has the primary GPU.
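Whatever the BIOS ends up doing, you can check the result from the running system: the Linux kernel exposes a `boot_vga` attribute per PCI device, set to `1` on the card the firmware selected as primary. A minimal sketch (the `sysroot` parameter is only there so the logic can be tried against a mock tree; on a live unRAID box the default `/sys/bus/pci/devices` is the real path):

```python
from pathlib import Path
from typing import Optional

def find_boot_vga(sysroot: str = "/sys/bus/pci/devices") -> Optional[str]:
    """Return the PCI address of the firmware-selected primary VGA device.

    Scans each device directory for a boot_vga attribute; the kernel
    writes "1" into it for the primary GPU and "0" for the others.
    """
    for flag in Path(sysroot).glob("*/boot_vga"):
        if flag.read_text().strip() == "1":
            return flag.parent.name  # PCI address, e.g. "0000:28:00.0"
    return None
```

If the address returned matches the cheap card in the bottom slot, the BIOS setting took effect and the top-slot card is free to bind to vfio-pci for the VM.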
You'll see the arrangement if you switch to the Dashboard page of the webGUI. The threads are actually numbered from 0 to 15 and they are paired even-odd. So threads 0 and 1 are physical core 0, threads 2 and 3 are physical core 1, and so on, all the way to threads 14 and 15 being physical core 7. This is different from how an Intel processor is arranged, where in an i7, for example, threads 0 and 4 represent physical core 0, threads 1 and 5 are physical core 1, threads 2 and 6 are physical core 2 and threads 3 and 7 are physical core 3. I'm not sure why they are different, but the AMD arrangement makes more sense to my simple mind. The Intel numbering made sense when you had system code that didn't understand hyperthreading and you didn't use one processor with multiple cores; instead, the servers (and often the workstations too) had multiple CPU slots, where the motherboard often wasn't fully symmetric but required some hardware to be handled by a specific CPU. It would have been very bad if code intended for the second CPU ended up running on the second thread of the first CPU.
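The two numbering schemes boil down to simple arithmetic. A quick sketch for a hypothetical 8-core/16-thread Ryzen and a 4-core/8-thread i7, just encoding the pattern described above rather than reading real hardware (on a live Linux box, `/sys/devices/system/cpu/cpu*/topology/thread_siblings_list` shows the actual pairing):

```python
def amd_core_of(thread: int) -> int:
    # Ryzen-style: hyperthread siblings are numbered consecutively,
    # so threads 2n and 2n+1 both belong to physical core n.
    return thread // 2

def intel_core_of(thread: int, cores: int) -> int:
    # i7-style: the second sibling of core n is numbered n + cores,
    # so on a 4-core part threads 0 and 4 both belong to core 0.
    return thread % cores

print([amd_core_of(t) for t in range(16)])
# [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7]
print([intel_core_of(t, 4) for t in range(8)])
# [0, 1, 2, 3, 0, 1, 2, 3]
```

This matters for VM pinning: on Ryzen you want to pin sibling pairs like 4,5 together, whereas the equivalent Intel pair would be something like 2,6.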
So I ordered a 2700x, Gigabyte x470 Ultra Gaming and a whole bunch of other stuff. I still have a 1080 Ti, will this work for pass through? There's no reason to expect any problem with the 1080 Ti. I have been running mine now for a couple of months with much the same hardware (exact spec in my signature) and it has been both pain- and lockup-free. I'm using a GTX 970 without hassle, but there are plenty of reports of success with your model too. If unsure of the process then Gridrunner's guide is a handy reference. I wonder what the power draw will be when not running the Windows VM and basically not using the GPU. The power draw will just be a trickle until it is in use. If you are curious then you can always invest a few in a wall socket power-meter. It's always fun to actually see hard data as opposed to supposition. In my own case I'm idling at 95W with just a couple of minor containers/VMs running 24x7, and that spikes to 270W when gaming on a Windows VM.