New Ryzen Build

I'm building a new PC with dual RX 590s for PCI passthrough to a Windows VM.

I don't game much anymore, but the games I do play only run on Windows, and I like the idea of keeping things like Windows and game servers confined to a virtual space.

Do any gentoomen have experience passing a GPU through to a VM? I hear conflicting reports of it being both easy and hard.

Would it be better to just dual-boot?

Also, I forgot to mention that I'll be using *buntu as the host, if it matters.

>dual RX 590
>gentoo
notbait.jpeg

I said I was using Ubuntu or a Ubuntu variant

You have to be shitposting.
On second thought it's very likely you're not...

It's pretty easy once you realize how it works.

blacklist the Linux radeon/amdgpu drivers
make the GPU load a vfio-pci or pci-stub placeholder driver
attach the GPU device to the VM you created

QEMU I find is the easiest. Also make sure to blacklist and attach the card's HDMI audio device as well; it's a separate PCI function. A sketch of the whole thing is below.
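
A minimal sketch of the blacklist/bind steps, assuming the card's GPU and HDMI audio functions report vendor:device IDs 1002:67df and 1002:aaf0 (illustrative only; check yours with lspci -nn):

# /etc/modprobe.d/vfio.conf
# bind both PCI functions to vfio-pci before the host driver can grab them
options vfio-pci ids=1002:67df,1002:aaf0
softdep amdgpu pre: vfio-pci
softdep radeon pre: vfio-pci

# /etc/modprobe.d/blacklist.conf
# only do this if the host isn't also running an AMD GPU
blacklist amdgpu
blacklist radeon

Then rebuild the initramfs (sudo update-initramfs -u on *buntu) and reboot; lspci -nnk should show "Kernel driver in use: vfio-pci" for both functions.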

AMD is pretty easy; Nvidia is harder because their driver errors out on purpose when it detects a VM. Fuckers.

Basically, if you already know how to set up a VM, it is just a few more steps.

How would this be a shitpost?
One RX 590 for Linux, one RX 590 for Windows

I've seen others do it with success; I was wondering if anyone here had done it.

I have two RX 590s. Is this going to make it harder to single one of them out and blacklist it?

Also, is VMware a good solution? I haven't used QEMU before.

I did it for mining once. Running two GPUs in a VM is no harder than one, but if you want one on the host and one in the VM, it's way easier if they're two different GPU architectures. You can manually tell a specific device to load pci-stub, but blacklisting the driver is safer in case the kernel thinks it knows better and loads it first after an update.

Never really used VMware, only VirtualBox and QEMU. QEMU just works for me; I have the passed-through GPU output to my second monitor's DisplayPort, and sound and video pass through without issue.

Also, QEMU's GUI (virt-manager) will let you attach hardware with a couple of clicks. It's really easy once you've done it a few times.
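
If you want to see what that amounts to under the hood, a bare QEMU invocation looks roughly like this (the disk image name and PCI addresses are made up; substitute your own from lspci):

qemu-system-x86_64 \
    -enable-kvm -machine q35 -cpu host -smp 4 -m 8G \
    -device vfio-pci,host=0000:0a:00.0 \
    -device vfio-pci,host=0000:0a:00.1 \
    -drive file=win10.qcow2,if=virtio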

One last thing: make sure the motherboard lets you turn on the AMD-V / IOMMU features. They're supported by the CPU/chipset, but Gigabyte still has to expose them as options in the BIOS (often labeled SVM Mode and IOMMU).
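
Once it's enabled, you can verify from Linux that the IOMMU actually came up (many guides also add amd_iommu=on iommu=pt to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, though recent kernels enable AMD-Vi on their own when the BIOS does):

# look for AMD-Vi / IOMMU initialization messages
dmesg | grep -iE 'amd-vi|iommu'
# if this directory is populated, the IOMMU is live
ls /sys/kernel/iommu_groups/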

Good to know. I'll double-check that when I build the PC tomorrow.

I DO want one GPU for the host and one for the guest. Won't blacklisting the driver basically kill both cards?

600W for two 590s?
damn son

The blacklisting isn't really the problem; the pci-stub binding is.

You manually tell it to load the stub/vfio driver for the device, but if two devices share the same vendor and device ID (which they will for RX 4xx/5xx cards), the stub will bind to both unless you do extra work that most tutorials don't mention (see the sketch below). I've never done that myself.

The reason to blacklist is in case the kernel loads the AMD driver BEFORE the stub driver at boot (which can happen after a kernel change / OS update).

I have Intel HD graphics for my Linux host and an AMD card for my VM, so I can blacklist all the AMD drivers.
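
The extra work for two identical cards, for what it's worth, is usually to pick one card by its PCI bus address instead of its ID, using the driver_override knob in sysfs from an early-boot script (the addresses here are hypothetical; take yours from lspci -D):

#!/bin/sh
# claim ONE of the two identical GPUs (plus its HDMI audio function) for vfio-pci
for DEV in 0000:0a:00.0 0000:0a:00.1; do
    echo vfio-pci > /sys/bus/pci/devices/$DEV/driver_override
done
modprobe -i vfio-pci

Run it from an initramfs hook or a systemd unit that fires before the graphics driver loads; the other RX 590 then binds to amdgpu normally.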

>nvidia is harder

It's not; this hasn't been an issue for a long time. AMD's GPU audio function, on the other hand, can't handle the VM being shut down and started back up, because it can't do a Function Level Reset (FLR). So you'll have to completely shut down and reboot the host to get GPU audio working in the VM again if you turn the VM off or reboot it for whatever reason.

lol, wat, I have never had that issue with RX cards.

Sometimes the audio won't work if a monitor is shut off and the VM goes to sleep, but you just disable and re-enable the device in Device Manager and it works again. At least for me.

Also, Nvidia is still harder, since you have to hide the VM's hypervisor from the driver (the libvirt snippet below, for example).
Not harder in the sense of hard, but it adds more steps/complexity for someone who hasn't done it before.

AMD gpu just werks.
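
For reference, the hiding amounts to a couple of lines in the libvirt domain XML (the vendor_id value is arbitrary; anything non-default works):

<features>
  <hyperv>
    <!-- report a fake hypervisor vendor string to the guest -->
    <vendor_id state='on' value='whatever123'/>
  </hyperv>
  <kvm>
    <!-- stop KVM from advertising itself via CPUID -->
    <hidden state='on'/>
  </kvm>
</features>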

I guess I will just do a GTX 1050 Ti for the host
Most games will be in the Windows VM anyway.

Get a board with proper IOMMU grouping, or else you'll have multiple PCIe devices in the same group, like MSI boards do.

That means you're screwed, since every device in an IOMMU group has to be passed through together. You can check the grouping with the script below.
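
Something like this (the usual shell loop, expanded for readability):

#!/bin/sh
# print every PCI device with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d%/devices/*}; g=${g##*/}
    printf 'Group %s: ' "$g"
    lspci -nns "${d##*/}"
done

Ideally the GPU you want to pass through (and its audio function) sit alone in their group.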

Is Gigabyte good for this?

>dual RX 590
Literally why? Just get one for passing to Windows and a cheap one like an RX 550 for the host. Also, wait for Zen 2 and Navi, you dumbfuck. And I really hope you checked whether that board has proper IOMMU groups; if it doesn't, it's going to be a painful experience.

Search for the specific model; when I built my PC, B350 mobos were useless for this.

what are the 590s for though?

>you have to hide

No, you don’t. This hasn’t been a problem for some time now.

So let me get this straight. You now only need two GPUs and VMware to do PCI passthrough? There's no crazy kernel shit to fight with?

No, you use KVM. You just need the VFIO driver and a CPU that supports an IOMMU (most AMD chips do; on Intel look for VT-d, which some older K-series chips lacked), and that's literally it. You shouldn't have a problem using it; it's not complicated.
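
Quick sanity checks if you want them (vfio-pci has shipped with mainline kernels for years, so there's usually nothing to install):

# virtualization extensions: AMD-V shows up as 'svm', Intel VT-x as 'vmx'
grep -Ec '(svm|vmx)' /proc/cpuinfo
# vfio-pci comes with the stock kernel
modinfo vfio-pci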

>You just need the VFIO driver
What name does this go by, assuming it would be in my distro's repos?

Never mind. I found it in the Arch Wiki. Thanks for your assistance, anyways.

If you use ESXi 6.7 as a type 1 hypervisor, you can pass through both GPUs and get excellent performance, and it's managed via a web interface after the initial setup.