Assuming I want both windows (because much gaymes) and loonix (because windows is a shitshow for anything other than playing games), should I get two physically separate PCs or stick with the gpu-passthrough meme? I've been using the latter for almost a year now, and oh boy, it has a lot of problems. It crashes a lot (like, at least once a week), sometimes graphics drivers won't load (nvidia code 43, even though vendor is spoofed and it works after next reboot), etc. Is it due to my old hardware (4th gen i7, cheap mobo and only 16 gigs of ram) or does it always work like that? Since new CPUs are around the corner, I'm considering two options: getting a rather low-end PC as my main PC and leaving the one I currently have as my gaymen station, or getting a really good MB/CPU/RAM combo and hoping that gpu-passthrough will work way better than it does now. Thoughts, experiences?

Attached: 1533392248533.png (529x523, 133K)

Get two separate hard drives.

I do have two separate hard drives.

Considering you were dumb enough to buy an nVidia card, just dual boot or have 2 PCs. nVidia tries to brick the system on purpose if they detect you're using passthrough on non-enterprise GPUs.
Or just buy an RX570/580 or one of the upcoming AMD GPUs.

Then install GNU on one and the other OS on the other. Pretty simple.

Yeah, I don't think I'd buy nvidia again, especially for a VM. I have a 1070 for the vm and a vega 56 for the host.
>just dual boot or have 2 PCs
The point is, when it works - it works quite well. I'm getting ~1060-level performance on an old 4790k CPU. I feel like grabbing a 3900X and 64 GiB of fast memory could improve things significantly. Also, a good motherboard. I kinda hope that someone with a beefier gpu-passthrough setup will come and share his thoughts.

That's exactly what I did, but the "other OS" drive is passed to the VM as well as my GPU. Being able to run both simultaneously is a godsend. I'm asking whether I should keep running the other OS in a VM (because it's far (but not very far) from perfect, at least with my current setup) or get a separate physical machine for it.

Honestly, I have three issues with my current setup:
- It sometimes crashes, or doesn't boot properly (a reboot or two always fixes it). It used to be much better on windows 10 LTSC 2016; since I reinstalled with LTSC 2019 it shits itself 5 times as often.
- I run out of memory (8 GB for the VM, remaining 8 for the host) quite often, same with the CPU - I have 3 cores / 6 threads passed, and every time I F5 a js-heavy website on the host OS, I get massive fps drops in the VM. I think this one would be 100% resolved with a new PC.
- The PulseAudio server is dogshit. No dodging this one either way, be it 2 separate machines or not. I'm thinking of getting another DAC (for the VM) and a mixer to have it all at the hardware level.

Attached: 1533558571988.png (580x417, 122K)

are amd shills this retarded?

He's actually right.
wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#.22Error_43:_Driver_failed_to_load.22_on_Nvidia_GPUs_passed_to_Windows_VMs
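
For reference, the usual workaround from that wiki page is spoofing the Hyper-V vendor ID and hiding the KVM signature in the libvirt domain XML. A minimal fragment (the `value` string is arbitrary, up to 12 characters):

```xml
<features>
  <hyperv>
    <!-- replace the default Hyper-V vendor ID the nvidia driver looks for -->
    <vendor_id state='on' value='whatever'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM CPUID signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```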

>gpu-passtrough
>nvidia
you are doing it wrong.

Yeah. Does that mean AMD cards work much better? No code 43 bullshit sure is an improvement, but are there any other benefits?

I thought AMD cards had the reset bug. Also GPU passthrough should be near native performance. Since you're on Intel do you have NPT enabled?
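
Side note: NPT is the AMD name; Intel's equivalent is EPT. A quick way to check on the host (assuming a Linux host with the kvm_intel module loaded):

```shell
# Look for the hardware-assisted paging flag among the CPU flags
# (ept on Intel, npt on AMD); prints nothing if the CPU lacks it
grep -o -w -E 'ept|npt' /proc/cpuinfo | sort -u
# Ask kvm_intel whether EPT is actually enabled (prints Y or N);
# this file doesn't exist on AMD hosts, where the module is kvm_amd
cat /sys/module/kvm_intel/parameters/ept 2>/dev/null || true
```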

I've spent a week trying to get passthrough working on Ryzen and right now there's a bios bug that prevents the GPU from turning on in the VM. It is just one big headache. Once AMD gets around to fixing the bug, I'll have a bunch of performance tweaks to worry about. Honestly I'm thinking about going back to my ITX board for Windows only and leaving my Linux install on a separate build. I already wasted a week of my life trying to get this shit to work and it's just not worth it.

>do you have NPT enabled?
Honestly, no idea. I have the cheapest z97 mobo possible (MSI z97 mate), its docs don't even mention things like vt-d. No options in the bios either. It's a miracle that it works. That's why I think some of my issues may be related to my current build.
I have to:
- boot on intel hd
- early modeset amdgpu
- load virtio dummy drivers for intel hd and nvidia
Otherwise I have no video on either linux or windows. Absolute shitshow, wasted like 2 weeks getting it working.
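
For anyone attempting something similar: the "dummy driver" part of a setup like this is usually a vfio-pci bind via modprobe config, plus early KMS for the host GPU (on Arch, adding amdgpu to MODULES in mkinitcpio.conf). A sketch, with example PCI IDs for a GTX 1070 and its HDMI audio function - check yours with `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf (config fragment, not a script)
# Claim the guest GPU and its audio function before the nvidia driver can
options vfio-pci ids=10de:1b81,10de:10f0
# Make sure vfio-pci is loaded before nvidia
softdep nvidia pre: vfio-pci
```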

>bug that prevents the GPU from turning on in the VM
I had something similar, that's why I boot on intel HD. If the nvidia card gets anything in its frame buffer (the bios logo, for example), it won't start in the VM. And I can't have my vega in the first pcie slot because it's too big and the nvidia card wouldn't fit in the other pcie slot.

> I already wasted a week of my life trying to get this shit to work and it's just not worth it.
I would say it's absolutely worth it - that is, if I were 100% certain that all the issues are due to my hardware configuration.

>Honestly I'm thinking about going back to my ITX board for Windows only and leaving my Linux install on a separate build.
I'm often compiling shit from source, upscaling anime with meme shaders in mpv, etc., so I need a somewhat beefy box for linux as well. It's either a 3900x (waiting 3 more months for the 16-core counterpart is too much, I've been starving to change this setup for too long now) and 64 gigs of ram, or a small silent box just for linux (an APU maybe? too bad the new APUs will be zen+ and not zen2) and leaving my PC as is for much gaymes.

>conservatism
End your life

>I've been using the latter for almost a year now, and oh boy, it has a lot of problems. It crashes a lot (like, at least once a week), sometimes graphics drivers won't load (nvidia code 43, even though vendor is spoofed and it works after next reboot), etc.

That's interesting, I have had literally zero problems with my GPU passthrough setup. I have a 7600k and a 1070 Ti for reference. I understand the mobo matters a whole lot.

I actually also have a Z97 board and tried doing passthrough on it after a week of failure on my Ryzen board. Guess what... it worked fine right away. I had it up and running within 30 minutes of a clean Linux install. I put the usual stuff in the XML file to hide the VM being a guest and it was 100% fine. I bought a fancy X470 board specifically for passthrough so I'm a little miffed about this bug. When AMD fixes it I might recommend it, especially since a 3900X would be fantastic for this setup, but at the moment AMD doesn't have its shit together

works for me with some shitty MSI mobo, a ryzen 1600 CPU and a radeon 570. YMMV.

How's the performance? Also, is there any way to prevent the host from using cores passed to the VM? That would be awesome.

What should I type into a search engine to find more about this bug? How come you got it working?

Are lutris and wine really unusable for that? For now, I just dual boot my os, but restarting is too annoying for me.
I'm planning to use gnu and a vm or wine for games.

>AMD cards had the reset bug
>AMD doesn't have its shit togethet
buy a better mobo, and research your shit before doing so, retard

btw, I use Qubes in my desktop/gaming system.

I use lutris, and it works.

>How's the performance?
Literally 1 frame behind bare metal if benchmarks are to be trusted.

>Also, is there any way to prevent host from using cores passed to the VM? That would be awesome.
I think you want to do CPU pinning.

wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#CPU_pinning
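
On a 4790k (4 cores / 8 threads; sibling pairs are usually 0/4, 1/5, 2/6, 3/7 - check with `lscpu -e`), pinning 6 vCPUs to three cores plus their HT siblings looks roughly like this in the domain XML. Pinning alone doesn't stop the host scheduling its own work on those CPUs, though; for that you'd also want isolcpus= on the kernel command line or a systemd AllowedCPUs= slice:

```xml
<vcpu placement='static'>6</vcpu>
<cputune>
  <!-- each guest core/thread pair maps onto a host core and its HT sibling -->
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='6'/>
  <vcpupin vcpu='4' cpuset='3'/>
  <vcpupin vcpu='5' cpuset='7'/>
</cputune>
```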

>buy a better mobo
What AM4 board are you using, and is the bios current?

OP here (name, duh). I did use proton for a very short while (it was in beta back then, no idea if it's still the case). Some games worked really well, some... just worked. I dislike the idea of running proprietary software outside of a VM though. Feel free to call me autistic.

>Literally 1 frame behind bare metal if benchmarks are to be trusted
now that sounds awesome

>I think you want to do CPU pinning.
I did that, but it seems that the article got updated since then. Cool, I'll give it a second go.


What is the best X570 mobo choice? Preferably great compatibility with gpu-passthrough and no non-free blobs required. The ASUS X570-WS-PRO looks pretty cool, but some design choices are really weird (such as USB-C or video outputs on a "workstation" motherboard - high-end ryzens don't have an igpu, not to mention a smartphone connector).

DO NOT LISTEN TO ANYONE

Do not use emulators or VMs to play video games

Get a "SATA switcher" from Amazon or eBay or make your own. You'll use the SATA switcher to dual boot.

Why do you need a SATA switcher? Microsoft fucks with the bootloader on every drive it can detect. You'll only get problems if you let Windows detect your other OS drives.

You will also be safe from the spread of *ware.
If you plan to install pirated games and shit on your Windows boot drive, it will be nearly impossible for the Chink, Russian or Brazilian to infect your other boot drive.

Godspeed.
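
On a UEFI system there's also a softer fix than a SATA switcher: if a Windows update merely reshuffles the EFI boot order, it can be put back with efibootmgr. The entry numbers below are examples (read yours off `efibootmgr` first), and the guard makes this a no-op on non-EFI machines:

```shell
# Restore GRUB to the front of the EFI boot order after Windows
# reorders it. Entry numbers (0001 = GRUB, 0000 = Windows Boot
# Manager) are placeholders for this sketch.
if command -v efibootmgr >/dev/null 2>&1 && [ -d /sys/firmware/efi ]; then
    efibootmgr                          # list entries and current order
    efibootmgr --bootorder 0001,0000    # put GRUB first again (needs root)
else
    echo "no EFI system detected, nothing to do"
fi
```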

By better I meant compatible. Mine is an MSI B350M Mortar, the BIOS is 1+ years old.

>What is the best X570 mobo choice? Preferably great compatibility with gpu-passthrough and no non-free blobs required
don't be lazy, do your own research. there is a whole fucking subreddit where you could look for info and/or ask questions

I managed to set it up, and GPU performance was very good, but the system had this stutter every 10-30 seconds both video and audio, making it unusable as a dual-boot replacement. (DPC latency was high.)

I feel like 2 PCs is kind of shitty, because you can't use your fast hardware in whatever OS you like for whatever purpose you like. I almost feel like a simple dual-boot would be the best option, as it's easy, reliable and all your hardware (peripherals included) is always available to you in any OS. The disadvantage is the fact that you need to reboot, but rebooting is pretty quick nowadays. Some games will also run fine on Linux (natively or via Proton), so you could play those without bothering to reboot in the first place.

Couldn't you just encrypt your linux drive sans the boot partition? No additional hardware required.

If the Windows install is in a VM, you can keep your bootloader safe and even easily restore your VM to a previous snapshot should you own yourself with malware.

Attached: virt-man.png (1920x1080, 209K)

Thank you Amazon Customer Service, you never cease to amaze me. You entirely missed the point, which I stated multiple times in this thread. Make sure to enjoy your 2 rupees for posting in this thread.

>don't be lazy, do your own research
Sure thing. Any idea where one can get datasheets for motherboards (there are none on the asus website (yet?)) and if there's a list of hardware that "just works" on linux libre? Also, may I have the subreddit name? I'll check it out.

>I feel like 2 PCs is kind of shitty, because you can't use your fast hardware in whatever OS you like for whatever purpose you like.
My thoughts exactly.

>I almost feel like a simple dual-boot would be the best option
it wouldn't, I use both at the same time.

>Some games will also run fine on Linux (natively or via Proton), so you could play those without bothering to reboot in the first place.
My internal autist won't stop REEEEEing about free software if I did that.

Also this, but I don't usually do anything of value on windows, so just nuking the hard drive is also a viable option. All my programming projects that are windows-only are on git anyways.

Win10 tends to fuck up the bootloader and partitions; my friend dual-boots and like every 2 months his setup is ruined thanks to Windows 10 ™ Update ®

and they fail
passthrough works fine with nvidia gpus if you apply the workarounds

>datasheets
you absolute fucking RETARD. LEARN TO USE A SEARCH ENGINE AND GET SOME COMMON SENSE.
you just need to select a few mobos and search for experiences/ask people if they use them in their GPU passthrough setup. that's all. IT CAN'T BE THAT DIFFICULT.
r/VFIO

>it wouldn't, I use both at the same time.
Yeah but even if you use GPU passthrough you still don't get the graphics card in Linux, so if you want to use something which is GPU accelerated you're fucked, if you want to use a fancy high-quality upscaler to watch a video you're fucked and so on. With GPU passthrough you don't just have to relegate gaming to the Windows VM, you have to relegate all tasks which use the GPU to the VM. That's pretty shit.

Of course, you have "solutions" to this, such as buying multiple high-performance graphics cards to have one for each OS or constantly changing your system configuration (and monitor cables) to assign it back to your Linux host when you need it. One of these solutions is expensive and the other sounds like a huge PITA in its own right.

Wait, you don't get a list of all integrated circuits and check if they are fully supported by free drivers?
>you absolute fucking RETARD
>GET SOME COMMON SENSE

>you still don't get the graphics card in Linux
But I do. I have 3 graphics cards running in my system, see pic related and this post

Attached: 1547439125869.png (1013x504, 112K)

How do I get Aqua in my neofetch?

neofetch --w3m Pictures/neofetch/aqua.png --size 36% --yoffset 10

>oh boy, it has a lot of problems. It crashes a lot (like, at least once a week), sometimes graphics drivers won't load
I don't have these problems with my AMD GPU passthrough. Everything works flawlessly and games run with native performance.

I did notice at the beginning that I would get occasional crashes independent of load, but it turns out that I just hadn't updated my CPU's microcode and there was a virtualization bug in it that was fixed by the update. Ever since then I've never gotten a crash.

>I have 3 graphics cards running in my system
In that case I see you already took the expensive way out, so yeah, shouldn't be an issue for you.

What CPU? AMD or Intel?

AMD. R5 2600 CPU with an RX 570 GPU. I use the VM for playing VR games.

Not him, but mind sharing your motherboard too? Already have a 570 and planning to get a Zen 2 CPU next month.

Mobo is an ASRock AB350 Pro4

But also I fucked up slightly: I got an RX 580, not a 570.

So even ASRock should be fine.
I'll keep that in mind.

Use wine/proton for anything it supports and a VM for anything else.

I've been wanting to try the passthrough meme, but I'm just wondering if using an audio interface (Scarlett Solo) through USB passthrough works just as fast, because I use Windows to make music and I don't want any latency. Also, I've heard something about people using some sort of PCI-E device with USB ports for Passthrough builds, is it worth the investment?

Yeah, usb passthrough works just fine. There shouldn't be any noticeable latency.
>I've heard something about people using some sort of PCI-E device with USB ports for Passthrough builds, is it worth the investment?
I have a $10 usb 3.0 pcie controller passed to the vm and entirely disabled (via driver unbind on boot) on the host.
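
The unbind itself is just a sysfs write; a sketch of such a boot script, where the PCI address is a placeholder - find your controller's with `lspci | grep -i usb`:

```shell
# Detach an example PCIe USB controller from its host driver so only
# the VM drives it (the address 0000:03:00.0 is a placeholder).
DEV=0000:03:00.0
if [ -e "/sys/bus/pci/devices/$DEV/driver" ]; then
    # needs root; the controller disappears from the host immediately
    echo "$DEV" > "/sys/bus/pci/devices/$DEV/driver/unbind"
else
    echo "$DEV not bound to any driver (or not present), skipping"
fi
```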

The best solution:
Laptop for linux
Desktop for vidya

Been thinking about it, but then I re-compiled firefox on my laptop and changed my mind. Shit took almost an entire day. Also, hot as hell and loud as hell. I like me computers quiet.

Most distros have a precompiled package for Firefox

Custom build script, plus additional optimizations, since the web browser is what I use like 24/7. Also, there are a few projects like mpv where I use git releases rather than stable binaries. I was actually thinking about switching from arch to some source-based distro, but I dislike the idea of not having the AUR. Shit's super handy. I guess I'll end up on Parabola if I can get hardware that works on linux-libre.

>nvidia
>intel
Found your problem: nvidia gimps their hardware on purpose for virtualization, and intel by mistake with their vulnerabilities.
You can always dual boot, or do like me: desktop for vidya, laptop for work/linux, both of them share a dual screen setup and KB/M with a USB switch, and the displays use their own input switch.

cross compile.

>KB/M with an USB switch
I have rebound my Fn+F1, F2, etc. to F21, F22, etc. and use synergy rather than a kvm usb switch. What's so great about that is that I made some global shortcuts for those keys (basically, mapped them back to their Fn functions such as mute, vol up, etc.). After all this input shenanigans I can use my "media" keys on both systems, and the Fn keys go to the host because windows doesn't understand what F22 is.

>cross compile.
Yeah, I could. Still, I don't need a laptop, barely use the old one I already have. I have my pc running almost 24/7 and I see no reason not to get at least a decent APU with a good air cooler and passive PSU.

Save yourself the pain - get a second PC