Why is audio on linux so fucking shit?

It's not. Compared to WDM, ALSA is many times better at handling audio processing, especially in realtime contexts. ASIO drivers provide comparable performance but require exclusive control over audio devices. Trying to set up a complex audio environment in Windows is a complete bitch; unless you want to write your own loopback driver, you're required to use ones provided by 3rd parties (VB-Audio) and they all offer shit quality playback or fuck up the signal. With PulseAudio you can create loopbacks or virtual sinks and sources using a simple pactl command.

This is blatant misinformation. None of the sound systems available for Linux rely on X11 in any way. ASIO is also a driver protocol, not an interface.
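For reference, the loopback/virtual-sink setup mentioned above really is a couple of commands. A minimal sketch, assuming a running PulseAudio (or PipeWire's pulse compatibility) daemon; the name `virtual_out` is just an example:

```shell
# Create a virtual output device (null sink); apps can play into it
# and other apps can record from its monitor source.
pactl load-module module-null-sink sink_name=virtual_out \
    sink_properties=device.description=VirtualOut

# Loop the virtual sink's monitor back into the real default output,
# so you still hear whatever is routed into it.
pactl load-module module-loopback source=virtual_out.monitor sink=@DEFAULT_SINK@
```

Each `load-module` prints a module index; undo with `pactl unload-module <index>`.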

freetards made their own sound system for no reason and it was shit of course

>None of the sound systems available for Linux rely on X11 in any way
That's what I'm saying, and that's a problem in terms of making stuff just work. There should be a single pathway for audio and video, not the split that we currently have.
>ASIO is also a driver protocol, not an interface
>driver protocol
>not an interface

pipewire.org

>no bluetooth without pulse
kek and there's people who say alsa is enough

werks on my machine.

Attached: CopyQ.CmXarI.png (172x172, 5K)

Driver protocols are not interfaces. ASIO4All would be an interface; ASIO itself isn't.

Why should video and audio be in a single pathway? That's not going to help make things "just work"; you're still going to have to implement a sound system to manage streams and handle processing. Take your pseudo-intellectual bullshit suggestions elsewhere.

It's good enough for the end user though

X11 is a display server; you shouldn't have it also be a sound system. That would turn the massive clusterfuck X11 already is into an even bigger one, which is completely counter-productive to making things "just werk".

>Why should video and audio in a single pathway
For the exact same reason all the stuff that is currently handled in X has a single pathway. Chances are, if your application outputs audio, it also displays a window, and takes user input from keyboard/mouse.
You could just configure your audio device as part of your X server, and then applications on that X server would have their audio routed to the correct place.
Remote X forwarding? Great, you don't have to do any additional work to get audio to forward. Currently, if you wanted to do this, you'd have to configure the audio forwarding separately, whereas on Windows with RDP, audio forwarding just works (it's not exactly the same as what I'm talking about, but the point is it actually functions with no extra configuration needed).
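The "configure the audio forwarding separately" part can be done with PulseAudio's own network protocol over an SSH reverse tunnel. A sketch, assuming PulseAudio running on the local machine and a reachable host called `remotehost` (`auth-anonymous=1` skips cookie auth for brevity; don't use it outside a trusted setup):

```shell
# On the LOCAL machine: accept native-protocol connections over TCP,
# but only from localhost (the SSH tunnel endpoint).
pactl load-module module-native-protocol-tcp port=4713 \
    auth-ip-acl=127.0.0.1 auth-anonymous=1

# Connect with a reverse tunnel so the remote's localhost:4713
# reaches the local PulseAudio daemon.
ssh -R 4713:localhost:4713 remotehost

# On the REMOTE machine: point any Pulse client at the tunnel.
PULSE_SERVER=tcp:localhost:4713 paplay /usr/share/sounds/alsa/Front_Center.wav
```

So the plumbing exists; the complaint stands in that it is a separate manual step rather than riding along with the X connection.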