AUDIO FORMAT HZ

Audio format.
How does it work?
Higher is better right?
Yes I know this has been answered before but it wasn't really explained well

Any help would be greatly appreciated

Attached: audio format default.png (333x166, 6K)

It's like megapixels. The more you have the better it is.

16bit 44.1kHz. Anything else is meme.

i cant tell if im getting memed on but im willing to try anything at this point

I think it really doesn't matter what you select; it's all going to get resampled by your sound card to whatever it outputs, and that's probably either 44.1kHz or 48kHz. I seriously doubt your sound card supports 96kHz or 192kHz

look up the Nyquist-Shannon sampling theorem. you do not need more than 44.1kHz

I've noticed a very tiny handful of synth-heavy albums actually have a slightly noticeable difference between 44.1kHz and 96kHz, at least from what I can hear. Other than that, 44.1kHz 16bit is virtually always sufficient.
The primary reason for the 40-something kHz number comes from the Nyquist criterion. In essence, your sample rate has to be at least 2x the highest frequency you want to be able to reproduce, and the highest audible frequency is typically in the range of 20-22kHz, depending on the person.
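The 2x rule can be checked numerically. A sketch using numpy (the 30 kHz tone and 44.1 kHz rate are just example values): a tone above half the sample rate doesn't vanish, it aliases down to a false lower frequency, which is why ADCs low-pass the input before sampling.

```python
import numpy as np

fs = 44_100                 # sample rate (Hz)
n = fs                      # one second of samples, 1 Hz FFT resolution
t = np.arange(n) / fs

f_in = 30_000               # tone above the 22.05 kHz Nyquist limit
samples = np.sin(2 * np.pi * f_in * t)

# Strongest frequency actually present in the sampled signal:
spectrum = np.abs(np.fft.rfft(samples))
f_peak = np.argmax(spectrum) * fs / n

print(f_peak)  # 14100.0 -- the 30 kHz tone folded down to fs - f_in
```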

If you're an adult you can't hear anything beyond 16 or 17 kHz (try an online tone generator and check for yourself).
Per the Nyquist theorem you need a sampling rate that's twice that to reproduce a signal including this frequency, i.e. ~ 34 kHz.

CD quality is 44.1 kHz. There is literally no point going above that because you cannot possibly hear any difference.

The only reason you would want to go above is in audio *processing* systems where you can do up-sampling to make it easier to implement various processing (filters, etc.) but nothing you should be concerned about.

i wanna believe this but im noticing a difference whenever i just pick a higher hz

is this similar to the "you can see above 60 fps" meme?

>44.1kHz and 96kHz, at least from what I can hear
I very much doubt it.
You most likely can't hear shit beyond 18 kHz (and I'm being generous), and generally music is produced _way_ under that.
Neither your ears nor your audio equipment are linear, and for you to hear anything beyond 10 kHz the rest would have to be incredibly loud.

>i wanna believe this but im noticing a difference whenever i just pick a higher hz
Try to do a blind experiment. Find a way to randomly play files without you actually knowing the sampling rate. Note which ones you think are highest. Then check and figure out if that has predictive power.

Otherwise it's just some placebo effect.
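The blind-test idea can be scripted. A rough sketch (the file names are hypothetical and the playback step is stubbed out): shuffle the trials so you never know which version is playing, log the guesses, and only compare against the truth at the end.

```python
import random

# Hypothetical file names: the same track rendered at two sample rates.
trials = [("track_44.wav", "44.1k"), ("track_96.wav", "96k")] * 5
random.shuffle(trials)

results = []
for path, truth in trials:
    # A real test would play `path` here (e.g. via your audio player in a
    # subprocess) without showing the name, then collect your answer:
    guess = "96k"  # placeholder for: input("44.1k or 96k? ")
    results.append(guess == truth)

correct = sum(results)
print(f"{correct}/{len(trials)} correct")
```

With real playback and `input()` hooked in, scoring around 5/10 means you're guessing; only a consistent 9-10/10 suggests a real audible difference.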

You need to double the sample rate. You can't sample 18kHz at 18kHz. Mind-blowing that you post on Jow Forums.

Where in my post did I say that?

You can't hear anything beyond 18 kHz (and again I'm being generous), translating to a Nyquist frequency of 36 kHz, so how would you hear a difference between 44 and 96 kHz? You can't.

"Yeah mate don't worry you can't hear a difference. *I* can on certain albums because I have superior ears but for normies like you don't have to bother"

i didnt glean that from his post, what i read was more like "no human on this planet can tell the difference" which is even more absurd lol

How would this be absurd?
Some humans have better hearing than others, but it stops at some point.
Just like there are taller humans, but we have not found anyone to be 4 meters tall.
So there is definitely a point at which no human can hear the difference.

Bit depth actually matters for increased dynamic range

why the fuck would anyone even want to hear anything over 20kHz? it's just brain damaging screeches from there up.

The bit count is the "resolution" of a given audio sample. More bits means a more accurate representation of the original audio as it was playing at that exact moment.
The sample rate is the total number of samples taken within a single second. As audio is very much a temporal thing, the sample rate is very important to accurately represent the audio as it was heard at time of recording.

HOWEVER: If the recording was done at a lower setting, or if the audio ever went through downsampling before you got your hands on it, resampling it to a higher bit depth and sample rate will yield you absolutely nothing.

It's also important to note that you get no benefit from a higher bit depth and sampling rate unless you have equipment that supports those formats.

TL;DR: The higher settings are important for audio professionals who need to preserve the quality of recordings until they do the final master. You should know if this applies to you.
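The "bits = resolution" point can be quantified: quantizing a full-scale sine to n bits gives a signal-to-noise ratio near the textbook 6.02n + 1.76 dB. A numpy sketch (tone frequency and lengths are arbitrary):

```python
import numpy as np

def quantized_snr_db(bits: int, n: int = 100_000) -> float:
    """Quantize a full-scale 1 kHz sine to `bits` and return the SNR in dB."""
    t = np.arange(n)
    signal = np.sin(2 * np.pi * 1_000 * t / 44_100)
    levels = 2 ** (bits - 1) - 1            # e.g. 32767 for 16 bits
    quantized = np.round(signal * levels) / levels
    noise = quantized - signal
    return 10 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

for bits in (8, 16, 24):
    print(bits, round(quantized_snr_db(bits), 1))
# Comes out near 6.02*bits + 1.76: roughly 50 dB at 8 bits, 98 at 16.
```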

True but 16 bits are enough.

The reason we see those high specs (24 or even 32 bits, 96, 192 kHz etc.) is because it can matter when doing digital audio processing (i.e. it's safer to feed 192 kHz / 32 bits to some plugin because even if it's slightly buggy you won't really lose important parts of the signal). But the final render should be done to CD spec, i.e. 44.1 kHz and 16 bits (keeping in mind that even that is already over-engineered).

>You most likely can't hear shit beyond 18 kHz (and I'm being generous), and generally music is produced _way_ under that.
>Neither your ears nor your audio equipment are linear, and for you to hear anything beyond 10 kHz the rest would have to be incredibly loud.

This isn't about frequency, it's about sample rates. They use the same units but work completely differently. If you downsample a song to 10kHz you'll absolutely be able to hear the difference. It's like turning the frame rate of a video down to 6, basically. As long as there ain't much going on it'll be fine, but once you have several instruments playing they'll just sound like complete garbage.

>If you downsample a song to 10kHz you'll absolutely be able to hear the difference.
Are you fucking dense?
All I'm saying is the guy mentioning a difference between 44.1 kHz and 96 kHz is full of shit.
I know the difference between a sampling rate and the signal frequency, thank you.

For everyone who doubts the validity of the prior claims, here's a good article by someone actually qualified, explaining why anything beyond 44.1 kHz and 16 bits is a meme:

people.xiph.org/~xiphmont/demo/neil-young.html

You're just changing the format rate and bit depth of the DAC; any sound you have will probably sound just the same (maybe a tad different because of the resampling), plus these formats aren't meant to be used for listening and listening comparisons, they're meant for mastering and mixing.
This.
Higher frequencies sound more open, you're not supposed to hear tones of >20kHz while listening to the music.

>Higher frequencies sound more open
you mean crisp brainlet

ok but the second i think a song doesnt sound the same im shooting this back up to 24/192 :P

Attached: audio done.png (363x62, 3K)

These guys are right.

is the short way to put it.

16b/44.1k is enough for perfect reproduction of an audio signal. Higher bit depths and sample rates are useful when mastering/editing sounds because they (basically) give you a higher dynamic range and more room to "bend" the audio before it starts distorting or sounding like shit.

Same thing with video - good cameras shoot at 100+Mbps to give you room to work in post without adding a ton of artifacts to the output, not so you can deliver a 100Mbps video.

The post I quoted said

>You most likely can't hear shit beyond 18 kHz (and I'm being generous), and generally music is produced _way_ under that.

When applied to sample rates, all of this is literally factually incorrect. Music is sampled at way more than 18kHz. 96kHz at 24-bit would be acceptable for a master recording at a professional studio. Anything below 18kHz would be considered absolute garbage. You'd literally get fired from a recording studio if you set the recording equipment to record at that sample rate unless the recording artist literally told you to for some reason.

Either you're confused as to which fucking post I was responding to and just read my response or you're literally beyond ignorant of how fucking sample rates work. Like literally just download audacity, downsample literally any song sampled at 44.1kHz to 10kHz and if you can't tell the difference you're most likely severely hearing impaired

Depends what for. It comes down to each individual, but 22kHz is a good average upper limit. 48kHz is however used due to it being an industry standard for movies.

24bit does have more advantages compared to 16bit, such as a higher signal to noise ratio.

>im shooting this back up to 24/192
Do whatever you want but please read the article I posted earlier (disregard the stupid URL, it's not actually about Neil Young but precisely about what we are talking about: why 24/192 is a complete meme)

people.xiph.org/~xiphmont/demo/neil-young.html

Whatever amigo.

>I've noticed a very tiny handful of synth-heavy albums actually have a slightly noticeable difference between 44.1kHz and 96kHz

This most likely happens due to low quality resampling. It's why it was recommended for ages to use a HQ resampler in foobar2k in your output chain. Granted, that was for those times back when Sound Blasters were fixed at 48/96khz in hardware and had a really broken internal resampling engine. Then for the X-Fi they added an extremely powerful floating point chip just to handle that.

>music is produced
>music is sampled

are you retarded?
You won't have any music, the actual signal, the pitch, beyond 10 kHz. Generally it's actually lower than that (an orchestra's fundamentals do not go beyond 5 or 6 kHz).

Pick any random song you like and see for yourself (I'm talking about the actual signal, not the sampling rate).

>(try an online tone generator and check for yourself).

Or in foobar2000, press ctrl+u and add
sweep://20-20000,30 for a 20-20000Hz 30 sec long tone sweep.

You can also use tone://440,10 for a 440Hz 10 sec tone. Or any arbitrary numbers for the frequency and duration you want.

I have a bunch of tones set on a playlist for 20/40/60/80/100/120/140 Hz, I use it to calibrate the bass redirection on my subwoofer (the front speakers and the sub have different frequency cutoffs, if I cut off too early, the sub will get frequencies it cant play back and the front speakers will not get freqs they could still play back, resulting in quite a drop in mid level bass).
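If you're not a foobar2000 user, the same calibration tones can be generated as WAV files with nothing but the Python standard library. A sketch (file names, duration, and the frequency list are arbitrary examples):

```python
import math
import struct
import wave

def write_tone(path: str, freq: float, seconds: float = 3.0,
               rate: int = 44_100, amplitude: float = 0.5) -> None:
    """Write a 16-bit mono sine tone as a WAV file."""
    n = int(rate * seconds)
    frames = b"".join(
        struct.pack("<h", int(amplitude * 32_767 *
                              math.sin(2 * math.pi * freq * i / rate)))
        for i in range(n)
    )
    with wave.open(path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16 bits per sample
        w.setframerate(rate)
        w.writeframes(frames)

# Subwoofer calibration set like the one described above.
for f in (20, 40, 60, 80, 100, 120, 140):
    write_tone(f"tone_{f}hz.wav", f)
```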

oh yeah i read it thats why i switched it to 44.1

Literally a square wave, nigger

>True but 16 bits are enough.

24 bit allows for a lower noise floor and higher dynamic range. With proper mastering 16 would be enough, sure, but albums haven't had proper mastering since the loudness war of the 90s (you can blame bands like Oasis for that).
With 24bit you have less clipping on those stupidly loud albums. Of course, only if you listen to 24bit masters of songs that were mixed at 24bit precision, not something mixed at 16bit in the first place, or CD audio converted to 24bit.

If you play back cd audio at 24bit, the only advantage you get is more precision for potential DSP effects like equalizers.

>It's like turning down the frame rate of a video down to 6 basically.

More like turning down the colour count to 64, and then videos would look like something on a Sega CD.

a speaker makes sound basically by making a "click" at very high rates, higher the rate higher the pitch. more hz = more resolution

Nigga are you retarded. You go into a thread about fucking sampling rates and bit depth and start talking about fucking audio frequencies. Hardly anything you say applies to whether you can hear the difference between a sample rate of 44.1kHz and 96kHz.

>You won't have any music, the actual signal, the pitch, beyond 10 kHz.

One of the Beatles albums had a 16khz sine in the outer groove of the vinyl, making it repeat constantly. Most people listening to it could not hear it, but it made all dogs freak the fuck out in the vicinity.

And then there's electronic music and chiptunes.

Attached: nigger waveform.png (750x442, 71K)

>Hardly anything you say applies to whether you can hear the difference between a sample rate of 44.1kHz and 96kHz.
Kek. They are pretty obviously related (the sampling rate tells you the highest frequency you can represent without distortion).
My point is that you can't hear any difference. 44.1 kHz SR can represent a 22.05 kHz signal, 96 kHz SR can represent a 48 kHz signal.
Since you can't hear shit beyond 16 kHz, and since in general music happens under 10 kHz anyway, I question the user who mentioned hearing a difference between a 44.1 kHz and a 96 kHz sampling rate. Unless he has a crappy setup where playing at 96 kHz actually introduces distortion at lower frequencies, in which case there is nothing to brag about.

another thing to know is that a single musical note is actually a combination of multiple frequencies (harmonics), each of those in turn carrying its own overtones. if you take something like speech, it's an incredibly complex multilayered thing.

It's going to produce a bunch of harmonics, and the ones above something like 12 kHz won't matter since you'll barely perceive them.

Unless you happen to be a golden ear who listens to fucking gameboy chiptune music.

>you can't hear shit beyond 16 kHz

I can hear up to 17-18khz with decent headphones. Only 15-16 from speakers.

Also, going up to 96kHz introduces more aliasing, which can make lower frequencies sound smoother, even when played back on speakers that can't reach that high.

Note that most speakers can only do 20-20, i.e. 20 Hz to 20 kHz, so not even maxing out CD audio. You need specialist equipment to get beyond that, and some kind of amp that can drive them with enough power to give the higher frequencies enough volume.

So higher frequency is normally bullshit, however on a PC you have to deal with 16 layers of abstraction doing their own resampling, in which case having more headroom actually does make a difference. Or you can just use ASIO playback to get around all that, but then you are still limited by what your DAC can do (for example Sound Blasters are limited to 48/96/192kHz and internally resample 44.1kHz, which can make things sound shit).
That, and 24-bit depth can help a lot to get less audio clipping and higher quality DSP or equalizers.

24/96kHz playback is not snake oil, it's just not as big of a jump as the numbers imply. You don't get three times the audio quality, you just get closer to how it should normally sound on a dedicated audio setup.

>I can hear up to 17-18khz with decent headphones. Only 15-16 from speakers.
Well it depends how old you are. It goes down with age.
I'm 34 and I can't perceive anything above 16.5 kHz in a non-lab setup (i.e. at home with headphones). In a lab it would probably be a bit higher since outside noise is filtered out.
And of course it varies a bit from person to person.

When it comes to music, that thing you perceive at 17 kHz when putting the volume to the max would be completely drowned out by the rest anyway. Which is why most music happens at lower frequencies (there's also the fact that at higher frequencies we don't distinguish pitches as easily; it's non-linear).

Not OP, but what pulseaudio config should I use with an external dac?

This guy is seriously trying to say that the audio frequencies actually get converted into a smooth waveform, but any sort of "smoothing" would create loss in fidelity. Furthermore digital audio is 100% stair-steppy, and will come out that way through your speakers.


A single second of an A440 tone gets chopped into about 100 samples per cycle of the wave. This does indeed create loss in fidelity but it's very minor. Everyone in this thread is talking about the audible range of sound and the Nyquist theorem, but it's not just that simple


See, the thing is, all sound and music are just collections of different frequencies put together. A guitar that strums an A440 is indeed oscillating at 440Hz, but the reason why it sounds like a guitar is because it's actually full of different harmonic and inharmonic frequencies that change over time. When an analog signal gets converted into digital, you do lose some of this information as the waveform is actually way more data dense than just notes or noise.


Truly though the gains in resolution are minimal comparing 96 to 44, but they are still objectively real

Attached: IMG_7567.png (462x132, 3K)

It will not come out of speakers "stair steppy" no matter what, that would require infinite frequency response.

You fucking retards. The sampling rate of sound files is NOT the same as the frequency of sound waves. 16 bit, 44,100 Hz means the sound file contains 44,100 samples per second, each in turn containing 16 bits of information. None of this has anything to do with what frequency the sounds can reach. To make an analogy to pictures, the sampling rate is the resolution and the bit depth is ... well, the color depth.

16 bit 44.1 kHz is a commonly agreed industry standard, including regular digital audio and CD audio, and if you multiply 16 by 44,100 and the number of channels (usually 2) you'll get the bitrate of CD audio, roughly 1,411 kbit/s. MP3 adheres to the same depth and sampling frequency in the same way that JPEG can have the same color depth and resolution as a BMP, but MP3 and JPEG both throw away information they consider unnecessary while maintaining most of the quality; how much they throw away depends on the settings when they're converted. And FLAC works in a similar way to PNG: both compress the data without losing any information at all.
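The 1,411 kbit/s figure is just that multiplication spelled out (a quick sketch):

```python
sample_rate = 44_100   # samples per second
bit_depth = 16         # bits per sample
channels = 2           # stereo

bitrate = sample_rate * bit_depth * channels
print(bitrate)                        # 1411200 bits/s, i.e. ~1411 kbit/s

# A 320 kbit/s MP3 keeps under a quarter of the raw CD bit rate.
print(round(320_000 / bitrate, 2))    # 0.23
```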

I use 192kHz universally, as a musician and recording artist.

The difference to 44.1khz is audible but not because of how humans hear per se.

It's about how if you allow a higher sampling rate, your computer gets access to the bullshit sounds that are hiding in the hidden 7th dimension. When you're using effects like reverb that unlock potential chakras and allow synergy this is what happens: the several tracks you have layer upon each other, including the output of the effects you use to multiply chakra. When the final mix is being produced, the tracks are overlapped onto each other and though you can't hear beyond a certain range as a human apparatus, the overlapping high-resolution hidden frequencies produce an audible effect by virtue of them synergizing.

So, when I produce a track, if I want to use MP3 format - I don't export as MP3 because the synergizing chakras are lost in the low resolution by default. You first produce a WAV which has the synergy evidenced upon the universe, you can hear it. And then you transform the WAV which already has present in it the audible overlaps into MP3. It becomes lower resolution - but the audible component remains.

AMA

This user actually knows what he's talking about.

pulseaudio --kill

Speakers aren't digital devices

Is there any disadvantage to setting my audio output to 24 bit just as a casual music-listener?

None that I ever heard of or experienced.

You cant see it but im heavily rolling my eyes at you.


Yeah no fucking shit, you can also go ahead and say that because of variables like frequency response it's way more important to focus on speaker quality rather than sound quality, but that's not what anyone is talking about in this thread.


When the AC signal hits the coil in a speaker it takes time to move. But it's still a stepped waveform being sent to the speaker, which in turn causes loss of sound information. I need you to realize that sound waves can have any sort of shape to them. A digital soundwave is a recreation of that original sound made out of square waves. And a 440 square wave has a way, way different sound than a 440 sine. Square waves contain every odd harmonic of the root note. If you convert a sine wave into digital you literally wind up adding harmonics to it that didn't exist previously, AKA loss


Sure (you) do buddy

>None of this has anything to do with what frequency the sounds can reach.
lol

It's not stored as a square wave. It's stored as samples, and Nyquist proves that you can perfectly represent a sound by using a sampling rate double the maximum frequency.
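That reconstruction claim can be tested numerically: Whittaker-Shannon (sinc) interpolation rebuilds a band-limited tone between its samples almost exactly. A numpy sketch (the residual error comes only from truncating the infinite sinc sum to a finite window):

```python
import numpy as np

fs = 8_000                      # sample rate (Hz)
f = 1_000                       # tone well below the 4 kHz Nyquist limit
n = 2_000                       # a quarter second of samples
k = np.arange(n)
samples = np.sin(2 * np.pi * f * k / fs)

# Rebuild the wave at instants *between* the samples, away from the
# edges of the finite window (truncation is the only error source).
t_between = (np.arange(500, 1_500) + 0.5) / fs
weights = np.sinc(t_between[:, None] * fs - k[None, :])
reconstructed = weights @ samples

error = np.max(np.abs(reconstructed - np.sin(2 * np.pi * f * t_between)))
print(error)  # small; with an infinite window it would be exactly zero
```

So the samples are not a staircase: for any signal below the Nyquist limit, the smooth curve between them is fully determined.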

What do you think a sample is? what do you think digital is? Pic related, this is what all data on every digital computer looks like: on your hard drive, in memory, while being processed, and inside every single audio file. Samples are just measured analog values stored as digital data. a 44.1khz sampled file is literally a set of numbers plotted on a number line 44100 times.

So again, an A440Hz musical note, at 44100, is sampled about 100 times per cycle. This means you only get about 100 points on the line for each cycle of the wave. At 96kHz you get about 218 samples per cycle, also known as higher resolution. To put that in perspective, the real analog signal is continuous: it isn't made of a finite number of points at all.

Maybe im not being clear enough. Nyquist is a proof that you only need 2x the sample rate to capture all audible frequencies, but sound waves are way more complex than just frequency and are made up of many frequencies stacked together.

So when your DAC polls the analog signal at 44.1kHz, it's reading a very cut-down version of that signal, and converting it into a stair-stepped, digitally graphed version of the waveform, therefore slightly changing the harmonic content in the wave. Yeah you can capture a 12kHz tone, but you can not capture it perfectly.

Attached: digital data.gif (774x431, 23K)

why do people keep posting this bullshit?

Attached: nomnom.gif (480x270, 1.1M)

>I seriously doubt your sound card supports 96kHz or 192kHz
you're on Jow Forums, a large number of people here are using USB external DACs which likely support, at MINIMUM, 24-bit 192kHz.

Hell, there are people who have 32-bit 384khz, or DSD 1-bit 2.8224MHz.

Really? You faggots keep saying the onboard has caught up with everything else. Pretty sure my soundcards have supported 96khz since... the mid 90s?

okay faggots you've made me fall for the bait so far so now im going all in

Here is me playing a three note chord with a bass note, the tone being a square wave along with a saw wave pitched an octave higher, on an analog synthesizer recorded at 44.1, and that exact same tone and chord recorded at 96 through my Zoom L12.
If you put on your fucking glasses you can very clearly see that on the left side the 96kHz recording is smoother than the 44.1kHz above it. And on the right side you can very easily see that not only does the 96khz recording have 2x as many samples, but it also has to be displayed on a slower time scale because its literally that much more detailed. And you can really see how much smoother it is.

Smoothness is what reality is, 44.1khz is close enough im not debating that, but you retards who think you know anything about analog sampling need to brush up because theres a fucking lot more to signals than frequency ranges.

Attached: what is sampling.png (2444x941, 140K)

>needing anything more than a 320kbps mp3

uh huh

Should I just use this and move on?

Attached: Capture.png (257x119, 2K)

24bit/96khz
>This
what a pathetic bunch of fucking retards.
>Higher frequencies sound more open, you're not supposed to hear tones of >20kHz while listening to the music.
shut the fuck up, retard. you know absolutely nothing.
>is this similar to the "you can see above 60 fps" meme?
no.
> selects cd format
what a fucking retard.
>16b/44.1k is enough for perfect reproduction of an audio signal.
just isn't, stupid faggot.

>Smoothness is what reality is, 44.1khz is close enough im not debating that, but you retards who think you know anything about analog sampling need to brush up because theres a fucking lot more to signals than frequency ranges.
what did you expect from a board filled with computer illiterates that know nothing about audio?
what a retard

>the audiophile is mad

>just isn't, stupid faggot.
IT L I T E R A L L Y IS YOU IMBECILE
OVER AND OVER AGAIN IT HAS BEEN PROVEN
THE ONLY DIFFERENCE YOU WILL E V E R HEAR BETWEEN A 16/44.1 FILE AND ANYTHING HIGHER IS IF IT WAS SOURCED FROM A DIFFERENT MASTER OR IF IT'S A DSD TRANSFER MADE BY SOMEONE WHO DIDN'T KNOW WHAT THE FUCK THEY WERE DOING AND DIDN'T IMPLEMENT A LOW PASS FILTER

I HATE YOU

Attached: 123.jpg (400x400, 17K)

>Sampling rates over 48kHz are irrelevant to high fidelity audio data

Let's say I have a bunch of 24/96 (and some 24/88.2) files, several of which were always 24/96 but some of which were downsampled from 24/192 (and the 88.2k files came from 176.4k files etc.) using the SoX resampler in foobar (no aliasing, 95%, "best" option).
Now let's say I need my entire library to be redbook, i.e. 16/44.1, using the SoX resampler in foobar once more. Would I be doing damage to the files that would make them significantly or even noticeably worse than an equivalent CD master or rip? I really don't want to have to track down CD rips for all of these because I just recently got done tagging all of the hi-res versions.
>inb4 why are you doing this
space

Attached: you see this shit.gif (346x194, 1.84M)

>SoX
You are completely fine. In fact if you were as autistic as you imply about finding hi-res versions of your music, you probably have better masters than the CD versions. Even if they're from the same master (or especially if they're from the same master), I highly doubt ANYONE would be able to tell the difference, even under an analyzer. Two "generations" of downsampling from a hi-res file (let alone just one) with a high quality resampler like SoX is not going to do basically any harm, and is incredibly common in the music industry as well, even on "fantastic" sounding releases.

I'd say: 16bit/44.1kHz if all you do is consume audio
If you produce it: 24bit/48kHz
However, there is little downside from going higher in terms of performance. It is pure placebo though.

>24bits
This is the SNR (the difference between the quietest and the loudest sound that can be represented), about 6 dB per bit.
24 bits would be ~144dB SNR.
>48000Hz
This sets the maximum pitch: the highest frequency that can be represented is half the sample rate, in this case 24000Hz. 20kHz is the upper limit of human hearing so I don't know why 96kHz and 192kHz are a thing (I guess it's mostly headroom for processing?). 48kHz is more than enough.

if you produce, 96khz and 192khz are beneficial if you do any sort of manipulation and want to do so with as little damage as possible (like spinning a record at a different speed)

To put it together: The volume of the waveform at a certain point is a 24 bit number, and 48000 of these are stored over the course of a second to make a waveform.
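The roughly 6 dB per bit figure is just 20·log10 of the number of quantization levels. A quick sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Ratio of the largest representable amplitude to one step, in dB."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))   # 96.3 dB: CD audio
print(round(dynamic_range_db(24), 1))   # 144.5 dB: beyond any playback chain
```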

More like 16bit 22 khz.

Human ears cannot hear below 20 Hz or above 20 kHz. That's why almost all speakers/headphones are designed for that range.

Human ears can hear up to ~22kHz at best. By the Nyquist-Shannon sampling theorem, for a band-limited signal (i.e. one whose Fourier transform is nonzero only on a finite interval of frequencies), we can reconstruct the signal exactly if we sample it at 2x the highest frequency. Hence, since we can hear at most around 22kHz, sampling at around 44kHz is enough to reconstruct a signal perfectly as far as human perception goes. Thus, any sampling rate above this is snake oil.

Yeah, but the maximum frequency is half the sample rate, so with 44.1kHz you can actually only reproduce sound up to 22.05kHz (if you want to know more, google the Nyquist-Shannon sampling theorem).
Try setting it below that and you'll notice it getting worse.

Waveforms turn into sawtooths the closer they get to the maximum. 48kHz seems pretty safe.

Absolute bullshit. NO analog system can create a step function like that, and ALL audio is converted to analog before playing it through a speaker.

Put simply, if you imagine a wave as an image, Hz is like horizontal resolution and the number of bits is like vertical resolution.

>16b/44.1k is enough for perfect reproduction of an audio signal.
>just isn't, stupid faggot.
You can't tell them apart, your ears are not sensitive enough.

I know you'd desperately like to feel special like all the other cork sniffers out there, but you aren't. Unless you're a 12 year old boy with extremely exceptional hearing, you've already lost enough audible frequency range that even 16b/44.1k is wasted on you.

Don't bother replying.

Attached: 1507162541646.jpg (200x200, 14K)

Here's how that'd look in Audacity. You physically can't produce a higher frequency because it takes two points to make the waveform.

Attached: 1551643207.png (1078x201, 15K)

most retarded post award

Anyone know why lmms uses 8 bits for sound?

Sample rate (Hz) is how often the analog audio signal has been captured.
Bit rate is how many separate points it can have in one sample.

Not him but does switching to OpenAL output achieve something similar to ASIO or WASAPI in terms of bypassing the PC abstraction layer?

Those are just approximate standards we created. Also mp3 cuts off at those ranges to save space so the case could be that it was concluded those are the ranges of human hearing because almost no one listens to music rendered outside those ranges.

Imagine knowing a few technical terms but being this fucking retarded

It's called bit depth. Bit rate is the amount of space/bandwidth the entire signal takes up per second, calculated as sample frequency * bit depth * number of channels. It's constant for any uncompressed signal with the same combination of parameters, and as such is mostly talked about when dealing with compression formats.

But how does that look in the output device?
I grantee, not like that.

>Bit rate is how many separate points it can have in one sample
Not really. Bit rate is just a measure of data speed in playback.

Bit rate is calculated by multiplying the sample rate, by bit depth, and by the number of channels.

Bit depth is the number of bits, and therefore possible values, that a point on a digital waveform can take. More bits means more precision with regards to amplitude, therefore increasing the signal to noise ratio.

If you need help, imagine plotting a waveform on a graph where the scale only runs from -127 to +127. This is what an 8 bit depth is, and it's very limiting because analog signals can be very loud, and like discussed previously they contain a lot of information. So if your data point doesn't fit on the graph perfectly, it alters the waveform from its original state and results in a loss of information. This is why things like digital radio transmissions can still sound grainy even though they're higher quality than analog transmission.


side note: there are lots of bitcrusher effect pedals and plugins that let you dynamically change the bit depth and sample rate of a digital waveform, and it's really good fun to make wacky crazy aliased and distorted sounds and shit
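A minimal bitcrusher along those lines can be sketched in a few lines of numpy (the function name and parameters are made up for illustration; real pedals add filtering and dithering on top): round samples to fewer levels for the bit depth reduction, and hold each kept sample for the sample rate reduction.

```python
import numpy as np

def bitcrush(signal: np.ndarray, bits: int, hold: int) -> np.ndarray:
    """Crush floats in [-1, 1] to `bits` of depth, then hold each kept
    sample for `hold` samples to fake a lower sample rate."""
    levels = 2 ** (bits - 1) - 1
    crushed = np.round(signal * levels) / levels             # coarser amplitude
    held = np.repeat(crushed[::hold], hold)[: len(signal)]   # sample-and-hold
    return held

t = np.arange(44_100) / 44_100
tone = 0.8 * np.sin(2 * np.pi * 220 * t)
lofi = bitcrush(tone, bits=4, hold=8)   # 4-bit depth, ~5.5 kHz effective rate
```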

There is no need to use 96khz or 192khz unless you are recording audio.

>grantee

Yeah, any speaker will change the waveform that is sent to it depending on its frequency response. This happens to perfectly analog waveforms too and is also a cause of data loss, though the air is also a source of data loss as well. So when you send a digitally converted waveform to a speaker, you're getting even more data loss than you would with the same analog wave.

I love SoX. Something about the name and the fact that it's so good makes the whole thing give me a pleasant feeling.

Attached: 1526896161965.gif (300x300, 206K)

Wot, my onboard audio supports 192kHz at 24bits over S/PDIF

>And on the right side you can very easily see that not only does the 96khz recording have 2x as many samples, but it also has to be displayed on a slower time scale because its literally that much more detailed. And you can really see how much smoother it is.
And do you know what those extra data points on the graph physically represent? Literally just the upper limit of the frequencies you can extract from the signal. It doesn't matter at all how many zero crossings you capture as long as the highest frequency you sample at can capture the signal going from positive to negative or being stuck at 0. Literally that's all you do to find frequencies, for every given number of samples apart your samples' values oscillate positively/negatively or remain the same. So by say, doubling the number of samples, you now have double the number of frequencies you could find by iteratively looking for oscillations at each gap size. Even though this is done in discrete steps, any intermediate values are encoded into the frequency values over time. You're not looking for one frequency at each iteration, but the next window of iterations. So going from say, 22.05KHz to 44.1KHz (22050 samples/s to 44100 samples/s), you skip by 22051 samples at a time and any frequencies from 11.025Hz to 11.0255Hz get encoded in that new value. That value gets taken at every sample, so the change in that value over time is actually how every intermediate gets stored. That's also why bit depth directly translates to better sound quality and not frequency, because higher bit depth means you get more accuracy in storing those encoded frequencies. Learn something about signal theory before trying to spout retarded crap to dumb audiophiles on the internet.

What about spatial sound?