What's a good programming language to create synthesizers in?

Attached: 7432803_800.jpg (800x449, 69K)

any of them
but if you have to ask this, you can't do it

No one is born able to program a synth, user

>implying deadmau5 didn't C++ his own music as he came out of the womb

C++ with Juce framework

Attached: Screenshot_20180927-114347_Instagram.jpg (1214x1306, 569K)

I've already worked with Python and C, so I'm not going in blind

Max/MSP

any language which allows you to program sound-related assets...so any.

Synthesizers are pretty 'mathematical' in function. I don't know too much programming but I'm sure once you have some set of algorithms it's only a matter of implementing them with soundwaves.

supercollider

Yeah I was thinking I could do it pretty easily in Python with NumPy and such.

This.

What level of C++ would I need to know going in to this?

6502 bytecode

>any of them
Alright, I'll get started making my own synth in brainfuck.

I've written audio generation code in python before; speed may be an issue if you're doing real-time synthesis with high polyphony. Shouldn't be so bad if you're using numpy, though.
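As a rough sketch of what the numpy approach looks like (function and constant names here are my own, not from any particular library):

```python
import numpy as np

SAMPLE_RATE = 44100  # CD-quality sample rate in Hz

def render_sine(freq, duration, sample_rate=SAMPLE_RATE, amp=0.5):
    """Render `duration` seconds of a sine wave as float32 samples in [-1, 1]."""
    n = np.arange(int(duration * sample_rate))
    return (amp * np.sin(2 * np.pi * freq * n / sample_rate)).astype(np.float32)

buf = render_sine(440.0, 1.0)  # one second of A4
```

The whole second renders in a single vectorized call instead of a per-sample Python loop, which is why numpy keeps Python viable for this kind of work.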

Unless you're an electrical engineer I seriously doubt it'll be anywhere as easy as you're imagining.

you could try out Max for Live in Ableton, or the closely related Pd (Pure Data). It's one of those visual languages, but it's tailored for exactly what you're asking. Might be the best tool for the job, but it's not a general-purpose language by any means.

Basically this. It isn't as easy as it sounds, user. And if you think you'll solve anything just by switching languages, you haven't matured as a programmer.

You don't happen to have it online, or remember the resources you used, do you?

I'm happy to do it in python, but there may be a better way to do it. I thought choosing a language was about choosing the right tool for the job?

While it's true that some programming languages are better suited than others for certain tasks, every second you spend learning another language or library could be spent on the actual task: doing it.

For example, I would do this in MATLAB and that's because I know that there are signal processing tools and I kinda know how you would do it (hint: I did my thesis in signal processing using MATLAB, go and figure). There are even midi interface libraries ready to use to convert keys on a midi keyboard into notes.

With that said, as someone here mentioned there are tools in Python (using numpy or similar) as well, but I have no clue how to do it.

Good luck user.

You could be a beginner and work through the tutorials and probably manage.

I don't have any of my code online, nor do I remember where I found info from.
My work on it was standalone programs, more as a proof of concept. Never looked into making VST plugins, etc.
For instance, I made a python one-liner that generates a song, just for the hell of it. Pretty neat.
I can give a little insight into it, such as how to convert from the MIDI note numbering scheme to actual pitch, but I'm not sure what else to say.

For the inevitable question of how to convert MIDI to frequency, here it is:
8.175798916*1.059463094**n

Where n is the note number (can be a decimal between notes for pitch bend). 8.17whatever is the frequency of the lowest midi note (C-1 iirc) and 1.05blah is the twelfth root of 2.
8*1.06**n is a close enough approximation for testing. I can probably share the one-liner after I'm done eating if someone wants to see it, even though it's not done.
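That formula as a self-contained Python snippet, deriving both constants instead of hard-coding them (`midi_to_freq` and the constant names are mine):

```python
SEMITONE = 2 ** (1 / 12)            # ~1.059463094, frequency ratio between adjacent notes
MIDI_NOTE_0 = 440.0 / SEMITONE**69  # ~8.175798916 Hz, since A4 = 440 Hz is MIDI note 69

def midi_to_freq(n):
    """Convert a MIDI note number to a frequency in Hz.
    Fractional n works too, which is what you want for pitch bends."""
    return MIDI_NOTE_0 * SEMITONE ** n
```

midi_to_freq(69) comes out at 440 Hz (A4) and midi_to_freq(60) at roughly 261.63 Hz (middle C), which is a quick sanity check that the constants are right.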

speaking of synthesizers, I have no idea what synthesizers are. but I did make this collaborative Sonic Pi website makemusiconline.net:8080

Attached: just_fuck_my_shit_up.jpg (600x600, 32K)

Please do user.

Alright, I'll post the one-liner after I'm done eating. I'm going to add in the rest of the song to it, as I finished transcribing it since I wrote that.
Don't expect greatness from it, just that it makes a tune. It'll be like 20-30 minutes.

see
>if you have to ask this, you can't do it
This, but if you want to work up to it here are a few pointers.

Saying you can use any language to write a synthesizer is like saying you can use any language to write an operating system. If you want the synth to render audio in real time then you need guarantees about the worst-case performance for each unit of audio rendered. To put it simply, you need to be able to guarantee that rendering 1 second of sound will never take longer than 1 second. If it does, you'll produce audio artifacts: audible clicking, popping and distortion. This means you can't ever perform blocking operations on the audio thread: no memory allocation or deallocation, no opening, closing or seeking files, no mutex operations, and no garbage collection. Because it can be difficult to know what certain library functions are doing under the hood, this pretty much restricts you to basic arithmetic operators ( + - * / % ) and a few math functions.

So the problem with using anything other than C, C++ or Rust (which I wouldn't recommend just yet due to its immaturity, although it was designed to operate at a low level, which is great for audio) is that you probably won't be able to use any of the higher-level language features anyway. I personally use C++, but when I'm in the audio thread I'm basically restricted to C + classes + templates.

Csound or pd.

>implyin that operating systems are as complex as a synthesizer

As long as you can tell your sound card to play a tone in a certain interval then it shouldn't be a problem.

cont.

While the language used makes a big difference to performance, it's still your responsibility to use the language well: badly written C can be thousands of times slower than well-written Python. Your rendering code needs to be fast. Like, fucking fast. Your code needs to be lean and un-bloated like it's 1990.

Let's assume you're rendering a naive saw wave:
// for every frame of audio
x[channel] += pitch / sampleRate;
if (x[channel] > 1.f)
    x[channel] -= 1.f;
*outBuffer++ += x[channel];

This would probably run at a sample rate of at least 44100 Hz with at least 2 channels, and let's say my synthesiser supports up to 16 simultaneous voices with up to 10 simultaneous waves per voice. That means the above could need to run 44100 × 2 × 16 × 10 = 14,112,000 times per second. Probably more, on account of musicians often turning the sample rate up way higher than perceptible, or using more than 2 channels. A simple optimization is turning the slow floating-point division into a multiplication:
// when the sample rate is assigned (probably only once at start):
oneOverSampleRate = 1.f / sampleRate;

...

// for every frame of audio
x[channel] += pitch * oneOverSampleRate;
if (x[channel] > 1.f)
    x[channel] -= 1.f;
*outBuffer++ += x[channel];

This actually provides a measurable performance boost. You can make it faster by replacing the if statement. As a general rule for rendering each sample:
- Try to eliminate branches. They're surprisingly costly.
- Simplify anything in your equations that doesn't need to be calculated every frame and take it out of your inner loops.
- A few virtual function calls are okay. A virtual function call every frame will slow your program down a lot.
- Disable CPU subnormals on the rendering thread.
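To illustrate the branch-elimination point: the wraparound `if` in the saw code above can be replaced with arithmetic. Here's that idea sketched in vectorized Python rather than real-time C++ (names are mine, and this allocates, so it's an offline sketch, not audio-thread code):

```python
import numpy as np

def render_saw(pitch, num_frames, sample_rate=44100.0, phase=0.0):
    """Branch-free naive saw in [0, 1): accumulate phase increments,
    then wrap with modulo instead of an `if` per sample."""
    increments = np.full(num_frames, pitch / sample_rate)
    phases = (phase + np.cumsum(increments)) % 1.0  # wrap without branching
    return phases
```

In scalar C++ the same trick is `x -= std::floor(x);` or similar; whether it actually beats the branch depends on how predictable the branch is, so measure.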

Well, looks like it'll take a little longer than I planned. I'm not quite done eating, and it turns out I haven't automated converting the timing of notes in a midi file to the format used by the line of code. I should be able to put it together, but I'm probably just going to post an old version of it with the song incomplete.
I have a couple versions of it, but I'm just going to use the 8-bit one locked to sawtooth wave. I'll be a few minutes.
Should have said, it produces a raw audio file that needs to be played back with something like audacity, aplay, etc. Writing to a .wav would have needed more lines to import and set it up, or me digging into the docs to embed a wav writer into the line (which I didn't feel like doing).
I'll post an audio file with it so it's easy to listen to on windows.
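For anyone who'd rather skip the raw-file step: writing a real .wav only needs the standard-library `wave` module. A minimal sketch (my own helper, not the one-liner's code):

```python
import wave
import struct
import math

def write_wav(path, samples, sample_rate=44100):
    """Write mono 16-bit PCM. `samples` are floats in [-1, 1]."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 2 bytes = 16-bit
        w.setframerate(sample_rate)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples
        )
        w.writeframes(frames)

# one second of A4, playable anywhere without guessing the format
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
write_wav("a440.wav", tone)
```

The resulting file opens directly in Audacity or any player; no "import raw data" dialog needed.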

cont.

You should also be well versed in:
- The Fourier transform
- How filters work
- Digital aliasing; specifically how to avoid it in sound rendering (look up the Nyquist sampling theorem)
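The aliasing point is easy to demonstrate numerically: a sine at fs − f0 sampled at fs is indistinguishable from a sign-flipped sine at f0, because sin(2πn − x) = −sin(x) for integer n. This is why naive saw and square waves sound harsh: their harmonics above Nyquist fold back down into the audible range. A small check, assuming numpy:

```python
import numpy as np

fs = 44100  # sample rate
f0 = 1000   # a tone safely below Nyquist (fs / 2)
n = np.arange(512)

below = np.sin(2 * np.pi * f0 * n / fs)
above = np.sin(2 * np.pi * (fs - f0) * n / fs)  # way above Nyquist

# the two sampled signals are aliases of each other (up to sign)
print(np.allclose(above, -below, atol=1e-6))  # True
```

Band-limited oscillator techniques (BLEP, additive with a harmonic cutoff, wavetables) exist precisely to keep those folded harmonics out.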

This is a great resource full of handy code snippets:
musicdsp.com/
Most of them aren't very well optimized, or are inaccurate/low quality, but there are gems in there, and as a newbie it can be handy to look at some low-level code.

JUCE has already been mentioned:
docs.juce.com/master/tutorial_create_projucer_basic_plugin.html
It's great for beginners because it holds your hand really hard, although you still need to understand the lower-level side of C++ if you want to take advantage of it. And while it does come with some basic audio processing utility classes (like filters, upsampling/downsampling and oversampling) at the end of the day they can only really help you to implement your own synthesis.

tl;dr: you're in for a shitload of reading. Use search engines, use books. Good luck user.

>>implyin that operating systems are as complex as a synthesizer
Obviously synthesizers are child's play in comparison, but performance is a huge deal. Go to any commercialized software synthesizer's web page and they will almost always boast about performance.
>As long as you can tell your sound card to play a tone in a certain interval then it shouldn't be a problem.
Modern synthesizers tend to do a lot more than that.

C with SSE intrinsics.

If you are insane, and you want more power, OpenCL C.

Assembly

Alright, here's an old version of the one-liner. Before anyone bitches about it being needlessly convoluted and hard to read: that's part of the point.
pastebin.com/raw/5N5LJEgm

Requires python 2. Open the file it produces as signed 8-bit, mono, at 44100 Hz.
Listen to it here if you don't care to run it yourself: my.mixtape.moe/hnjbpo.ogg
There is extensive artifacting due to the low bit depth. Here's a 16-bit version with triangle wave: my.mixtape.moe/emjamb.ogg
The song I'm currently making into a one-liner can be heard here: my.mixtape.moe/bwvphk.ogg
I have working generation code, but not down to one line yet. Ultimately I want to make a one-liner that generates youtube.com/watch?v=nHCCoNyNFtY
I have made a complete transcription of it (the first, as far as I can tell), and will release it soon™.

If this makes it through: the spam filter hates the line of code, so I'm trying a link to it rather than posting it directly.

About a year ago I made an additive synthesizer using MS Excel and VBA. It took a really long time to render, but you could feed it notes and build oscillators out of harmonics, and it would spit out a .wav file with what you wanted. I don't think I have the file anymore, but what I can tell you from that experience is that this is extremely difficult to do. I thought I was well versed in audio, programming, and mathematics, but that project really humbled me. As the other user pointed out, you're in for a lot of reading.
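A toy Python version of that additive idea, to show the shape of it (my own naming; nothing here is from the Excel/VBA original): build a tone by summing sine harmonics, and skip any harmonic that lands above Nyquist so it can't alias.

```python
import math

def additive_osc(freq, harmonics, duration, sample_rate=44100):
    """Additive synthesis: `harmonics` maps harmonic number -> amplitude.
    Harmonics at or above Nyquist are skipped to avoid aliasing."""
    num = int(duration * sample_rate)
    out = [0.0] * num
    for h, amp in harmonics.items():
        if h * freq >= sample_rate / 2:  # Nyquist guard
            continue
        for n in range(num):
            out[n] += amp * math.sin(2 * math.pi * h * freq * n / sample_rate)
    return out

# First three odd harmonics at 1/h amplitude: a rough square-ish tone.
tone = additive_osc(220.0, {1: 1.0, 3: 1 / 3, 5: 1 / 5}, 0.01)
```

The real work in a usable additive synth is everything around this loop: envelopes per partial, efficient oscillator banks, and normalization so the sum doesn't clip.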

C with csound

Seconding this; I used MATLAB/Simulink in a uni course for DSP stuff and it's pretty damn intuitive.