Neural networks... more like calculus clusterfuck

Attached: 08f162d1-6943-490b-8e0b-ee6fd5ab256e-62ac63e2d63f.small.jpg (378x225, 16K)

You should be able to solve this.

Attached: OwO.jpg (3543x1418, 756K)

>Artificial Intelligence
>Machine Learning

Attached: Least+Squares+(LS)+Approximation.jpg (960x720, 67K)

Sounds hot

Attached: monodromy.jpg (1063x1569, 243K)

>m_oe^2
>moe
~~

Attached: 1.jpg (420x392, 36K)

>calculus
I thought machine learning was more stuff with sequences and sets?

Literally baby-tier
Are you retarded or what

Xavier or TX2? Getting the edu discount here
ta

Xavier if you can afford it

I studied theoretical physics, then got a PhD in integral equations/numerical analysis. Now I do a postdoc in the deep learning field.

This field is pathetic. The level of research is low. People publish rubbish that hasn't been thought through. They don't know the literature either.

It's just a big clusterfuck of arrogance and ignorance.

Not really.
Calculus is used for optimization, typically through gradient descent, and the rest is linear algebra.

Attached: backprop.png (1520x497, 55K)
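
For the optimization point, here's a minimal gradient descent sketch in NumPy; the data, learning rate, and iteration count are made-up toy values, not anything from the attached image:

import numpy as np

# Minimal gradient descent on a least-squares loss (illustrative sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 samples, 3 features
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                          # step against the gradient

print(w)   # should land near true_w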

The computation of a derivative is just high school calculus. It's not that complicated.
And everybody uses copy-pasted models in PyTorch or TensorFlow anyway.

90% of the users don't understand it anyway. But that's of no consequence, since it's just about copy-pasting and writing some I/O and logging shit.

Doesn't change the fact that calculus is used. And you still need to be aware of whether your forward pass is differentiable when using autograd, or define the backward pass yourself otherwise.
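
A minimal sketch of that second case, writing the backward pass yourself with torch.autograd.Function; MyReLU is a made-up name, and the zero-at-zero subgradient is one common convention, not the only choice:

import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # ReLU isn't differentiable at 0; pick the subgradient 0 there.
        return grad_output * (x > 0).float()

x = torch.randn(5, requires_grad=True)
MyReLU.apply(x).sum().backward()
print(x.grad)   # 0 where x <= 0, 1 where x > 0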

>90% of the users don't understand it anyway.
People copy-pasting "My First MLP" are not worth considering; it's basically the Hello World of Deep Learning.

Attached: IMG_20190405_225628.jpg (3136x4224, 3.74M)

Well, if that's all you've got to remember...

I'm working on image processing. Everyone is basically using U-Nets or similar, while a U-Net is quite shit in many cases...

neural networks have nothing to do with calculus

>too stupid for calculus

Attached: 1549218954397.png (586x578, 37K)

NNs themselves don't, but gradient descent does.
You can optimize NNs in different ways, like genetic algorithms.
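
Toy sketch of the genetic-algorithm route, assuming NumPy; the one-neuron "network", fitness function, population size, and mutation scale are all made up for illustration:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def fitness(w):
    # One linear neuron with a step activation; accuracy is the fitness.
    pred = (X @ w[:2] + w[2] > 0).astype(float)
    return (pred == y).mean()

pop = rng.normal(size=(20, 3))                  # population of weight vectors
for _ in range(100):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]     # keep the fittest half
    children = parents + 0.1 * rng.normal(size=parents.shape)   # mutate
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print(fitness(best))    # no gradients anywhere in this loop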

I thought generally you do not optimize neural networks (unless they optimize themselves); the idea is that it's better to run an unoptimized NN and just add hardware instead to increase performance.

You can optimize with things such as dropout, early stopping, etc.,
things that will make the network less prone to memorization and overfitting.
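
Rough PyTorch sketch of both of those; the model, the random data, and the patience value are placeholders, not anything specific from this thread:

import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Dropout(p=0.5),        # randomly zeroes activations during training
                      nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
X_train, y_train = torch.randn(100, 10), torch.randn(100, 1)
X_val, y_val = torch.randn(30, 10), torch.randn(30, 1)

best_val, patience, bad_epochs = float("inf"), 5, 0
for epoch in range(100):
    model.train()
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

    model.eval()                                # dropout is disabled in eval mode
    with torch.no_grad():
        val = loss_fn(model(X_val), y_val).item()
    if val < best_val:
        best_val, bad_epochs = val, 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:              # early stopping
            break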

Can you even parse this?

Read

Attached: IMG_20190406_002240.jpg (3136x4224, 3.74M)

I don't mean optimizing the structure of the net or the hyperparameters, I mean optimizing the weights. Or in other words, training the network.

In an NN you get inputs as a vector of numbers; those get multiplied by weights, added together, etc., to produce an output. You optimize those weights in such a way that the desired output is produced, for example 0 for cat and 1 for dog.
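
That paragraph in a few lines of NumPy; the numbers are arbitrary, just to show the multiply-and-add mechanics:

import numpy as np

x = np.array([0.2, 0.7, 0.1])        # input vector
W = np.array([[0.5, -0.3, 0.8]])     # one output neuron's weights
z = W @ x                            # multiply by weights and add up
out = 1 / (1 + np.exp(-z))           # squash toward 0 (cat) or 1 (dog)
print(out)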

You should read my post first. For example, you can't differentiate ReLU at 0, so the backward pass is fudged to accommodate this. You need to be aware of this when writing your forward pass, to know whether the backward pass can actually be computed automatically or whether you will need to write it yourself.
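
Quick check of what that fudge does in practice; as far as I know, PyTorch's convention is to take the ReLU gradient at exactly 0 to be 0:

import torch

x = torch.tensor([-1.0, 0.0, 1.0], requires_grad=True)
torch.relu(x).sum().backward()
print(x.grad)    # tensor([0., 0., 1.]), note the 0 at the kink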

Optimizing is used in the analytical sense here: it is the process of finding the best model in a high-dimensional space based on your training and validation data sets. Without optimization, your model is either trivial or useless.

*I guess regularization methods don't necessarily qualify as optimization.
With NNs it simply comes down to the amount of training samples:
more samples, less overfitting.

No, I wasn't pointing out anything wrong. I was reading this stuff at the same time.

Regularisation isn't optimisation.

What do you want? It's perfectly readable. Well, mostly readable. As for parsing it: after a year in a theoretical physics programme, you can.