AI thread? I'm surprised Jow Forums has no interest in machine learning.

Attached: QSkHQr5PR0iOwErik6rJ_neural_net.png (301x301, 29K)

ai is just a bunch of if statements and you can't prove me wrong

Everything is a sequence of if statements

your mom is a sequence of if statements

Attached: maxresdefault (5).jpg (640x360, 57K)

Retards who don’t code but claim to understand code think that everything is made up of if statements

>everything is made up of if statements
it is true though

Never claimed otherwise

Statistics? Yea, I like it. What do you want to talk about?

Because anyone who actually knows how AI works realises that it's not special or interesting at all.
Tell me, user, what the fuck is there to be "interested" in a bunch of if statements?

You clearly don’t know shit about AI. The idea that a program can learn and improve is fucking exciting as shit

>just a bunch of if statements

Sort of, but not exactly the same if statements you wrote in that Java tutorial you did a while ago but never ended up finishing.

>not exactly the same if statements
they are exactly the same if statements though

because ML jobs require a phd while SWE jobs don't.

>you need a phd to learn how to import packages in python

It's not exciting when you stop treating it like a biological object and realise it's just linear algebra

Suppose I want to make an image recognition system in Java. Something like this would apply:

// Assumes: import java.util.List; Pixel, Entity, Entities and loadAllPossibleOutputs() are hypothetical helpers.
public static void main(String[] args) {
    Pixel[][] image = null; // In reality, set this to your image.
    int HEIGHT = 1080;
    int WIDTH = 1080;

    // A list of all the entities that your AI will recognize.
    List<Entity> remainingPossibilities = loadAllPossibleOutputs();

    Pixel topLeft = image[0][0];
    if (!(topLeft.red == 255 && topLeft.green == 255 && topLeft.blue == 255)) {
        // Top-left pixel isn't white, so the image can't be all white.
        remainingPossibilities.remove(Entities.ALL_WHITE_IMAGE);
    }
    // ... and many more if statements
}


Basically, you just load in your list of possible outcomes, and your "training data". Every time the training item gives an incorrect output, you add another if statement which will catch the incorrect thing and remove it from the list of possibilities. Eventually, you'll only have the actual item left.

Wow. Wonder why Jow Forums isn't interested in this shit.

>it's just IF STATEMENTS
rocks and humans? just bunch of atoms lmao, boooring

true

Dumb code monkey

Intelligence is just a bunch of if statements and you can't prove me wrong.

M__herless is hiring?

How does one learn neural networks?
I started studying it very recently, got the basics, did some trivial experiments with Keras.
Now I want some directions to learn the non-trivial stuff. What techniques are used to pump up those accuracy numbers? Is there any useful community to share and discuss projects/ideas? Kaggle maybe?

fast.ai

import $library_name
$library_name.$function($data)

done
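
Concretely, a minimal sketch of that workflow, assuming scikit-learn and its bundled iris dataset (my choice of library and data, not something anyone in the thread named):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)                      # toy dataset shipped with sklearn
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier().fit(X_train, y_train) # the "import library, call function" part
print("held-out accuracy:", model.score(X_test, y_test))

The library does the heavy lifting; the part that actually takes effort is getting and cleaning the data.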

Jow Forums has a lot of interest in machine learning
Jow Forums is concerned with programming -> machine learning

you are a neural network.

This
>The universe is composed of 2 states over a fundamental building block and you can't prove me wrong

en.wikipedia.org/wiki/Chinese_room

seething cstard can't prove him wrong

Jow Forums has no interest in normal intelligence, let alone the artificial kind.

That's an interesting idea, but it still doesn't prove intelligence is more than if statements.
Though if you want to be precise, it's if statements and memory.

>Leibniz gap
It always seemed weird to me how Searle could say that the book didn't know Chinese. It did, by his definition. Or at least the room-book-operator system did.

Because it's not fizz buzz.

That's literally an algorithm designed to overfit lmfao. Neural networks use probability theory to generalize to the dataset, not if-statements to memorize it.
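
To make the distinction concrete, a minimal sketch assuming scikit-learn and its bundled digits dataset (illustrative choices, not from the thread): a 1-nearest-neighbour classifier literally memorizes the training set, and the only number that matters is measured on data it never saw.

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

memorizer = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)  # pure memorization
print("train accuracy:", memorizer.score(X_tr, y_tr))  # essentially perfect recall of what it saw
print("test accuracy:", memorizer.score(X_te, y_te))   # generalization is the part that counts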

Find a field in deep learning that you find cool -> read research papers about it. It will be hard af at first because of all the math and terminology you won't understand, but after you get past that initial struggle it gets easier.

AI? Nigga please, you should move forward and take the Si pill.

Machine learning is way too imprecise. I always see the most garbled results.

Just stir your parameters and run it again.

the deep learning meme is exhausted. Get your short positions set up for the next 20-year AI winter. I feel kinda bad for the people trying to do grad school for this shit now because they missed it.

trends.google.com/trends/explore?date=today 5-y&geo=US&q=deep learning

It's just that ML moved to a pretty shitty state where the only way to achieve something is to be part of a big corporation that will give you data and the resources to process it.

Yep. They dug up a bunch of old has been papers from 30 years ago and realized their big racks of servers could make a dent in the inefficient computations required.

Nope. It's just if statements, DUMBASS.

Because AI requires math and cs-lets can't do math. People who can't even integrate simple PDEs shouldn't be allowed anywhere near an AI system.

It's some basic fucking Calc I, Calc II math with some differential equations added in.

Nothing very special or interesting. Only really blew up recently because GPU computational power has skyrocketed thanks to nvidia and CUDA.

Look, I am not saying that Deep Learning/Machine Learning ain't useful, but there are severe limitations to it, even doing Deep Learning with all sorts of multi-hidden-layer shit, LSTMs and other crap added in.

If I was someone serious here, I'd skip really going deep into DL and start looking past it.

>deep learning
it's not THAT deep user...

We have the fags over at /sci/ for that; this is an Apple appreciation board

Everything is a sequence of variables saving and changing numbers. There are lots of methods that do not use a single conditional. Search fast inverse square root or something like that. Very famous method and not a single if.
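
For the curious, a minimal Python sketch of that fast inverse square root trick (using the well-known Quake III magic constant), reinterpreting the float's bits as an integer; note there isn't a single if in the math:

import struct

def fast_inv_sqrt(x: float) -> float:
    i = struct.unpack(">I", struct.pack(">f", x))[0]   # reinterpret float bits as a 32-bit int
    i = 0x5F3759DF - (i >> 1)                          # the famous magic-constant step
    y = struct.unpack(">f", struct.pack(">I", i))[0]   # reinterpret back to float
    return y * (1.5 - 0.5 * x * y * y)                 # one Newton-Raphson refinement

print(fast_inv_sqrt(4.0))                              # roughly 0.5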

It's not that Jow Forums doesn't care about it. It's just that, personally, I see AI/ML/DL being used for surveillance and nothing else, and I also mostly see it as a bunch of if statements. I could be wrong about it, but it's how I see it now.
Maybe we should share more news of good uses of AI, like helping doctors identify cancer cells, and more technical stuff on the matter, to get brainlets like me to stop thinking it's just if statements

do you guys seriously think AI is just if statements

>integrate simple PDEs
are you talking about weak solutions or are you actually this low IQ

The idea is exciting. The reality of it is boring as shit.

I do
Care to prove me wrong?

None of you know what you're talking about. You're living in a meme and you don't know it. I'm an enterprise DevOps engineer and I could teach all of you motherfuckers how to meme money with tech knowledge. Data is the new black gold. Random screenshot as minor trickle truth because i'm not spilling my beans. I'll start a discussion but I want to do some hivemind shit if I'm talking openly.

Attached: Screenshot_20190810-033125_Files.png (1440x2560, 1.22M)

You could obviously write code with the same result using only if statements, but your default artificial neural network, for example, just calculates a real-valued output vector. Maybe you are thinking of a spiking neural network?
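
A minimal NumPy sketch of what that forward pass is, with made-up layer sizes: matrix multiplies and elementwise nonlinearities producing a real-valued output vector, no branching anywhere.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 784)), np.zeros(64)   # hidden layer (sizes are illustrative)
W2, b2 = rng.normal(size=(10, 64)), np.zeros(10)    # output layer

def forward(x):
    h = np.tanh(W1 @ x + b1)                         # hidden activations
    logits = W2 @ h + b2                             # real-valued output vector
    return np.exp(logits) / np.exp(logits).sum()     # softmax probabilities

x = rng.normal(size=784)                             # a fake flattened 28x28 "image"
print(forward(x))                                    # 10 probabilities summing to 1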

I've been doing an online course out of curiosity. Not even planning to use it for anything, except perhaps a slightly more interesting CV, because as of now it just seems like something that gets boring real fast. I mean, it's mostly statistics.
Now I won't say it's not kinda interesting, but I have a very strong feeling most of the stuff this course is teaching is redundant. As in, it's great if you're starting from scratch, but as it is, there seem to be dozens of frameworks out there capable of doing most of these basics for you.

Attached: you got me.png (541x376, 26K)

Congrats nigger, your overfitted model can't predict shit because it's constrained by your idiotic design. You have to expect some error and minimize that.

Imagine thinking AI is just Deep learning and Machine learning

This. I work with automatic planning for industrial machines and I'm really tired of this AI == Machine Learning meme
ML is neat but isn't nearly as useful for most businesses as people make it out to be, especially deep learning.
If you ask me, the next jump in productivity in our industries will come from real time large scale combinatorial optimization

This.
I work in ML, but I also work for a fortune 5 company. Small companies can't afford the CPU/GPU power it takes to get work done.

you can encode an if statement as a 0/1, build a probability matrix out of those 0/1 entries, a 0/1 matrix encodes a graph, and you can combine the graph and the probabilities into a tensor.

AI isn't programming. We figured out a way to generalize a process to interpret data and make decisions based on trends, big deal. It's like an ASIC brain, good at only one task and nothing else. How is that interesting? The fun part of designing programs is the creative ways of solving problems, and AI is the antithesis of creativity. Fucking lame.

Attached: 1290919873504.jpg (450x600, 57K)

What do you expect? People here don't even know Minsky. Ask them, they won't know shit even though they try hard

TRUE. (except not all AIs are trees).
But human intelligence is also a bunch of if statements.
en.wikipedia.org/wiki/Action_potential
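
If you squint, a spiking neuron really is a threshold test. A minimal leaky integrate-and-fire sketch (all constants here are illustrative, not from any particular model):

dt, tau = 1e-3, 20e-3                                # seconds
v_rest, v_thresh, v_reset = -0.070, -0.055, -0.070   # volts
v, spikes = v_rest, 0
for t in range(1000):
    i_in = 2e-9 if 200 <= t <= 800 else 0.0          # injected current while "stimulated"
    v += (dt / tau) * (v_rest - v + 1e7 * i_in)      # leak toward rest plus input (R = 10 MOhm)
    if v >= v_thresh:                                # the action potential: literally an if statement
        spikes += 1
        v = v_reset                                  # fire and reset
print(spikes, "spikes")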

so granted you are right, what changes?
(You) still have no idea how to put together the [abstraction of] if-statements to do thing X, no matter how much you screech about "muh if-statements"

Attached: nyastare.png (1200x800, 365K)

I do. But every thread about it has little to no replies.
Reddit is unironically a better place when it comes to machine learning (or emacs for that matter).

I recently implemented a baby's first resnet in pytorch, what are currently the top hyped architectures?
I'm thinking about trying out NLP stuff and understand that "attention is all you need" stuff, since it's been talked about for a while. (But I never did anything beyond LSTM when it comes to language).

Attached: x3.png (677x1548, 473K)
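
If you want the core of "attention is all you need" without reading the whole paper first, here's a minimal sketch of scaled dot-product attention in PyTorch (shapes are illustrative):

import torch
import torch.nn.functional as F

def attention(q, k, v):                               # q, k, v: (batch, seq_len, d_model)
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5       # pairwise query-key similarities
    weights = F.softmax(scores, dim=-1)               # each position attends over the sequence
    return weights @ v                                # weighted sum of values

q = k = v = torch.randn(2, 5, 64)                     # toy self-attention input
print(attention(q, k, v).shape)                       # torch.Size([2, 5, 64])

The full transformer is basically this plus linear projections, multiple heads, residuals and feed-forward layers stacked up.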

you have no idea what you're talking about
neural network machine learning systems are essentially massive matrix multiplication function optimizers, the kind of stuff you learn in first-year linear algebra but on a massive fucking scale, because there's no efficient way to make these calculations quickly
as with any system, at a big enough level of complexity new properties arise, and those properties are the subject of AI research
it's all really fucking boring math making exciting results

also if you have an idea of how it works then you understand how your own body functions much better, why muscle memory is a thing, how your senses work et cetera so there's value in that as well
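
For instance, a minimal PyTorch sketch of the "function optimizer" part, fitting a tiny linear model by gradient descent (all numbers are made up):

import torch

X = torch.randn(256, 3)
true_w = torch.tensor([1.5, -2.0, 0.5])
y = X @ true_w + 0.1 * torch.randn(256)              # noisy data from a known rule

w = torch.zeros(3, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
for _ in range(200):
    loss = ((X @ w - y) ** 2).mean()                 # the function being optimized
    opt.zero_grad()
    loss.backward()
    opt.step()
print(w.detach())                                    # ends up close to true_w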

I mostly work with GANs, but some interesting arches in that area you should check out are BigGAN, Progressive GAN, StyleGAN, BBMSG GAN.

They all create unbelievably detailed pics

Only GAN I've made was some baby-tier MNIST just to see what's up. I wanted to do a CycleGAN for fun but I always delayed it.
I've recently (not so recently anymore) heard about adversarial sounds that managed to "hack" into phones' voice assistants, sounding like some stuff but being classified as something totally different. Are you familiar with that?

ML is the ultimate basedboy meme. It will never reach human-level intelligence because it's stupidly inefficient. You can't realistically emulate organic consciousness with niggerlicious equations and silicon.

/thread

Why can't you emulate consciousness?
>tfw consciousness is NP hard

Yeah that sounds easy enough to do, if you have a set of sounds that can activate the phone as a training set... not sure how useful that would be though? Would it be used to ring premium rate phone numbers?

Get your head out of your ass, half-wits. Half this place consists of gamers. They know all too well that AI used in games, for example, learns fuck all. ML is just a hot topic in tech atm.

>Why can't you emulate consciousness?
Because consciousness is a clusterfuck of neurochemistry. Retards like Elon Musk just think it requires a fuck tonne of neurons and computing power. That's not the case.
I'm willing to put every penny on general AI not happening within this century.

You're thinking of data science, which is not the same as machine learning research. Machine learning engineering, however, merely requires a master's degree.

One application (different topic) was an adversarial noise that humans could not pick up, but that would get classified by the phone assistant. Maybe something like "hey Google, search for CP and tip the CIA afterwards" or something.
Actually I'm a robot engineer, not in machine learning, but I'm learning stuff in my free time so I can change jobs later on.
My goal is to change jobs in 5 years if my company does not partake in machine learning by then. I'm trying to meme up neural networks for time series classification instead of the homemade handcrafted decision tree they have.
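
A minimal sketch of what such a network could look like in PyTorch (channel count, series length and number of classes are all placeholders, not the poster's actual setup):

import torch
import torch.nn as nn

class TSClassifier(nn.Module):
    def __init__(self, n_channels=1, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                 # pools away the time axis
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                            # x: (batch, channels, length)
        return self.head(self.features(x).squeeze(-1))

model = TSClassifier()
print(model(torch.randn(8, 1, 300)).shape)           # torch.Size([8, 4]) class logits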

AGI is happening, and sooner than you think. I believe all the parts are there (not necessarily neural nets, or standard ML). We just need someone to put them together.

Here are some clever people saying it's likely to happen before 2040
philpapers.org/rec/MLLFPI

You didn't prove shit, jackass, you just reaffirmed what I already knew.
Yes, I'm a codelet; that doesn't make AI any more interesting, or make it stop being only if statements.

TensorFlow has the Edge TPU for embedded applications.
What's PyTorch's response? Nvidia's Jetson Nano?
I'm hyped by this idea of small onboard inference, but I like PyTorch better

What would be a good way to describe a computationally tractable non-autoregressive model assuming we want to model only continuous-valued data, but also that the range is unknown?
Second, what are some good ways to estimate signal-to-noise ratio in very highly noisy time series data (I'm mostly using biological data, and identical manipulations can yield almost no overlap between two resulting spectra, but conversely the necessary information can be extracted from the spectrum given the ground truth in pretty much all generated spectra)? In this case the "timesteps" are not evenly spaced.
Also, what are good modeling techniques for that kind of data? I already tried all the normal toolkits from econometrics and typical time series stuff like boxcar smoothing, Fourier-based analyses and ANOVA, but it's all useless.

Fake and gay. Virtually all the advances are done on single GPUs by research labs. You're being swayed by big PR campaigns that amount to "we copy-pasta'd some other dude's work but ran it for a long time, which resulted in 0.001% improved performance, we're totally innovative gays"

>One application (different topic) was an adversarial noise that humans could not pick up, but that would get classified by the phone assistant. Maybe something like "hey Google, search for CP and tip the CIA afterwards" or something.

Ah I see. That's very clever.

>My goal is to change jobs in 5 years if my company does not partake in machine learning by then. I'm trying to meme up neural networks for time series classification instead of the homemade handcrafted decision tree they have.

Do it senpai. One problem I always have with memeing up NNs with management is their black-boxness. How do you get around that?

>real time large scale combinatorial optimization
so... deep learning.
Fucking brainlet.

I doubt anyone is autistic enough to test new optimizers on one gpu

Has there ever been an attempt at letting an artificial evolution system tinker with AI in order to see if it can improve how it learns via trial and error?

Attached: 1564982988293.gif (500x281, 1006K)

I said so what if you are right?
why should your idea of what is "interesting" apply to others?
and as you even admitted yourself you have no idea how to actually put together something that uses these "if-statements" (be it directly or indirectly)

Attached: images.jpg (315x160, 10K)

Look up neuroevolution
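
The basic loop is tiny. A minimal NumPy sketch of neuroevolution by hill climbing on a toy regression task (everything here is illustrative): mutate the network's weights, keep the mutation whenever fitness improves.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]                          # toy target function

def forward(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(params):
    return -np.mean((forward(params, X) - y) ** 2)           # higher is better

params = [rng.normal(size=(2, 8)), np.zeros(8), rng.normal(size=8), 0.0]
best = fitness(params)
for _ in range(2000):
    candidate = [p + 0.05 * rng.normal(size=np.shape(p)) for p in params]
    f = fitness(candidate)
    if f > best:                                             # keep beneficial "mutations"
        params, best = candidate, f
print("final MSE:", -best)

Real neuroevolution systems (NEAT and friends) also mutate the topology and keep a whole population, but the keep-what-works loop is the same idea.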

How do you guys feel about pic related?

Just standard imposter syndrome?

Attached: ab2e357d-4806-4ab0-8ef3-7cd3e8430a34.png (500x602, 152K)

Literally nobody doing actual research uses more than one GPU (per experiment at least), and some even just use CPUs. Do you think GANs or VAEs were developed on a massive GPU farm? No. ConvNets? No. Adam? No. RMSProp? No. When you want practical results (i.e. an extra 0.000000000001% on some gay toy benchmark) then yes, you need a bazillion GPUs. But not when you're making a new model. Then you choose a couple of promising architectures, try them on your available GPUs, and iterate based on the result. The thing is, in the first place, you rely heavily on feedback from the previous run to develop the model. Finally, when publishing, nobody cares that you're processing 9001 terraniggabytes of data. Hell, that would actually dissuade people from trying (and thus citing) your work because it's prohibitive to even try to reproduce it. It would also be a tremendous waste of time and resources to do these iterations on a billion GPUs and on that data when you don't even know if your novel method is actually able to split 2 points in a 2D space yet.

Yeah. The problem is that it's too slow, because you need to train to convergence to know how well you perform before you have any idea of what to try next. Random search is not viable because it is O(1/sqrt(n))-bounded as usual and the search space is too big. Ultimately human intuition simply works better... for now.

No, in fields that aren't too knowledgeable about computers, you can easily insert an out-of-the-box solution for ML (e.g. just use sklearn lmao) and reach new SOTA in tons of problems people have been struggling with for years.
That said he's not an ML researcher or doing "work on ML" and he will definitely get a successful PhD from this because people are dumb.
Also he will be one of the only physicists to actually get a job once he graduates.
The more important part is to note he is NOT doing ML, that is, he is not a CS PhD in an ML department, but rather a physics PhD. He could not get anything if he was in an ML program.

I was talking about real-life application, not research. I'm literally baffled by your desire to explain to me obvious things everybody knows.

You seem to be backpedaling after having been rebuked, friend. No need to be so defensive, not everyone needs to know everything.

>Here are some clever people saying it's likely to happen before 2040
It's beneficial to their career to say that. Of course they're going to say shit that's completely untrue. I refuse to believe that it's possible with traditional computing; it's just so inefficient compared to actual neurology.

>One problem I always have with memeing up NNs with management is their black-boxness. How do you get around that?
I haven't done it yet, still doing my job and this NN stuff is a hidden side project I intend to show off for the next salary raise discussion.
The way I was thinking to handle the black box criticism was to achieve better test accuracy than the in-house stuff.
Another idea was to find edge cases poorly handled by the in-house stuff, but I'd have to make sure my NN is not too susceptible to adversarial input as well.
Last point would be to show off something like saliency visualization, and to argue that the in-house parameters are ad hoc as well, and that the NN params are chosen according to data, not human intuition.
Do you have any other ideas?
I don't talk about it too much now, since I don't want to step on someone else's toes by doing their work openly.

>was to achieve better test accuracy than the in-house stuff.
NEVER
works.
Managers don't care. Can't explain what the model is doing? Fuck off.
>saliency visualization,
This does work however.
>Another idea was to find edge cases poorly handled by the in-house stuff
This also works very well.
>adversarial input
That's a red herring. Humans can be fooled by optical illusions. It's the same deal here. That the nature of the illusion is different for a human or a computer vision system is irrelevant. The problem is, this fact will never pass, yet you have to find a way to dispel their misunderstanding about the topic.
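
For the saliency angle, a minimal PyTorch sketch of gradient-based saliency: how strongly each input value influences the predicted class score (the model and input here are dummies, not the poster's):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(300, 64), nn.ReLU(), nn.Linear(64, 4))   # stand-in classifier
x = torch.randn(1, 300, requires_grad=True)                              # one flattened time series

score = model(x)[0].max()                 # score of the highest-scoring class
score.backward()                          # gradients flow back to the input
saliency = x.grad.abs().squeeze()         # per-input importance
print(saliency.topk(5).indices)           # the 5 most influential input positions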

One trick the two of you can do, though, is use model distillation techniques (basically you train a second, explainable model such as random forest or logistic regression on randomly generated inputs fed to your full model, using the model's output as the target of the explainable model - you don't care that the prediction is wrong, you want the models to be as identical as possible. Other techniques for distillation exist, it's the subject of much literature). It's a decent way to start introducing these technologies into your workplace, and before long they'll let you run your full NNs without having to distill them. Just point out that distillation is lowering performance of your models and the cost of explainability.
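
A minimal sketch of that distillation trick (stand-in models, random probe data, nothing tuned): fit a shallow tree to agree with the black box's outputs, then show management the tree.

import numpy as np
import torch
import torch.nn as nn
from sklearn.tree import DecisionTreeClassifier

blackbox = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))   # stand-in for the real NN

X_probe = np.random.randn(5000, 10).astype("float32")                      # randomly generated probe inputs
with torch.no_grad():
    teacher_labels = blackbox(torch.from_numpy(X_probe)).argmax(1).numpy() # the NN's own predictions

surrogate = DecisionTreeClassifier(max_depth=4).fit(X_probe, teacher_labels)
agreement = (surrogate.predict(X_probe) == teacher_labels).mean()
print("surrogate agrees with the NN on", round(100 * agreement, 1), "% of probes")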

Also, the best way to pass a project is to discuss the monetary advantages of it. Identify the need, opportunity, and monetary gains relative to performance improvement of your method against the baseline. For example, if a call center operator costs 20k/y and your model can answer 1% of phone queries, and your company hires 100000 indians, that's 1000*20k gain per year. They'll appreciate that a lot.

I am not backpedaling. It's incredibly idiotic to believe that a new model or a new layer is being tested on some super big cluster of GPUs. There is literally no point to it.
It's just like trying to explain to someone that rocket fuel won't make your car a rocket.

AlexNet was.
It was so big they had to split the network to make it fit on several GPUs.
It was GTX 580s at the time, but you get my point.
Also, the current GAN stuff requires GPUs with massive amounts of RAM.
The research on trying to find a better recurrent neural net (comparing LSTMs with GRUs and trying to make up better architectures by trying stuff) takes Google-tier amounts of resources.
And the meta-learning stuff that trains neural nets to train other neural nets can't be done on a citizen-scientist scale.

what is in those if statements?

looks more like a schizopost to me

Sounds like a relatively novel subset of AI problem. Might have to just bite the bullet and build a custom NN from the ground up and try various models. Why is it important that it is non-autoregressive? What kind of output are you trying to generate? You mention that the data's range is unknown, but this is useless without context. Are we stepping in ints, polling floating point, using offsets? What input are we assuming? The representation of the data is important to think about in making the problem computationally tractable. Also, how is the data continuous if the timestep isn't fixed? What axis is it continuous on? How many dimensions do we need to represent the data? I don't know a lot about biological data unfortunately. Also, you might want to cut back on the jargon if you want more replies trying to actually help you. You come off as an arrogant brat trying to show off with their knowledge of esoteric words. Break the problem into what you are trying to solve, givens, unknowns, various representations of these, and then look at transformations on the input givens that result in the unknowns. Taking a step back and looking at the problem in its simplest form and its possible representations can be very helpful for solving issues when you hit a wall. Set aside your assumptions. I find a lot of the time, if you can't find a way to make a program do something, your data model can usually be improved to a more standard model that you know how to handle... Usually. Again, you didn't give much info about what you are actually trying to solve specifically, so it is hard to give any good specific advice. Anyway, I've gtg, good luck.

Oh I know all there is to know about Minsky.

Attached: Screen Shot 2019-08-10 at 10.34.13.png (1108x737, 839K)