>Surviving DL as a brainlet edition
Discuss
>Books
>Courses
>Other Resources
Also general ML banter thread
/dlg/ - Deep Learning General
Other urls found in this thread:
sgsa.berkeley.edu
deeplearningbook.org
twitter.com
How much do the cybersecurity and machine learning worlds overlap?
Also anyone taken Andrew Ng's ML course and got an opinion?
Also interested in the first question
>Andrew Ng's ML course
It's a good course! Forget about MATLAB, there are forks of the homework in a lot of languages
Apache Spot, a pretty cool open-source project that uses Latent Dirichlet Allocation to identify unusual network traffic.
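The Spot idea can be sketched roughly like this: each flow becomes a "word", each source IP becomes a "document", and hosts whose words are rare get flagged. Everything below is made up for illustration — the IPs, the ports, and the rarity score, which is only a crude stand-in for the per-topic word probabilities a real LDA model would learn:

```python
from collections import Counter
import math

# Toy netflow records: (src_ip, dst_port, protocol) -- all hypothetical.
flows = [
    ("10.0.0.1", 443, "tcp"), ("10.0.0.1", 443, "tcp"),
    ("10.0.0.2", 443, "tcp"), ("10.0.0.2", 80, "tcp"),
    ("10.0.0.3", 31337, "udp"),          # unusual port/protocol combo
]

# Step 1: turn each flow into a "word", as Spot does before fitting LDA.
words = [f"{port}/{proto}" for _, port, proto in flows]
corpus_counts = Counter(words)
total = sum(corpus_counts.values())

# Step 2: group words into one "document" per source IP.
docs = {}
for ip, port, proto in flows:
    docs.setdefault(ip, []).append(f"{port}/{proto}")

# Step 3 (stand-in for LDA): score each host by the mean negative
# log-probability of its words under the corpus-wide distribution;
# rare traffic patterns score high.
def suspiciousness(doc):
    return -sum(math.log(corpus_counts[w] / total) for w in doc) / len(doc)

scores = {ip: suspiciousness(doc) for ip, doc in docs.items()}
most_suspicious = max(scores, key=scores.get)
print(most_suspicious)  # the host with the rarest traffic pattern
```

The real project fits an actual topic model and ranks flows by how unlikely they are under the learned topics, but the document/word framing is the same.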
I was going to ask whether I should just use Octave like he says, or use something else? Sorry, I don't know what you mean by fork (noob)
This looks cool, thanks
this could be a cool thread if people posted in it
Both use linear algebra??
Don't worry, the next iterations will pick up
ayy
Someone upvote this trap and give it gold
Pretty useful info
You think this is a forum full of industry professionals when in reality it's just edgy kids thinking they are Mr. Robot.
If you want interesting content go to hacker news
Yup, you won't believe just how low the quality here generally is.
But there is a sense of collective misery here, and we enjoy wallowing in it.
It's not about finding the best material, user, but more about finding material that we can agree suits people like us.
Believe it or not, we are all autistic here, even if you aren't on the spectrum
Linear Algebra Shitfest General
>What are you training your nets to do Jow Forums?
>What are some really autistic data sets out there?
>I want a neural net waifu.
>I want to recurrently propagate deep into her back.
You don't like LinAl? Are you a onions goy?
>Don't you want to be able to tell with 97% accuracy if you should microwave your cum or not?
Autistic as fuck.
>Using ML to predict shit that doesnt apply the necessary assumptions
>Seeing ML results as probabilities instead of fuzzy truth measures
Please read a book or two about fuzzy logic before embarrassing yourself with muh neural nets
I do a little work in it but I'm not extremely knowledgeable yet.
when you're too dumb to solve for f()'
See
At my cybersecurity club we had a presentation from a man who was working between the fields, based on a recently published paper.
The paper was focused on adversarial techniques against deep neural networks, and some of the results were pretty shocking. He showed how, by passing an image of a stop sign through an algorithm, a German deep neural network designed to recognize street signs instead read it as a yield sign. The perturbed image looked almost identical to the original.
So basically, yeah. They cross quite a bit.
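That attack family (the fast gradient sign method, FGSM) is easy to sketch on a toy linear model. The two-class "stop vs. yield" framing below is entirely hypothetical and the model is just random weights; the point is that a small, uniformly bounded per-pixel nudge along the sign of the gradient flips the predicted label:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" standing in for the sign-recognition net:
# 2 classes (say, stop vs. yield) over 16 "pixel" features.
W = rng.normal(size=(2, 16))
x = rng.normal(size=16)

def predict(v):
    return int(np.argmax(W @ v))

clean_class = predict(x)
other = 1 - clean_class

# Gradient of the score margin (other class minus true class) w.r.t. input.
grad = W[other] - W[clean_class]

# Smallest uniform step along sign(grad) that flips the decision.
margin = (W[clean_class] - W[other]) @ x        # > 0 while prediction holds
eps = 1.01 * margin / np.abs(grad).sum()

x_adv = x + eps * np.sign(grad)                  # the FGSM step
adv_class = predict(x_adv)

print(clean_class, adv_class)                    # the label flips
print(float(np.max(np.abs(x_adv - x))))          # per-pixel change == eps
```

In the real attack the gradient comes from backpropagation through the network, and epsilon is kept small enough that the perturbed sign still looks unchanged to a human.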
>Namefag doing namefag things
Why aren't we surprised?
OP here
>I'm really glad you nigglings are liking this thread
If NNs are meant to imitate the behavior of biological neurons, why don't we humans need huge datasets to learn shit? Are there NNs for small data?
>Look into one shot learning
NNs are nowhere close to being as good as actual neurons
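For a rough idea of what one-shot methods do: classify a query by its similarity to a single labeled example per class, measured in an embedding space. In the sketch below the "embedding network" is just a fixed random projection and the class vectors are synthetic — a real system (e.g. a Siamese network) would learn the embedding:

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(x, P):
    # Stand-in for a trained embedding network: a fixed random projection,
    # normalized so the dot product below is cosine similarity.
    v = P @ x
    return v / np.linalg.norm(v)

P = rng.normal(size=(8, 32))

# One labeled "support" example per class -- that's the entire training set.
support = {"cat": rng.normal(size=32), "dog": rng.normal(size=32)}

# A query that is a slightly noisy copy of the cat example.
query = support["cat"] + 0.1 * rng.normal(size=32)

emb_support = {label: embed(x, P) for label, x in support.items()}
q = embed(query, P)

# Classify by highest cosine similarity to the single example per class.
pred = max(emb_support, key=lambda label: emb_support[label] @ q)
print(pred)
```

The whole trick of one-shot learning is making the embedding good enough that "nearest support example" is usually the right class.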
Shalom faggots how do I go about finding myself a pretrained neural net which I can fap to?
It needs to be at least 4 hidden layers deep.
If you want to actually think about it like that, we do have huge datasets. Imagine how much data you are constantly taking in. It goes beyond the trivial "me see video me hear audio me smell stink" as well.
This is also related to how """"deep learning"""" is relatively dumb shit and doesn't actually get anywhere close to solving any problem in an intelligent "learned" manner.
Just hypothetical, what would i need to study to create the following:
Input:
datasets for individual players in the german bundesliga. (weather, health, status of cuckoldry, the like...)
they would be acquired from the results of the last two seasons or more
Output: a prediction for the outcome of all the games on a particular gameday.
The algorithm is supposed to calculate the strength of a team based on
the individual strength of the players, their cooperation, and motivational shifts.
And if possible, to a degree that you would cease to lose money in the long run when betting on the outcome of games.
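A minimal sketch of that setup, with entirely synthetic data: aggregate per-player features into a team feature vector, take the home-minus-away difference, and fit a logistic regression for P(home win) by gradient descent. Every number here is made up; the point is the shape of the problem:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each team's feature vector is the mean of its
# players' per-match features (fitness, form, minutes played, ...).
n_matches, n_features = 200, 4
home = rng.normal(size=(n_matches, n_features))   # home-team features
away = rng.normal(size=(n_matches, n_features))   # away-team features
X = home - away                                    # model the difference

# Synthetic ground truth: home wins when a weighted feature edge is positive.
true_w = np.array([1.0, -0.5, 0.8, 0.2])
y = (X @ true_w + 0.3 * rng.normal(size=n_matches) > 0).astype(float)

# Logistic regression by gradient descent: P(home win) = sigmoid(X @ w).
w = np.zeros(n_features)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / n_matches

acc = float(np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y))
print(round(acc, 2))  # in-sample accuracy well above chance
```

Real match data is far noisier than this toy, which is exactly why beating the bookmakers' margin is the hard part, not fitting the model.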
Neural networks don't start off with the same biases we do (they have bias if you want them to but it is different).
Do not assume people are blank slates. Problems are modeled naturally in the brain differently than a programmer putting information in. Also, do you really think the human mind juggles around random variables and plays with the weights according to a sigmoid function?
What is that user?
>Looks like you're learning how to drive a flying car
Uh yeah
Here,
scribd.com/document/382118808/Ml-Dl-Interns
Doing my doctorate in surveillance using ML and DL; did my master's on it too. This is the document I give to beginners, master's students under me, and whoever comes for an internship to go through.
Also any other questions, i can help you faggots.
You a pajeet by any chance?
nope. asian though if that helps.
why?
Nah, just asking. I am a pajeet, you see, and I'm interning atm. I need a quick, intensive one-month blitzkrieg of an intro to deep learning. I literally don't give a shit about the internals as long as I can reach 94% accuracy on the objective, which beats the classical methods. Any advice? The prof is making me do ConvNets in TF on MNIST to get started. However, he's going way too slow; I need more. Can you help me out?
Are you serious? The MNIST benchmark is already at 98-99% currently.
Even the harder MNIST variants, with rotated and noisy digits, are at around 90%.
But what you can do is use autoencoders and variational autoencoders to speed up your setup, as they help in finding similarity between images.
If you are using conv nets and are hellbent on it, then you'd better print the outputs at each layer, especially if you are using already set-up networks from GitHub like ResNet (pretrained on ImageNet), etc. They have all been fine-tuned to hell.
In your case that's bloat as fuck, so you need to print what the output is at each layer; if you've studied even a beginner course on ML, you know there are certain parts of an image which get activated.
For example, consider a 3 and an 8: if by the 3rd layer of the network the activations start lighting up on the left side of the 8 image but not the 3 image, your network is already identifying the difference by that layer, so any extra layers you can drop.
You also need to fine-tune your softmax layer and the final FC layer, and tune all parameters once your number of layers is decided.
Pretty sure this should improve your setup more than running pre-built networks taken from GitHub.
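The per-layer inspection above can be sketched in plain NumPy. The tiny dense "network" below is only a stand-in for a real conv net — random weights, a flattened 8x8 "digit" — just to show printing activation statistics layer by layer; in Keras you would instead build a model whose outputs are each layer's output tensor:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in network: two layers done as dense ops on a flattened 8x8 input.
x = rng.normal(size=64)
layers = [rng.normal(size=(32, 64)) * 0.1, rng.normal(size=(16, 32)) * 0.1]

a = x
for i, W in enumerate(layers):
    a = np.maximum(0.0, W @ a)            # ReLU activation
    frac_active = float(np.mean(a > 0))   # which units "light up"
    print(f"layer {i}: mean={a.mean():.3f}, active={frac_active:.0%}")

# If a deep layer's activation pattern barely differs between two classes
# (e.g. a 3 vs. an 8), the layers after it add no discriminative power
# and are candidates for removal.
```

Comparing these per-layer statistics between inputs of different classes is exactly the "which layer starts telling 3 from 8" check described above.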
Oh hang on, the 94% was not for MNIST. That was for another problem, the one I actually want to work on, which is making behavioral predictions. The MNIST thing was just for practice.
Anyway thanks man
this sounds really fucking interesting. any chance he has info online about it?
based ravenposter
what are the security risks of ML/DL surveillance?
I'm having a flick through the document. Would you recommend that I go through the maths portions before taking the Ng course?
>calculate strength of team
What units do you want? Waving your hands around and saying "amount of teamwork and individual players" is not going to cut it
>Dick size
>Number of Jews in the team
>Num of homoerotic scandals
Etc
Currently it's bias. Because of the SJW and PC stuff being pushed, we are wasting time doing stupid bias analyses that aren't even the focus of our work. For example, if we did car detection in one area, we have to say why we didn't choose other areas and whether there were any political reasons behind it. FFS.
The reason for this is that most of the good conferences' chairs are American, along with others from Germany, who are also libertarian in their thinking most of the time.
1/3
Regarding security risks: inherently, for us, it's that you can not only completely clone an online persona but also create multiple personas of the same person. People see bots on Twitter and think they just reply to you based on keywords; no, they actually reply based on your sentence formation, and answer in kind, with context.
Machines have started learning that it's not our similarities that make us human but our imperfections. That's why, when Alexa added a "hmm" sound while talking, it completely threw that person off when he found out it was a bot; previously you could tell it was a bot because it paused unevenly at certain times. Now, after training on highly variant examples, it just uses filler words while it's doing its calculations.
2/3
FYI, the Machine from Person of Interest was achieved a couple of years back, and has been improved a lot since, as I saw when I went to a talk at a defence organization.
Another risk: when two machines were set up to communicate with each other, they suddenly created a new language and started communicating in it. It took months to completely decode that language, where it took the machines a few hours to come up with it.
To tackle this, currently read up on PPDP and PPDM, which are guidelines you have to go through to create a safe ML algorithm. Just google them.
3/3
Absolutely, do not skip. Don't get frustrated if it's taking a long time, because once you get the math down you will breeze through the beginning like anything and understand the reasons well.
If you think the math is getting too hard for you and you feel you can't do the examples anymore, then you can start the courses.
another thing is
1/2
checking them myself
Another thing: ML is a new enough topic that you shouldn't get worried if you don't understand something. Remember how you learnt the multiplication table: you didn't question it, because that's the base. Similarly, you'll hit a few words and concepts which only connect later, the deeper you go. Just note down what you didn't understand and you'll see why it was needed there.
I feel like these things will naturally become lower priority over time as the importance of security becomes more mainstream. Unfortunately, like everything else where tech and politics overlap, I fear it will be too late.
thanks man. I'm taking heavy notes on the Ng course.
I saw the first book is a powerpoint presentation. Is there anything extra I need to use or can I run through it as a presentation?
check em
Nothing, just go through it. I've arranged it so that if there are four topics under ML, they are in increasing difficulty, so you can go through the first links in all topics, then the second ones, and so on.
oh shit i almost forgot
one does not simply post without posting his waifu
Install Gentoo.
>>Dick size
>>Number of Jews in the team
>>Num of homoerotic scandals
>Etc
Alright, I trained a LeNet with Caffe on my own data: about ~30K examples of 96×96 images separated into 3 classes.
After 15K iterations I get a decent 85% accuracy and a low loss, but when I try to deploy it, it always gives the same output whatever the input image is. How can I debug this mess?
>same output whatever the input image
This means your classifier has become a constant function, always giving the same output class on any input data.
There are a number of possible reasons:
>Dead ReLU(s) in the network
>Vanishing gradient issues
>Incorrect weight initialization
>Weights are either getting saturated or diminished
>Data is not properly pre-processed (normalized, scaled, zero-centered)
>The depth and/or width is inadequate for the input data
>There is a logic bug in the code
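Two of those checks (dead ReLUs, collapsed predictions) can be scripted directly. The tiny stand-in network below is deliberately broken — a large negative hidden bias kills every ReLU — which produces exactly the "same output for every input" symptom:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in net: one hidden ReLU layer, 3 output classes, deliberately
# mis-initialized with a large negative bias so every hidden unit is dead.
W1, b1 = rng.normal(size=(32, 64)) * 0.1, np.full(32, -10.0)
W2, b2 = rng.normal(size=(3, 32)) * 0.1, rng.normal(size=3)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)   # hidden ReLU activations
    return h, W2 @ h + b2              # (activations, class logits)

batch = rng.normal(size=(100, 64))

# Check 1: fraction of hidden units that are zero on the whole batch.
hidden = np.array([forward(x)[0] for x in batch])
dead = float(np.mean(np.all(hidden == 0.0, axis=0)))
print(f"dead hidden units: {dead:.0%}")   # 100% here -- the smoking gun

# Check 2: how many distinct classes get predicted over the batch.
preds = [int(np.argmax(forward(x)[1])) for x in batch]
print("distinct predicted classes:", len(set(preds)))   # collapses to 1
```

On a healthy network both numbers change: few or no dead units, and predictions spread over all classes; running the same two checks against the deployed model (and against the training-time model, to catch deploy-path preprocessing mismatches) narrows down which failure from the list applies.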
time to brush up on my linear algebra
Yup, you have to. Also, looking at the attention this thread got, I think I'll start up a DL general whenever one isn't up, if I'm free.