Pytorch on suicide watch

youtube.com/watch?v=jh4ITuOytE4
How will they ever recover?

Attached: google-tensor-flow-logo-black-S.jpg (1600x1200, 131K)

import torch

# torch.jit.script compiles the function, if/else included:
# the branch below depends on tensor *values*, which is exactly
# the kind of control flow a static graph makes painful
@torch.jit.script
def foo(x, y):
    if x.max() > y.max():
        r = x
    else:
        r = y
    return r


did you make this video? google has a couple hundred devs on tf yet fb makes it look easy with like 10 devs on pytorch

what's the problem here fag boy

>did you make this video?
lmao I wish I worked at google. Maybe indirectly by solving captcha.
I'm just doing meme machine learning as a hobbyist.
I got started with tensorflow until I realized it sucks at doing RNNs because of the static computational graph. Currently seething because pytorch makes it easy.
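example of why the dynamic graph matters: in pytorch, control flow that depends on the data is just python, and the graph gets rebuilt every forward pass. rough sketch (toy shapes and names, made up by me):

import torch

# the loop length depends on the data itself, something a static graph
# can't express without special control-flow ops
def run(x, h, w):
    steps = 0
    while h.norm() < 10 and steps < x.size(0):
        h = torch.tanh(x[steps] @ w + h)   # one "RNN" step, plain python loop
        steps += 1
    return h, steps

x = torch.randn(5, 3)   # 5 timesteps, feature dim 3
h = torch.zeros(3)
w = torch.randn(3, 3)
h, steps = run(x, h, w)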

I was trying to bait so a nice conversation would start and I would lurk and learn something.

Pytorch/Tensorflow is the new Linux/Windows

sounds like tensor flow is on suicide watch

Isn't tensorflow competing against both pytorch and caffe2?

thanks for the read gay OP I didn't know about pytorch and was gonna start learning TF next week. now im not sure and will probably spend a month asking which one i should use instead

caffe2 is being merged into pytorch as its backend

it's just pytorch v tensorflow now

pytorch without a doubt

pytorch/tensorflow is more like mac os/windows

Didn't know about that, thanks.
So the difference is basically static computational graph vs dynamic?

wtf is this shit

static v dynamic is akin to compiled vs interpreted

both pytorch and TF now have both types of APIs, but pytorch focuses more on dynamic and tf more on static
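rough contrast between the two styles, assuming the TF 1.x API (a sketch, not gospel):

# static (TF 1.x): describe the graph first, run it later in a session
import tensorflow as tf
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = a + b                              # just a graph node, nothing computed yet
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))

# dynamic (pytorch): ops run immediately, control flow is plain python
import torch
print(torch.tensor(2.0) + torch.tensor(3.0))   # computed right here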

On the one hand I want to get on the pytorch train.
On the other hand, the inferior product is often the one that wins

Who the fuck uses pytorch? TensorFlow is a decent API.. Are people so brain dead that they need an API for an API that's making C calls?

pytorch is shit.

with tensorflow you can use keras.

keras >>>>> ALL!
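it really is this short. minimal sketch of a keras MLP (made-up sizes, assuming the Sequential API):

from tensorflow import keras

# tiny MLP classifier: define, compile, then fit on your data
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(784,)),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_train, y_train, epochs=5)   # given some (N, 784) data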

>caffe2
lol caffe. we are not in 2010 anymore.

TF is pretty good if you know what you are doing and have a lot of resources. But it's so bare you have to write code for everything

>neural network on python
>python = slow, resource hog
>not using openNN, C++ = fast

python is for pajeets.

Which one's easier to do neuroevolution in? As far as I've seen, pytorch has Uber guys doing that sort of stuff with it

deepmind switched from torch to tensorflow quite recently, if I followed the stories correctly

you pajeet, all the science python stuff is wrappers over C

does deepmind even do neuroevolution? I just know them for dqn and alphago

>does deepmind even do neuroevolution?
I don't know much about them, but in youtube.com/watch?v=popwpnggc2g she says she did her PhD in cognitive neuroscience, so I guess it's related

be a good goy and use (((pytorch)))

I was going to do that but instead I just used TensorFlow and somehow made a little money

AI fags are worse than frontend retards

Call me when TF isn't slower than molasses at both train and test time. At one point its loop unrolling for non-trivial RNNs was slower than theano's compile times, even though the whole point of TF back then was supposedly no compile step.

Torch:
- very flexible
- very fast
- very concise in syntax
- could do conditional computation since day 1

TF:
- cumbersome
- changes every minor version so code sharing never works and you can never update TF in the middle of your research
- slow as balls
- extremely verbose in syntax
- can only be extended with C/C++ modules (extremely error-prone, useless segfault if any issue happens, even if it's just the shape of a matrix being off by one somewhere)
- (((keras))), a.k.a. the buggiest, most unstable library known to man, developed in lennart-style CLOSED WONTFIX.
- Too low-level for actual use without (((keras)))

Literally nobody who does ML research uses TF. Even at google it's split between """engineers""" using TF or reimplementing research in TF, scientists using torch, and clueless retards crippling themselves with TF for """applied research""" (aka engineering).

Caffe was never really relevant. It was good for CNNs and not much else, so it never really got traction. Advances made in caffe were quickly ported to the other frameworks.

>neuroevolution
DeepMind isn't into memetics, if that's what you're asking.
They do RL (dqn, atari), and they do classical NNs (DRAW, for example).
Most of their work is still "take this model that has existed for years and run it on a billion GPUs" rather than actual contributions.

Most NN python programs are thin wrappers around the fortran and c libraries that actually crunch the numbers

Not really, they're mostly wrappers around C and CUDA. NNs on CPU aren't really viable nowadays.
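which is also why moving things to the GPU is one line; the python is just dispatch, the kernels are CUDA. sketch:

import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = torch.randn(1024, 1024, device=device)
y = x @ x   # dispatched to a cuBLAS kernel on GPU (or BLAS on CPU);
            # the python line is only the dispatcher either way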

Any good resource for understanding backprop?
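
not a resource, but the core idea is just the chain rule applied backwards through the graph. toy sketch checking a hand-derived gradient against autograd (my own made-up example):

import torch

# loss = (w*x - y)^2, so dloss/dw = 2*(w*x - y)*x by the chain rule
x = torch.tensor(3.0)
y = torch.tensor(2.0)
w = torch.tensor(1.5, requires_grad=True)

loss = (w * x - y) ** 2
loss.backward()                   # autograd walks the chain rule for you

manual = 2 * (w * x - y) * x      # same gradient, derived by hand
print(w.grad.item(), manual.item())   # both print 15.0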

What makes torch so much better?
are there some high-level architectural decisions that fucked TF?

Does CUDA even work on TPUs?

>Literally nobody who does ML research uses TF. Even at google it's split between """engineers""" using TF or reimplementing research in TF, scientists using torch, and clueless retards crippling themselves with TF for """applied research""" (aka engineering).
I might have fallen for the shilling, but I was under the impression that tensorflow was the popular framework of choice.

>pytorch/tensorflow is more like mac os/windows
sklearn would be linux
julia would be GNU Hurd