Which is better?

Attached: pytorch-vs-tensorflow.jpg (270x297, 11K)

Attached: download.png (417x121, 3K)

Pytorch

I prefer PyTorch: a good balance between high-level model definitions and lower-level control/fine-tuning. Easier to learn than TF too, imo.
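To illustrate that balance, a minimal sketch (my own toy example, not anyone's production code): the model definition is high-level and declarative, while the training loop is plain Python, so you can inspect or modify anything between any two lines.

```python
import torch
import torch.nn as nn

# High-level: declare the model with nn.Module / nn.Sequential.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

    def forward(self, x):
        return self.net(x)

model = TinyNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(16, 4)
y = torch.randn(16, 1)

# Low-level: the training loop is ordinary Python, so clipping
# gradients, logging tensors, etc. is just inserting a line here.
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

print(loss.item())
```

The same structure scales up: swap `TinyNet` for a real architecture and the loop barely changes.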

/thread

Use tensorflow.keras

Based

babby shit

Keras/tensorflow if you need something scalable.

Pytorch if you want to try new ideas or generally do anything without wanting to claw your eyes out.

I like Pytorch for building new models and testing them. Tensorflow isn't too bad though

I have limited experience, but as far as I can tell it's easier to find models implemented in PyTorch. They're also easier to read, because the code is more "pythonic", and so easier to reuse/extend. Maybe that's because PyTorch is used more in research and TensorFlow more in production, so there's more incentive for people to share on one side and incentives against sharing on the other.

Also, TensorFlow is in the process of becoming more "pythonic": the API is being rewritten to keep the declarative graph aspects under the hood. So it's in this weird transition period where the stuff Google has tested works well, but you might need some feature and find out it's currently unsupported in eager mode, which leaves you the options of waiting for it to be ported or rewriting everything with the old APIs. You don't want to use the old APIs from the start either; they're much more cumbersome and harder to understand.

Another problem I saw with TensorFlow was the lack of documentation. You'd search for some TensorFlow material, sometimes stumble on amazing code with great abstractions, and then find nothing of the sort in the docs. To me at least it felt like if you're working with someone from Google, or one of the professors deeply involved with it, you can make it work well, but everyone else is sort of screwed. Maybe the API changes will fix this too. Not sure.

tl;dr - at the moment, if I had to choose, I'd use PyTorch, unless the model I wanted was only implemented in TensorFlow and/or someone who knows TF well was available as a mentor of sorts.

flux.jl

Tensorflow 2.0 Alpha. Has Keras built in, experimenting is as easy as PyTorch (no session bullshit), backwards compatible with previous TF versions, high performance.

Based and Juliapilled

I would like to start learning more about this shit. I'm taking a data mining class and we touched briefly on neural nets and perceptrons, but not much. Do any of you guys have a tutorial to recommend for getting into it? Also, what's it written in? Python, I would guess.
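Since perceptrons came up: the whole algorithm fits in a few lines of plain Python, no framework needed. A toy sketch (my own, with arbitrary learning rate and epoch count) that learns the AND function:

```python
# Minimal perceptron: weighted sum + threshold, trained with the
# classic perceptron update rule. Learns the AND gate.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            # Nudge weights toward the target whenever we misclassify.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # → [0, 0, 0, 1]
```

A single perceptron only handles linearly separable problems (it can't learn XOR), which is exactly why multi-layer nets exist.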

Attached: 1533868133457.jpg (750x1334, 97K)

If you want to get shit done without knowing any math/theory, read this.

Attached: cBQwSYr_zecIOkd4OgUjvrl1RGgRsVkXLNrP2LBx9ougQhukOx0l4dRz9B2hnAO7pu7FPY52xDzQNzWvcTO6phjWMAoXOoM432Lf (318x400, 26K)

It looks like pytorch's distributed computing components are more mature than Tensorflow's, at least.
After I finally got Tensorflow working on my Pis, the distributed computing examples produced errors or anomalous output and left sockets stuck open.
Pytorch seems to implement something similar to MPI, which is pretty simple to understand and use.
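For reference, a single-process sketch of the MPI-style collectives that `torch.distributed` exposes. I'm using the `gloo` backend with `world_size=1` so it runs standalone; a real deployment launches one process per rank (per Pi, in this case) with matching master address/port:

```python
import os
import torch
import torch.distributed as dist

# Rendezvous info every rank must agree on. With world_size=1
# this is a self-contained demo on one machine.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

t = torch.ones(3)
# all_reduce is the analogue of MPI_Allreduce: every rank ends up
# with the elementwise sum of all ranks' tensors.
dist.all_reduce(t, op=dist.ReduceOp.SUM)
print(t.tolist())  # with a single rank, the tensor is unchanged

dist.destroy_process_group()
```

There are also point-to-point `send`/`recv` calls, which map almost one-to-one onto the MPI primitives mentioned above.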

Start by getting a proper understanding of linear algebra.
Unless of course you want to be the ML equivalent of a script kiddie, in which case you should look for some dude with a fauxhawk on youtube who does ML videos.

I've taken linear algebra, got an A, and understood most of what was covered.

TF 2 has Keras integrated now.

Keras is mostly used as a frontend to TF; in what way is it better?

Then just pick up a maths heavy textbook on ML and go to town.

Is Kevin Murphy's Machine Learning: A Probabilistic Perspective a solid choice?

Pytorch.