So nvidia has basically cucked gamers and gone all in on deep learning.
Any AI fans who wanna circlejerk about the new age of cheap, affordable ML the 2080 will usher in? Personally I'm pretty thrilled about feature transfer and AIGI possibilities.
>brainlets in 1960
>if you can't cough up $60 million for a Cray you shouldn't be doing CS
Caleb Thomas
A) to learn in my spare time, B) the pleasure of building/owning a powerful machine (men like to do this stuff; the '50s equivalent was buying and maintaining a macho car), and C) I want to do my own personal AI applications as a hobby.
Also, think of the difference between solving an everyday FEM problem on your PC vs. having to use a supercomputer. For every engineering problem there will always be a PC market and a supercomputer market for the harder problems. I think we're making the shift toward PC ML now. Also, having tensor cores is going to matter a lot in the years to come: DL is invading literally everything, and many applications will leverage these cores to cheaply evaluate models and deliver better performance. It's only a matter of time.
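For a concrete sense of what "cheaply evaluating models" on tensor cores means, here is a minimal NumPy sketch of reduced-precision inference. NumPy runs this on the CPU; real tensor cores do roughly this fp16 multiply in hardware (with fp32 accumulation), which is where the speedup comes from. The layer sizes are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy dense layer: weights and an input batch, stored in full precision.
W = rng.standard_normal((64, 32)).astype(np.float32)
x = rng.standard_normal((8, 64)).astype(np.float32)

# Full-precision evaluation.
y_fp32 = x @ W

# Half-precision evaluation: cast down, multiply, cast back.
y_fp16 = (x.astype(np.float16) @ W.astype(np.float16)).astype(np.float32)

# The two outputs agree closely; for inference the precision loss is
# usually negligible, which is why half-precision evaluation is so popular.
max_err = np.abs(y_fp32 - y_fp16).max()
print(max_err)
```

Training is more sensitive to precision than inference, which is why mixed-precision setups keep an fp32 master copy of the weights.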
Ryder Young
I know of a lot of research groups using a 1080 or 1080 Ti for training.
Ian Jackson
>As a master student working on ANNs
So you are a "master's" student and still don't know why people would want to study and play around with deep learning in their spare time?
Lucas Hernandez
>Do people really do ML as a hobby? What do you even train models for in your spare time? The stock market?
I personally trained one to recognize loss memes. It was not worth it
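For anyone wondering what hobby training even looks like: a meme classifier is, at its core, just a binary classifier over image features. Here is a minimal sketch using plain NumPy logistic regression on synthetic features (the real thing would run a convnet in something like Keras on actual images; the data here is made up stand-in features):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for image features: class 0 centered at -1, class 1 at +1.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 16)),
               rng.normal(+1.0, 1.0, (100, 16))])
y = np.array([0] * 100 + [1] * 100)

# Logistic regression trained with plain gradient descent.
w = np.zeros(16)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of the log loss
    grad_b = (p - y).mean()
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# Training accuracy on this easy synthetic data should be near perfect.
acc = ((p > 0.5) == y).mean()
print(acc)
```

Swap the synthetic features for pixels or pretrained-network embeddings of actual memes and you have the hobby project in question; whether it's worth it is another matter.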
Adrian Russell
>20 years later
>AI has advanced by leaps and bounds and has transformed everyday life
>think back to when you were playing around with Keras and making networks recognize memes
>smile wistfully as you continue walking to the human slave labor camp
Levi Ortiz
I've been working as a freelance artist for a decade and use it to speed up my workflow. What takes other artists a whole day only takes me an hour. If I had 100x more processing power I could train a model to generate art far beyond my own capabilities and make automatic adjustments to it through additional instructions. ML research is much further along than people realize. It's just a matter of applying it to real-world problems and bringing it to market.
Lincoln Martinez
the only "real world problems" ML is solving is mass surveillance
Is a GTX 1070 enough power for someone who is starting out in machine learning?
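For starting out, almost certainly. A rough back-of-envelope check (the model size and optimizer choice here are illustrative assumptions, not measurements): a 1070 has 8 GB of VRAM, and the weights plus training state of a mid-sized convnet are a small fraction of that.

```python
# Back-of-envelope VRAM needed to train a model, float32 throughout.
GB = 1024 ** 3

params = 25_000_000        # a ResNet-50-sized model, ~25M parameters (assumed)
bytes_per_param = 4        # float32

weights = params * bytes_per_param   # the model itself
gradients = weights                  # one gradient per weight
optimizer = 2 * weights              # e.g. Adam keeps two moment buffers

total_gb = (weights + gradients + optimizer) / GB
print(round(total_gb, 2))
```

That leaves most of the 8 GB for activations, which scale with batch size and usually dominate in practice, so the practical answer is: yes, just use smaller batches when you hit the limit.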
Leo Ross
It's pretty remarkable, really, how much AMD has dropped the ball on this. They have zero support in TensorFlow, CNTK, and the other big tools. Why?
Daniel Murphy
It costs time and money to hire developers, time and money AMD didn't have until just recently. For the last 4-5 years AMD focused almost all their effort on developing Zen, so RTG didn't get much in the way of funds. However, now that AMD has positive cash flow, expect them to invest more in RTG and their enterprise business (ML). They will have a 7nm Vega specifically made for enterprise; I would expect them to support it on the software side to try and get people away from CUDA and onto their ecosystem.
Josiah Bailey
I thought it was also a hardware problem, in that AMD's GPUs are different enough from Nvidia's that ML on them is extremely inefficient. Otherwise I'm sure someone would have already ported TensorFlow to support AMD cards via OpenCL or something.
Jaxson Taylor
Should we tell him about Nvidia's business practices yet? And how any unauthorized use is a violation of the terms you agreed to when you logged in and downloaded the Nvidia drivers? Resulting in legal action and banishment from future ownership rights to Nvidia's proprietary products?