Jow Forums how do we feel about the new NVIDIA development board

newegg.com/Product/Product.aspx?Item=N82E16813190009
nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/
So it seems to be made for AI and running powerful scripts that you couldn't do on your normal PC. It's not out yet, but people got it early and are testing it. You could use it to power an autonomous drone, but you can do that cheaper with other parts.
So what does Jow Forums think?

Attached: NVIVIDAdevelopmentboard.jpg (1280x960, 169K)

I think giving Nvidia money for any reason is a bad idea until they make a proper FOSS driver stack.

>freetard
cringe

>no rs-232

>FOSS
fuck off linux cuck

It's a Linux board. Expecting proper upstream driver support is entirely reasonable.

>novideo
Just get a Sipeed K210 devboard.
Agreed. If you're bored, grab a BlackIce II too.

it's not the 90's anymore

Why would you want a fucking driver to be open source? I see no advantage to it over closed source

Linux drivers that aren't open source eventually stop working, and are typically poorly written because they have to reimplement a big piece of the kernel themselves. Open source drivers provide a 100% plug and play "just werkz" experience, whether it's for a USB dongle or an RX 580.

>It's not out yet
then why should i care?

enjoy no games cuck

> what is Cisco

>eventually stop working
>typically poorly written
>source: my ass

Attached: 1487456663955.jpg (236x236, 10K)

>Various studies have shown that software broadly contains something like 6-16 bugs per 1000 lines of code and that device drivers have 3-7 times as many bugs as the rest of the operating system.
osnews.com/story/15960/introduction-to-minix-3/

Attached: 1531504815254.png (876x556, 512K)

>Maxwell 128 CUDA cores
This isn't running anything that a normal PC can't unless that PC has like a GT 1030. What it is doing is putting modest GPU compute power into a tiny device that can be used in robots and so forth instead of streaming all that data to a PC or cloud server for processing.
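If you want a feel for what "on-device" means here, something like this is roughly it (just a sketch, assuming a JetPack image with a CUDA build of PyTorch installed; resnet18 and the random "frame" are only stand-ins):
# Run a small vision model on the board's own GPU instead of streaming frames out.
import torch
import torchvision.models as models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.resnet18(pretrained=True).eval().to(device)

# Stand-in for one 224x224 RGB camera frame.
frame = torch.rand(1, 3, 224, 224, device=device)

with torch.no_grad():
    logits = model(frame)
print("top class:", int(logits.argmax()))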

It has UART I/O as on-board pins.
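Something like this should talk to them from Python, assuming pyserial is installed and the header UART shows up as /dev/ttyTHS1 (the path and baud rate here are guesses, check the pinout docs for your image):
# Poke whatever is wired to the header UART.
import serial

with serial.Serial("/dev/ttyTHS1", baudrate=115200, timeout=1.0) as port:
    port.write(b"ping\n")      # send a line to the attached device
    reply = port.readline()    # read one line back, or b"" on timeout
    print("got:", reply)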

That says nothing about closed vs open source ones.

>eventually stop working
When the product goes out of fashion they drop updates for an already poorly written driver (a driver developed by some Windows user with fucking interfaces, ignoring the CLI ecosystem that Linux uses).
So after the main driver starts failing, some open source alternative appears and everyone switches to it. It's how all GPU drivers end, bro.

I'm surprised by the amount of damage-control replies from nVidia employees in this thread.
Is this board normie enough for them to care about making sure no one mentions defects in their products, software stack and general policies?

Good luck fixing closed source drivers, especially once the vendor doesn't give a fuck anymore.

jailtards

why the fuck should I get a tensor-only device with proprietary shit on it
instead of getting a TPU board from Google that will most probably be faster?

I'm so glad AMD finally got the amdgpu stack up to par and Bifrost is almost ready. The age of opensauce graphics is upon us.

>and running powerful scripts that you couldn't do on your normal pc
What? It's far slower than what you get on a GeForce card. It's a dev board for embedded applications which might have need of some level of "AI" but it's far less capable than an actual PC at the task.

>No games
>Thread about software and hardware development
Are you mentally ill?

>So it seems to be made for AI
Why do I need that to run Prolog?

So professional programmers are doing a worse job than some fat amateur wannabe linuxtards?

Shitty underpaid device vendor pajeets are doing a worse job than people who are writing drivers for their own use, yes.

Wouldn't it be better to use a beefy server/computer, even with some input lag, to control the AI in your bots? I of course mean for AI that requires tons of resources. I guess you could do both and reserve "higher functions" for the remote server when the lower functions need them.

The idea with "edge AI" is to do the training on big GPU farms on your own servers or public cloud and then have the trained neural nets directly attached to sensor platforms in the field. The goal is to save on WAN traffic.

Most freetards are just whiny babies who only know how to execute commands on their puny distro

>reeeeeee

nice, gonna make a retro emulator when it comes out

Attached: 1553149686168.png (1920x1080, 2.71M)

TensorFlow sucks fat cock to put on a Jetson

Couldn't you at least shill something?

Attached: not-even-bait-at-this-point-20002241.png (500x522, 64K)

Oracle is what happens when you trust solutions written by """professional programmers."""

That would be cool to turn into a 4K IPTV box.

How is this supposed to be used? I like the idea of having a cheap AI-focused GPU as a secondary to my Radeon.
Are you supposed to run this headless and then just SSH into it and run your meme-learning commands?

this

I'd like to see more stuff like this; ultimately it's a waste if we have all these ML models but can only run them in the cloud. I've done a little work on computer vision on phones. While it was fun to drop down to low-level hardware (e.g. NEON instructions) to eke out the most performance, that's not the way of the future because it requires too much custom software for the task at hand. The future is hardware like Jetson that gives you a shitload of FLOPS that you can use easily via TensorFlow or PyTorch.
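To make that concrete: the NEON route means a hand-tuned kernel per op per platform, while with a framework the same call runs wherever the backend has a fast kernel (tiny sketch, not real CV code, shapes are arbitrary):
# One framework call instead of a hand-written per-platform convolution.
import torch
import torch.nn.functional as F

dev = "cuda" if torch.cuda.is_available() else "cpu"
image  = torch.rand(1, 3, 480, 640, device=dev)   # fake camera frame
kernel = torch.rand(8, 3, 3, 3, device=dev)       # 8 filters of size 3x3
features = F.conv2d(image, kernel, padding=1)     # backend picks the fast kernel (cuDNN, CPU, ...)
print(features.shape)                             # torch.Size([1, 8, 480, 640])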

>So it seems to be made for AI and running powerful scripts that you couldn't do on your normal pc.
Not exactly. A PC with a powerful GPU could outperform the Jetson, I assume, but the Jetson is designed for embedded: it lets you build embedded systems that can do a lot more ML-related compute, and since it's designed for embedded it makes different compute/cost/power tradeoffs.

Even if they did release open source drivers, the deep learning stack (CUDA, nvcc, cuDNN) would still be either closed source or otherwise controlled by NVidia. It's not the best situation, but there are a lot of things being done about it, e.g. XLA. Unfortunately NVidia are really good at building both hardware and software for deep learning, so they're free to ship proprietary solutions and people will use them because they're the best.

It's for embedded systems.

Cope

Make a competitor and then you can negotiate. Until then fuck off and eat toejam, because people are going to buy it whether you like it or not.

It's a development board for makers you pea brained faggot.

>hurr it doesnt play Fortnite

I bet you can't wrap your head around why there could be any interest in "weak" microcontrollers like the Pi, Arduino, etc.

Don't worry kid, keep playing your vidya and leave the world of microcontrollers to the real men.

Unless you understand C and how it works, STFU

Most intelligent reply in this entire thread.

i guess it's the only arm shitboard without totally fucked gpu drivers?

RS-232 is used all over the automation industry

>company pays thousands for TimesTen support
>find bug, so raise issue
>takes them 2 weeks to start looking into it
>a week later they admit they last tested TT in 2012
>probably doesn't get fixed
Oracle was a mistake

For that price I'd want a Raptor CS Blackbird instead; it's not going to be much more expensive for something that's much more powerful.

This is assuming you're not talking about embedded systems, of course.

I've known C for two decades. I got started with TCPL 2nd ed. I do systems programming. Does this count?

DRIVER BUILDING IS FUCKING GAY
there, I said it. Fuck you.

Probably gonna get it.
I first wanted the ODROID-H2, but it's been having issues with Intel CPU stock.

Imagine forgetting that AMD exists.

Likely one of these 1050ti-ordering morons.

>Jow Forums how do we feel about the new NVIDIA development board
>"NEW"
that fucking abomination has Maxwell from 2014, it's on 20nm, and it fucking consumes 5-10 watts for a mere 472 GFLOPS of FP16.
Jacket man said new arch good for Meme-I and Meme-Learning.

Good job faggots, you just paid $100 for a cut-down GT 720

I thought this was supposed to be Jow Forums?

It's being invaded by normies and paid posters though.

no games lol
what a shitty cuck OS :D

Attached: 1538363458193.png (1001x604, 84K)

Somebody will figure out how to port the Switch firmware/OS to this thing in a year or so. Buy one now before they double in price.

>believes Linux "nogaems"
>in 2019
>unironically

Attached: 1548644303015.png (876x556, 486K)

>posts weebshit
)lol

>insists on derailing the conversation into running games in a machine learning / AI devboard thread
>complains about a manga cell
>unironically

Attached: 1533521505496.png (418x498, 203K)

It can run Dolphin at 60fps,
so you have access to GameCube and Wii games.

>embedded ai
It looks pretty cool. How much is it? btw most of Jow Forums are just electronics "enthusiasts" or at the very best webdevs.

>t. inferior performance compared to a radeon vii running stock kernel drivers

The nano is up for $100 on Amazon

>Gamecube
>Wii
Ew

This seems more ideal as a mini console-emulation device than an AI learning device. For $100 you get better specs than a Nintendo Switch or a Shield TV while being cheaper than either.

We already have fucking supercomputers that scientists would have murdered for 20 years ago, and we don't even know what to do with them.

What makes you think Jow Forums cares about such devkits?

The Shield is still better for emulation and specwise in general IIRC, but this is $100 cheaper so it stands to reason.