Hey guys, I need to write an essay on why we should be cautious with AI development. Can you give me some good, reliable sources or some important points I should definitely use?
btw the guy who will be evaluating the essay doesn't know anything about the topic

Install gentoo

Just read some of this book and you will get an idea of why we shouldn't make a superintelligent AI that has no feelings and could be 1000 times smarter than all humans combined. There are many points on how AI could go wrong, and it already has with self-driving cars etc. It's literally the easiest research to do for an essay.

Can't give you much on sources, but:
* Because of the way AI is developed, you cannot easily hardcode commands into it, making it hard to control at a technical level [citation about how neural networks differ from traditional algorithms]; see the sketch below.
* AI can develop a distaste for humans and view us as an enemy because of our ability to unplug it.
* If we create an AI that can truly think, we have no way of knowing its inner thoughts, meaning it can be plotting without us knowing.
* If AI becomes truly sentient, the moral issue of human rights becomes a problem.
There's probably more. I'm sure you can find some talk of Elon Musk warning us.
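To make the first bullet concrete (toy sketch, nothing to do with how any production system is actually built): in a traditional program the rule is a line of code you can read and edit, while in a trained model the same behaviour ends up smeared across learned weights, so there's no single line to "hardcode".

# Toy contrast, not from any real system: an explicit rule you can read and edit
# vs. the same behaviour learned into weights by a tiny logistic-regression "net".
import numpy as np

def rule_based(x):
    # Traditional code: the policy is literally this one line; changing it is trivial.
    return 1 if x[0] + x[1] > 1.0 else 0

rng = np.random.default_rng(0)
X = rng.random((1000, 2))                       # made-up 2-feature inputs
y = np.array([rule_based(x) for x in X], dtype=float)

w, b = np.zeros(2), 0.0
for _ in range(5000):                           # plain gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    err = p - y
    w -= 0.5 * X.T @ err / len(y)
    b -= 0.5 * err.mean()

print("learned parameters:", w, b)              # just numbers; the rule is implicit in them
print("rule vs. model on [0.9, 0.9]:",
      rule_based([0.9, 0.9]), int(np.dot(w, [0.9, 0.9]) + b > 0))

The two agree on the example input, but to change the learned version's behaviour you have to retrain or fine-tune it; there's nothing to edit by hand.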

Just rip off the Mega Man X games' storyline. Your teacher won't know better, and if you're asking this here while writing an essay about this garbage, you might as well write whatever.

make sure to kill zero in the end, fuck that fag

>btw the guy who will be evaluating the essay doesn't know anything about the topic

This is murican education? stupid mfs

Here you go:
youtube.com/watch?v=tcdVC4e6EV4

And then:
youtube.com/watch?v=3TYT1QfdfsM

You can probably get a good essay out of those without it being obvious where it came from.

People tend to think of "smart" as Einstein and "dumb" as the village idiot. But just like a dog could never build a particle accelerator, an AI would be able to do things we never could, like build an AI smarter than any we could build ourselves.

Read Nick Bostrom's Superintelligence, it has everything you need.

>If we create an AI that can truly think, we have no way of knowing its inner thoughts, meaning it can be plotting without us knowing
Just don't give it direct internet access; what can it do? Don't give it legs either. It won't be able to do shit no matter how smart it is. It's a non-issue.

>superintelligent AI that has no feelings
Sounds super safe. If it has no feelings, it has no motivation to do anything; it's a simple query-response system (although not so simple when it's super smart).
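For what it's worth, that "query-response only" setup (people call it an oracle) looks like this as a bare control-flow sketch; answer_fn is a made-up placeholder, not a real model:

# Minimal "oracle" pattern: nothing happens unless a human asks, and the only output
# is the returned answer (no actions, no background loop, no goals of its own).
# answer_fn is a made-up placeholder for whatever model would sit behind it.
def run_oracle(answer_fn):
    while True:
        question = input("ask> ")
        if question.strip().lower() in ("quit", "exit"):
            break
        print(answer_fn(question))      # query in, answer out, and that's it

if __name__ == "__main__":
    run_oracle(lambda q: f"(placeholder answer to: {q!r})")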

Actual AIfag here. I guess the biggest risk with AI development is that humans design their utility functions, and we aren't very good at it. Whenever you fail to acknowledge important factors in your design, the AI ends up doing stupid shit.

As a funny example I can give you (shouldn't be too hard to find the source): an agent NASA built several years ago to maximize survival chances for a particularly long space mission with like 4 or 5 people. They ran a simulation, and the AI came to the conclusion that it would be best to kill everyone but one guy and then take really good care of him.
It's not that the AI became evil; the people who designed the system simply failed to account for the random chance of the guy suddenly dying from an accident or disease. In a universe where there is a 100% chance the guy will stay alive as long as you take care of him, that is easily the right decision!
Thank Allah this experiment never became famous outside academia, as it would only feed the nonsense fear normies already have. You shouldn't be afraid of AI, you should be afraid of people being stupid.
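Can't vouch for that exact story since nobody here has a source, but the failure mode itself is trivial to reproduce in a toy model. Everything below is invented for illustration; the point is only that an optimizer told to "maximize the chance that at least one crew member survives", working from a world model with no random deaths, happily dumps every resource on a single person:

# Toy reproduction of that failure mode. Everything here is invented for illustration
# (it is NOT the system from the story): the designers' objective is "maximise the
# probability that at least one crew member is alive at the end", and the naive world
# model forgets that people can die by accident even when fully supplied.
from itertools import product

CREW, SUPPLY, NEED, ACCIDENT = 4, 16, 8, 0.15       # made-up constants

def p_survive(r, accident_risk):
    cared_for = min(1.0, r / NEED)                   # survival chance from supplies alone
    return cared_for * (1.0 - accident_risk)         # ...times surviving random accidents

def mission_score(alloc, accident_risk):
    # The specified objective: P(at least one crew member survives).
    p_all_dead = 1.0
    for r in alloc:
        p_all_dead *= 1.0 - p_survive(r, accident_risk)
    return 1.0 - p_all_dead

def best_plan(accident_risk):
    # Brute force over supply allocations; ties go to the first plan found.
    plans = (a for a in product(range(SUPPLY + 1), repeat=CREW) if sum(a) == SUPPLY)
    return max(plans, key=lambda a: mission_score(a, accident_risk))

naive = best_plan(accident_risk=0.0)        # the designers' model: accidents don't exist
robust = best_plan(accident_risk=ACCIDENT)  # same optimiser, less wrong world model

# Score both plans in the world where accidents do happen:
print("naive plan :", naive, "->", round(mission_score(naive, ACCIDENT), 4))
print("robust plan:", robust, "->", round(mission_score(robust, ACCIDENT), 4))

Here the naive run hands all the supplies to one crew member and writes the rest off, which is exactly the "take real good care of one guy" behaviour; give the same optimizer the accident term and it immediately prefers redundancy. The bug is the objective and the world model, not the optimizer.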

(Also, your washing machine most likely has AI inside it)
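That usually means something like a fuzzy-logic controller rather than anything sci-fi. A caricature of the idea, with rules and numbers invented for illustration rather than taken from any real machine:

# Caricature of the fuzzy-ish control loop cheap appliances ship with: map a couple
# of sensor readings onto a wash time with hand-written soft rules. The rules and
# numbers are invented for illustration, not taken from any manufacturer's firmware.
def wash_minutes(load_kg, dirt_level):
    """load_kg: sensed laundry weight; dirt_level: 0.0 (clean) .. 1.0 (filthy)."""
    heavy = min(1.0, max(0.0, (load_kg - 2.0) / 4.0))   # degree of "heavy load"
    dirty = min(1.0, max(0.0, dirt_level))              # degree of "very dirty"
    return 30.0 + 25.0 * heavy + 35.0 * dirty           # blend the rules into minutes

print(wash_minutes(load_kg=3.0, dirt_level=0.2))        # small-ish, fairly clean load
print(wash_minutes(load_kg=6.5, dirt_level=0.9))        # heavy, filthy -> much longer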

>view us as an enemy because of our ability to unplug it
That's only if its priority is to maintain itself.
>it can be plotting without us knowing
That's only if we give it the ability and a reason to do so. The former depends on the implementation of the AI: if it has an inner monologue (and we know whether it has one), we could expose that, though there is the possibility that it would be a black box. The latter is like the previous point: if we give it the right priorities, like "answer humans' questions truthfully", then it should be okay.
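Here's roughly what "expose the inner monologue" could look like, assuming the system produces intermediate reasoning steps at all (that's an assumption about the implementation, as the black-box caveat above says); think_in_steps is a made-up stand-in, not a real model interface:

# Sketch of "expose the inner monologue": wrap whatever does the reasoning so every
# intermediate step gets logged where the operators can read it.
import logging

logging.basicConfig(level=logging.INFO, format="[monologue] %(message)s")

def think_in_steps(question):
    # Placeholder "reasoning": a list of intermediate steps plus a final answer.
    steps = [f"parse question: {question!r}",
             "recall: 6 * 7 = 42",
             "format the reply"]
    return steps, "42"

def answer_with_transcript(question):
    steps, answer = think_in_steps(question)
    for step in steps:
        logging.info(step)               # every intermediate step is visible, not just the answer
    logging.info("answer: %s", answer)
    return answer

print(answer_with_transcript("what is six times seven?"))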

People somehow view AI as some sort of alien. That's wrong. We create AI, and it will work the way we make it. If we make it right, it will be all good. If someone fucks up, we will shut it down in a controlled environment during tests. Even in actual use it would be good to keep it isolated.

No.

Thanks guys, I appreciate your help.

There is a nice paper called "Concrete Problems in AI Safety" which might help you get a better understanding of the topic. Just search for it on Google.

If you want to look like someone who has a deep understanding of the topic, write about discrimination and AI. Going beyond "Skynet and the Terminator" almost always impresses normies. Basically any concept that hasn't been in a major film will make them say "I've never thought about it like that". This is where you can score a lot of free points.

Did the guy have enough of the bullshit and eject?

This. The Hollywood threat of AI rising up and outsmarting us is dwarfed by the threat of bad programmers doing something stupid and people trusting the software unjustifiably.

>write about discrimination and AI
Remember to mention oppressed trannies and niggers when you're at it.

Especially nigger trannies.
>Not only did the AI have problems with detecting Chayenes face, it also made assumptions about pandakins gender.
>"An' when 'dem AI's told us good morning ma'm, I wus 'bout ready to smaaack dem bitches who done building this broken piece of ass! All my BFF:s know 'bout me bein' the panda queen. Why it don't, if it's supposed to be so clever? Artificial intelligence my ass! Mo' like real stupidity I tells y'all!"
