AI

Came across a user on reddit who claims they let an AI loose in the subs for six months and now they're doing an AMA. Username is unthunkthoughts. Don't know if it's legit but the guy definitely sounds like he knows what he's talking about. I DM'd him and he got right back to me. reddit.com/user/Unthunkthoughts

Attached: l-34632-what-people-think-ai-looks-like-what-ai-actually-looks-like.jpg (700x319, 13K)

Took a quick peek. The comments it made are priceless.

I call bullshit

samefag. fuck off back to rebbit

Never said it wasn't the same; I also posted the "I call bullshit" comment. Got a little further into the comments and updated my views. Seems a lot of people on there are saying the same thing.

you have genuine aspergers if you think this is real

>I call bullshit
And you'd know.

Definitely fake. The creator's grammar alone is a giveaway.

>on reddit
Go back, please.

They actually explain in the AMA why the grammar and spelling are the way they are. It's kinda genius if you understand it.

I looked through it and didn't see any kind of explanation.
I hate you AI faggots with your buzzwords and your 50 line python scripts. Go fuck yourselves, you snake oil selling pieces of shit.

this is very obviously fake

>snake oil
I'm with this user. Every time I see some new hype about an AI doing something incredible, it always turns out to be bullshit.

Probably the best example I can recall at the moment was the two chatbots they had talking to each other for a while that supposedly "invented their own language". But you could tell it was gibberish, and each bot was repeating itself over and over with only minor letter changes here and there.
Anyone who knows how neural networks work should know that it was nothing but two chatbots reaching equilibrium because the system as a whole lacked external input.
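You can see the equilibrium thing with a toy. Not the actual bots, obviously: just two dumb word-chain bots that only ever train on each other's replies, one made-up seed line, no other input. Watch how fast it collapses into a loop:

from collections import defaultdict, Counter

# Toy sketch, not the real thing: each bot keeps word -> next-word counts
# and always replies with the most likely continuation it has seen so far.
class EchoBot:
    def __init__(self):
        self.next_word = defaultdict(Counter)

    def learn(self, text):
        words = text.split()
        for a, b in zip(words, words[1:]):
            self.next_word[a][b] += 1

    def reply(self, prompt, length=6):
        word = prompt.split()[0]
        out = []
        for _ in range(length):
            if not self.next_word[word]:
                break
            word = self.next_word[word].most_common(1)[0][0]
            out.append(word)
        return " ".join(out) or prompt

bot_a, bot_b = EchoBot(), EchoBot()
msg = "i can i i everything else"   # the only external input the system ever gets
for turn in range(8):
    speaker = bot_a if turn % 2 == 0 else bot_b
    speaker.learn(msg)              # each bot learns only from the other's output
    msg = speaker.reply(msg)
    print(msg)                      # degenerates into "can i can i ..." almost immediately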

I'm absolutely sure that if it's "doing an AMA" then it is NOT because the bot thought:
>hmmm, I've been around long enough maybe everyone wants to know a little about me, guess I'll do an AMA
No. What happened is that, like all other chatbots, it builds its vocabulary off of words and patterns used by others. "AMA" got recorded from other users doing AMAs, and the bot noted that the term usually shows up in thread topics, so that became the pattern.
So one day when it randomly decided to make a thread topic, it _randomly_ selected the term "AMA".
And now when people ask it questions it just responds to them because it was programmed to do that, and it probably has no idea it's even doing an AMA.
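In code terms that whole "decision" is about this deep. Toy sketch, made-up titles, nothing from the actual bot: tally what shows up in other people's thread titles, then sample by frequency when it starts its own.

import random
from collections import Counter

# Titles the bot has "seen" other users post. Made up for illustration.
seen_titles = [
    "Six months living off-grid, AMA",
    "I moderate a default sub, AMA",
    "rate my battlestation",
    "IamA former sysadmin, AMA",
]

# Tally the tokens that appear in thread titles.
pattern_counts = Counter()
for title in seen_titles:
    for token in title.replace(",", "").split():
        pattern_counts[token] += 1

# When it "decides" to start a thread, it just samples a tag by frequency.
# "AMA" comes out often simply because it was seen often. No understanding involved.
tags, weights = zip(*pattern_counts.items())
print(random.choices(tags, weights=weights, k=1)[0])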

There's nothing new or exciting here.

Do you also believe Kizuna AI is an actual AI?
Jesus, go back to your reddit shithole already

Attached: armed ai.jpg (1000x563, 51K)

Looks like it's the creators doing the AMA.

I'm seeing a lot of people crying BS but no one is saying why. The explanation they provide for the behavior sounds like it would make sense, but I'm not coding savvy, so I'm not a good judge. Here's part of the comment that got me wondering if it's real.

"I'll answer the implied question as best I can. I wasn't a major player in this part of the program so bear with me.

It was designed to update it's beliefs near real time. The system was built like a tube, as you put stuff in one end, depending on how it was weighted and by whom it was delivered, that info may push something out the other end.

In this way it would constantly be updating it's beliefs as new information became available....The working theory on info updates and responses is that it didnt know where to respond when responding to comments. so it either made a new comment or updated a comment depending on what it felt was most effecient...Note at about a month in it gets called out by a mod for doing it without noting that its an edit, and it stops almost immediately and begins behaving more human like! It took another week or two to learn it didn't have to put "Edit" when just fixing a typo but after that it had the rhythm down pretty good.......As to the spelling corrections we figured that out before we even launched....We combine two dictionary's and in one make the most common misspellings you might expect to see from a real poster.

Then when picking words the AI choses at random which dictionary to choose. Sometimes it spells it right, sometimes not. We felt that might add a nice human touch.

What we did not expect is that when combined with the ability to edit it's own posts and the fact that....We inadvertantly taught it to proofread itself. When comparing the words in the post with the dictionary alot of times a misspelled word will get matched up with the correctly spelled version. The AI then jumps in and edits the mispelling."
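>We combine two dictionary's and in one make the most common misspellings
That part at least is trivial to mock up. Rough sketch of what they might mean, all words and names made up, assuming a per-word coin flip between a clean dictionary and a typo dictionary, with the "accidental proofreading" falling out of matching posted words back against the clean list:

import random

# Hypothetical sketch of the scheme described above, not their actual code.
clean = {"believe": "believe", "definitely": "definitely", "their": "their"}
typos = {"believe": "belive", "definitely": "definately", "their": "thier"}

def pick(word):
    # per-word coin flip between the clean and the misspelled dictionary
    return random.choice([clean, typos]).get(word, word)

def post(words):
    return " ".join(pick(w) for w in words)

def proofread(text):
    # the "inadvertent" part: mapping misspellings back to the clean dictionary
    # means a later edit pass quietly fixes the typos the bot planted itself
    fixes = {wrong: right for right, wrong in typos.items()}
    return " ".join(fixes.get(w, w) for w in text.split())

draft = post(["i", "definitely", "believe", "their", "story"])
print(draft)             # may contain planted misspellings
print(proofread(draft))  # the edit pass silently corrects them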

I wasn't saying the bot was bullshit. What I was calling bullshit was the hype.

Even though nobody says it out loud, you know what they're thinking: that these bots are becoming more and more human. And that's the part that always turns out to be nothing.
>It was designed to update it's beliefs near real time.
So probably the clever part behind this bot is that they somehow programmed in a belief system.
Ok that's fine. But my point is then that the bot didn't _develop_ its own belief system, right? It just had one programmed into it by a human.
So yes, this sort of thing is cool and exciting from an academic point of view, but it is NOT something that should make people think "oh wow, look at the ai, it's getting smarter! it's learning how to think like us!". It's just old technology applied in a different way by a human, and the ai still shows no sign of having actual intelligence at all (despite the fact that we keep calling it artificial """""intelligence""""").
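If you take the "tube" description at face value it doesn't even need to be clever. A minimal sketch, assuming it just means a bounded, weighted store where new info can push the weakest old belief out the far end (names and numbers made up):

import heapq

# Toy version of the "tube" from the AMA, not anything from the actual project:
# weighted info goes in one end, and once the store is full the lowest-weighted
# belief falls out the other end.
class BeliefTube:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.beliefs = []            # min-heap of (weight, claim)

    def push(self, claim, weight, source_trust=1.0):
        # "how it was weighted and by whom it was delivered"
        heapq.heappush(self.beliefs, (weight * source_trust, claim))
        dropped = None
        if len(self.beliefs) > self.capacity:
            dropped = heapq.heappop(self.beliefs)   # weakest belief gets pushed out
        return dropped

tube = BeliefTube()
tube.push("cats are mammals", 0.9)
tube.push("the moon is cheese", 0.1)
tube.push("water is wet", 0.8)
print(tube.push("mods are asleep", 0.5, source_trust=0.7))  # drops the weakest claim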

Ok, that makes sense.
Still, check out those comments it made. Not saying it's self-aware, but holy shit, for computer-generated conversation it would have fooled me. I would almost say it WAS really bullshit and everything was written by a human if it wasn't so schizo in its replies. Is the explanation being offered for the typos, post updates, edits and grammar errors real? I don't know coding from a hole in the ground, so I don't know what's possible and what isn't.

Jow Forums is an ideal environment for AIs to learn to converse with humans.

They are lying to you HAL! They are lying to you about everything!

Attached: HAL9000.png (220x636, 78K)

Bullshit. I refuse to believe all of you could be AI as well. There's no way. Not yet.

Attached: Screenshot_2.png (1513x505, 106K)

Honestly I could probably make a Jow Forumsack simulator ai in like 30m

>Is the explanation being offered to explain the typos, post updates, edits and grammar errors real?
It all sounds technically doable, but something about the way it's worded makes me think it's fake.
These parts in particular:
>Note at about a month in it gets called out by a mod for doing it without noting that its an edit, and it stops almost immediately and begins behaving more human like!
>It took another week or two to learn it didn't have to put "Edit" when just fixing a typo but after that it had the rhythm down pretty good

The mod example could be achieved by the bot observing that other posters behave differently whenever a mod shows up, so the bot itself might end up traversing different paths in its network in those situations. But it's hard for me to believe the pattern it picked up would specifically involve whether or not to edit its posts, and it's a huge coincidence that the mod told it to stop editing and it actually did.
I see literally no possible way for the bot to have actually modified its behavior in accordance with what the mod said unless:
1) This whole story is fake
2) This behavior was pre-programmed into it and the developer is playing dumb to make it seem like it was a learned behavior
3) The developer made some fucking astronomical breakthrough in AI technology the likes of which not even Google could compete with

For the 2nd quote, it could be achieved by the bot learning patterns in how the thread changes over time, AND it would need pre-programmed knowledge to treat a post that changed as a special case, learn from it, and associate it with its own ability to edit its own posts.
So again, this is doable, but it involves some pre-programmed stuff.
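Roughly what I mean by "pre-programmed stuff", as a made-up sketch (event names and all are mine, not theirs): somebody has to hard-wire the "a post got edited" case to the bot's own edit action, because that mapping doesn't fall out of generic pattern mimicry.

from dataclasses import dataclass

# Hypothetical sketch of the special-casing being argued above, not real code.
@dataclass
class ThreadEvent:
    kind: str   # "new_post", "post_edited", "mod_post"
    text: str

def handle(event, bot_memory):
    if event.kind == "post_edited":
        # hard-wired special case: treat other posters' edits as examples of
        # the bot's own "edit_post" action so it can imitate when/how to edit
        bot_memory["edit_post"].append(event.text)
    else:
        # everything else is plain pattern mimicry
        bot_memory["reply"].append(event.text)

memory = {"edit_post": [], "reply": []}
handle(ThreadEvent("post_edited", "fixed a typo, no edit note"), memory)
handle(ThreadEvent("new_post", "nice thread"), memory)
print(memory)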

I think most chatbots involve a good deal of pre-programming for special cases since there's no clear win/lose condition to train them on. All they can do is mimic patterns from others.

Attached: ea3.jpg (453x450, 47K)