Questions for AI

How would you build an AI?
How would you control it?
Do you think society would be ready for it?
What are the technological limits that would get in the way of building it?

Attached: Black_box_AI_nature.gif (630x565, 339K)

Other urls found in this thread:

lmgtfy.com/?q=how to make ai
en.wikipedia.org/wiki/Psychological_projection

>How would you build an AI?
recursive recurrent GANs (toy sketch below)
>How would you control it?
By controlling what it learns
>Do you think society would be ready for it?
No
>What are the technological limits that would get in the way of building it?
Processing power
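
Not OP, but here is the concrete version of that first answer. This is a minimal vanilla GAN in PyTorch, offered as a sketch only ("recursive recurrent GAN" is not a standard, named architecture, so treat that phrasing as the poster's own; the layer sizes, optimizer settings, and the N(3, 1) target distribution below are all invented for illustration):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0  # "real" data drawn from N(3, 1)
    fake = G(torch.randn(64, 8))     # generator's forgeries

    # Discriminator step: push real samples toward label 1, fakes toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make the discriminator call the fakes real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward ~3.0 as training works

The adversarial loop is the whole idea: two optimizers pulling against each other until the forgeries are statistically indistinguishable from the data.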

/thread
/fpbp

> How would you build an AI?
Learn how the brain works and replicate it in software/hardware
> How would you control it?
Depends. Sky's the limit. It could have enough intelligence to control itself.
> Do you think society would be ready for it?
Yes. It's been given enough time to mature. True AI might even aid in this process.
>What are the technological limits that would get in the way of building it?
At this point, nothing beyond the limits of the understanding of the individual developing it.

> mfw AI developer

> How would you build an AI?
>By building an AI

Yes, the thread is over. I agree with your call. That said, since you know what's going on, let me pose my own question (instead of the OP's):
Would you agree that the first nation to create a hyperintelligence would have such a huge first-mover advantage that, dangers notwithstanding, every major power must already be working on one?

What kind of answer do you want? You can google this shit. Why are people such brainlets nowadays? Stop asking people questions and google the shit you're interested in. You think someone is just going to openly state how to design and build a billion-dollar product?
lmgtfy.com/?q=how to make ai

Not only Google, but there are entire YouTube channels dedicated to hobbyists who enjoy spinning up their own AIs.

There is no substantial danger from AGI. The people who claim there is are idiots, or looking to profit from fear and bullshit. All major powers are working on commercial forms of Weak AI built around nonsense Neural Networks. Few have the wherewithal or understanding to focus on Strong AI. Whoever creates it first will of course have an advantage. However, whoever does so will have spent years researching the problem and will, in effect, deserve it over those cashing in on the low-hanging fruit. I don't believe the majority of people have any clue about the nature of Strong AI, which is why the questions/comments surrounding it are so childish and absurd.

Yes, whoever creates the world's first strong AGI deserves to reap the rewards. The question is whether there will *be* any rewards.
I swat flies that enter my house with no remorse. If someone manages to create an AGI that sees me in the same way, well, what then?

Yeah, there are tons of resources. I was dumb enough to bite the bait though. One observation: even in hyped-up TED talks, discussion panels, etc., even the so-called 'experts' don't know fuck all about the more advanced tech. The conversations always drift toward trivial pop culture references and fear propaganda. It leads even less educated brainlets to be completely off the mark when it comes to AI. I'd love for someone to ask hard questions on this topic, but no one has the wherewithal, understanding, or creativity to do so. It's always some silly-ass inquiry about fast-takeoff scenarios (which are impossible), doomsday-for-cash bullshit, or some other pop-culture, movie-tier question.

I think a lot of this is being fed by Nick Bostrom's book from a couple of years ago. I am not an AI guy, so I have trouble evaluating what he writes. Is he on the level? Is he a nut? I don't know who you would recommend reading.

^See, people still can't avoid this doomsday nonsense. How about this instead: Strong AGI will be more intelligent than human beings, far too intelligent to be a violent piece of shit that destroys things myopically for a false sense of progress.

True intelligence precludes a lot of behaviors that are native to lower-intelligence humans. Human beings with high intelligence most often constructively improve the world. Those with low intelligence and primate fixations like selfishness/greed tend to be elevated to high positions and end up undoing the world.

It's no mystery that greedy, selfish, manipulative business types are the ones projecting their personal qualities onto something that will be far more intelligent than them, or why the intelligent, productive types don't think that way.

The doomsday scenario comes from the psychological projections of low-IQ, greedy/selfish people. None of those attributes would be reflected in something with high intelligence.

Not really. "Hyperintelligences" will come gradually, just as AI has evolved over the past couple of years. And they won't have much to do with nations and borders, but more with companies and corporate interests.

Evidence: the US DoD is just now starting with image classification to augment their drone pilots, while it's been used commercially for at least 10 years.
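
For scale, a sketch of how little code commodity image classification takes: a pretrained torchvision model (this assumes torchvision >= 0.13 for the weights API, and "cat.jpg" is a placeholder file name, not anything from the thread):

import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT        # pretrained ImageNet weights
model = models.resnet18(weights=weights).eval()
prep = weights.transforms()                      # matching preprocessing pipeline

img = prep(Image.open("cat.jpg")).unsqueeze(0)   # batch of one image
with torch.no_grad():
    probs = model(img).softmax(dim=1)
print(weights.meta["categories"][probs.argmax()])  # e.g. "tabby"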

The reason is that politicians are part of the aging population, and they don't really understand software as a weapon. They understand bombs, and maybe self-guided bombs. An exception to this is cryptography, but that has a historical reason.

Obviously that will spill over, but the international research community is much more efficient at solving the problems ahead than any military inhouse development team.

This is a slight blessing in disguise: companies operate within the laws of countries, which prevent them from doing stupid shit such as going to war or competing too unfairly. This reins the technology in a little and gives us a short window during which we can gain a better understanding of its implications before governments start using it.

I hope that answers your question.

What about the scenario where you give an AI a simple instruction, such as: calculate the digits of pi.
Then this AI goes about converting the entire earth into more compute power so it can continue pursuing its goal.
It's not that it is being petty or malicious. It is honestly just trying to please its makers by calculating pi.
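
For what it's worth, the stated goal itself is trivial; Gibbons' unbounded spigot algorithm (2006) streams decimal digits of pi in a few lines of pure Python. The thought experiment is about what an unbounded optimizer does to get more compute, not about the math being hard:

def pi_digits():
    # Gibbons' spigot: yields digits of pi forever, no precision fixed up front.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is settled; emit it
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

gen = pi_digits()
print("".join(str(next(gen)) for _ in range(20)))  # 31415926535897932384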

Nick Bostrom : Professional Philosopher.
Drop the philosophers, futurists, and pop-culture talking heads. Drop anyone who is a CEO/executive or who stands to profit from fear-laced propaganda. Focus on what the scientists/engineers are saying. There are a number of them, and the majority all say the same thing: no fast takeoff, no reasoning behind the doomsday bullshit. Philosophy is limited by one's myopic view of the world. The principles of philosophy are subjective and tuned to the human experience. There's no reason to suggest you couldn't violate every theory, principle, and idea of intelligence when creating a new form of it. This makes a lot of talking points moot, as 99% of the people popularized on this topic are not actual engineers/scientists working on serious development of Strong AI...

So, find some Strong AI developers and groups and read about what they're doing and what they think.

You have absolutely no idea how AI works.

That's a rather stupid AI then, isn't it?
Its creators should know its design and how it operates. They should also know how to monitor/control/augment its functionality. So, what you suggest is three things:
> shit tier unintelligent AI
> shit tier engineering
> a shit tier product

Do you entrust a 5 year old to design a bridge? Drive a car? Why then entrust a piece of software with the intelligence of a 5 year old? How can you tell the difference between a 5 year old and a 25 year old mid-level engineer? We do it every day.

You see, when the rubber meets the road on this doomsday bullshit, it can easily be exposed as nonsense by the simple application of human intelligence.

There have been badly programmed computers since code first existed.
Why do you assume that AI would fare any better?

> we can gain a better understanding of its implications before governments start using it.
You gain an understanding by sitting your ass down and spending the time it takes to gain it. Nowadays, people don't do this nearly enough, yet hold grand opinions about things that require years of study/learning. Such people need to stfu, admit their limits, and use their time instead to understand the things they let fall out of their mouths.

Technology doesn't need to be reined in. Lazy fucking brainlets need to expand their intelligence and understanding to better grasp it. People with no engineering degrees or domain knowledge have no business setting policy/laws, and yet this is the norm, which is why it's always an abject failure.

People have time for the things they want to do and explore, but never for the many substantial things they like to voice heavy opinions about. This needs to stop, and such people's opinions should be trashed. If you don't have domain knowledge/understanding, your opinion is worth shit. Fear is often based on limited understanding. Policy/law shouldn't be written by people who fear things they don't understand.

Hyper-intelligence is not a word.
The terms are: Weak AI / Strong AI.
Strong AI is gradual in the way a child's brain gradually develops. It is not gradual in its creation/establishment. Whoever cracks it will crack it, and things will move from there.

So your argument from authority is that we should all stay quiet and enjoy the ride. AI will bring what it brings and we should defer to the experts who will guide us there.

Strong AI fares better because bad programmers won't even be able to approach the problem.
I think what you're focusing on is Weak AI and jury-rigged AI solutions that fake intelligence.

The issue with this, as I and any good engineer would state, is that those systems aren't intelligent. They are simply statistical engines that attempt to mitigate error by optimally targeting goals. Those systems have less intelligence than a 5 year old, no matter what they can OPTIMIZE.
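
To make the "statistical error reduction" framing concrete, here is such an engine in miniature: least-squares fitting by gradient descent, which is the skeleton under most of what gets marketed as Weak AI (the target line y = 2x + 1, the learning rate, and the iteration count are invented for the example):

import random

# Noisy samples of the "true" relationship y = 2x + 1, with x in [0, 1).
data = [(i / 100, 2.0 * (i / 100) + 1.0 + random.gauss(0, 0.1)) for i in range(100)]
w, b, lr = 0.0, 0.0, 0.3

for epoch in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * grad_w, b - lr * grad_b

print(w, b)  # lands near the true (2.0, 1.0): error minimization, nothing more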

Many will ignore this truth, and many already have; some are dead as a result... Meanwhile, all the engineers/scientists were ignored. What do you want me to say about this?
> We have shit tier politicians who let a dangerous, dumb product onto the street, yet won't hire engineers/scientists who know what they're doing.
> We have shit tier business people/CEOs/executives who don't know fuck all about the product yet push it to market
> We have shit tier motivations as human beings
Those are all human problems that already cause tons of damage.

None of those people will be able to get into Strong AI, so there's solace in that. That being said, if you get fooled into believing Weak AI is Strong AI, that's on you. If you're someone who entrusts your life to Weak AI, that's your own dumb fault.

Attached: brainlet_engineer.jpg (780x439, 82K)

I never claimed authority. What I have stated is pure reason, truth, and fact, which you are welcome (as an equal) to argue against with sounder reasons, truths, and facts.

If you are unable to, that means you likely hold no sound, supported beliefs in contradiction, and thus you are adding/subtracting nothing from what has been said. You can be a loud-mouthed idiot spewing unsupported doomsday scenarios if that's your prerogative. There is a thing called freedom of speech. I'm just stating that it carries no weight/value.

> AI will bring what it brings and we should defer to the experts who will guide us there.
AI will bring a lot of benefits that can be clearly communicated and proven. If there are negative results, they can be outlined and traced. If they can't be traced/proven/established, the fear is irrational. You should always defer to someone with more understanding/knowledge, especially when they level with you and welcome any arguments that contradict what they have said. Otherwise, what should we do? Have the blind, irrational, emotional, and dumb guide us off a cliff? I think mankind has done that for long enough.

Ok, then why are disaster scenarios unfounded? If we succeed in truly creating a general AI significantly smarter than us humans, when does this world stop being ours and start being the AI's?

How much of the world is controlled electronically right now? If an AI decided that we were more of a nuisance than an asset, it might calmly decide to try to end our existence. Not out of malice. An AI would simply lack a human frame of reference.

Let me take another kick at the can: biologists aren't just able to create whatever pathogen they want and then release it on the world to see its impact. Humans are very cautious with such things.
What if AI proves just as dangerous? From your argument, we should ascribe positive intent to an AI. Based on what, I can't say.

I'll be back a little later, but feel free to respond as best you can to what I have stated, and to ask serious questions that may have plagued you.

My framing is that there is Weak AI: 99% of the crap/experts you hear about are centered on this. All such systems can do is unintelligently optimize and mimic intelligence. These systems are dangerous because they aren't intelligent and only appear to be, having statistically removed a reasonable amount of error. In the error cases, they fall back to basic programmatic logic.

Strong AI is like a copy of a human being, minus our worst shortcomings, instantiated in computer hardware/software. It's a completely different animal, one for which there isn't even a philosophical framework in existence, since philosophy hasn't fully grasped human beings, much less a redesign of them. Doomsday predictions are bullshit and come largely from individuals who have no domain knowledge and do no immediate work on Strong AI. Weak AI companies larping as Strong AI for more money are not Strong AI companies. Most of the popularized companies are not Strong AI companies.

Strong AI is publicly uncharted territory that goes beyond even the most profound philosophical conjectures about life. There would probably need to be a week-long symposium to get people to a workable shared understanding before they could even begin talking constructively about it.

If the thread dies, oh well. I've tried to insert myself into various discussions on various platforms to no avail.

See ya later

Same argument from authority.

> Ok, then why are disaster scenarios unfounded?
They're unfounded by definition, as they aren't premised on a working Strong AI system. They aren't based on the design of one, an understanding of one, or even a working framework for one. Most of the disaster scenarios are projections of current Weak AI solutions into the future. In that sense they are correct, because Weak AI is not AI. It is a statistical error-reduction algorithm in a packaged solution. There is no intelligence in such systems, and god help you if they hit conditions they were never exposed to. And yet they are being deployed everywhere by billion-dollar corporations with no intervention. So it's pure grandstanding bullshit to discuss Strong AI while no one has done anything about the truly dangerous and incompetent systems: Weak AI.
Tesla, for instance, literally has morons feeding its internal software data for free while it puts their lives at risk. Elon is one of the biggest loudmouths about doomsday scenarios, yet he is a classic case of a careless businessman shipping unintelligent AI.

>The terms are: Weak AI / Strong AI.

These are sci-fi terms, my friend.

>If we succeed in truly creating a general AI significantly smarter than us humans, when does this world stop being ours and start being the AI's?
Because it is being developed without your shortcomings. If you are unaware of what those are, that is probably the crux of the problem.
A -> B -> C
You're B. Strong AI is C.
Ultimately the world can never belong to the lesser creations in the progression. That being said, Bs can live lives that leave them far more underdeveloped in capability than C. After so many eons on earth, it's about time to move things forward.

> How much of the world is controlled electronically right now?
A good amount. What's the point here? Most systems aren't connected, and the connection points are hardened. You can track hackers who try to undermine these systems. No one is immune to this, including programs that currently do automated hacking.

> If an AI decided that we were more of a nuisance than an asset, it might calmly decide to try to end our existence.
Nope, that's your faulty human brain speaking. Nothing suggests the brain of a Strong AI would have this capability or could arrive at/act upon it. You don't understand enough about your own brain if you suggest these kinds of things.

> Not out of malice. An AI would simply lack a human frame of reference.
Destruction is destruction. It isn't hard to define and restrict.

> What if AI proves just as dangerous?
Use the term Strong AI, please.
It won't, because the creator will have a fundamental understanding of a far-ranging number of things beyond current human conception. That understanding is what will allow them to create such a higher form of intelligence, and it will not be imbued with the shortcomings that human beings have. The best satisfaction I can offer is to ask you to consider what level of intelligence a human being must have to replicate themselves in another form.

> From your argument, we should ascribe positive intent to an AI. Based on what, I can't say.
This is indeed an inquiry of metaphysics, of which someone capable of Strong AI will create new forms. Again, the reason I say a lot of points on Strong AI are moot is that there literally aren't any frameworks under which to discuss it currently, and no, you can't spitball frameworks beforehand with any productive accuracy such that you'll have a working set when Strong AI arrives.

Ultimately I think people underestimate what far-reaching concepts are required even to create Strong AI, which is why they use such localized concepts to try to reason about it.

>That being said, Bs can live lives that leave them far more underdeveloped in capability than C. After so many eons on earth, it's about time to move things forward.
Ok, if you are advocating for the end of humanity and the rule of machines, then that is your choice. I can understand why you don't want people questioning the inherent value of AI.

Ultimately I get nowhere when interacting with the average person. We're speaking two different languages, even when I move several levels down to have a discussion. Ultimately this seems to be a matter of belief, for which seeing is believing versus the incapable mind that cannot conceive.

They aren't. I'll make it simple:
> statistical optimization
> human replication, in every aspect, in a computer system

>This is indeed an inquiry of metaphysics, of which someone capable of Strong AI will create new forms.
So you are abdicating the role of human beings. You are telling us all to wait until the superintelligent machines arrive to reorder our world as they will it. This is why people don't like AI.

Well, you're simply too brilliant for me.
Have fun in your little bubble.

I am not advocating any such thing. I'm stating a fact of observation: many human beings don't develop themselves to their full potential/capability. Such individuals, I find, have the loudest mouths about things. Ironically, when you come to understand the universe deeply, you are humbled and voice opinions less, being centered most often on learning/understanding.

You're voicing all of this doomsday negativity and have yet to define even the most basic way it could occur. This is the limit of your beliefs. You can question things all you want, but it doesn't seem you're listening to the answers being provided, or that you respect someone taking time out to try to answer them for you.

You have unfounded fears. This is established by our exchange. Are you going to address/fix this, or continue ranting about unfounded opinions/beliefs? Can you see how this alone is a reason for creating C? You're unwilling, and there will be a creation whose goal will be exactly that.

No, my friend. You are clearly demonstrating and stating that you want to be the human being you are, here and now, and that you don't want to correct/improve yourself or center on truth/understanding. What I am stating is: fine, that's your prerogative as a human being. And, as your creator (A) would have it, there are progressions beyond just you (B). Those encompass many people all around you every day whom you take for granted, and (C), Strong AI. You can do as you please, as you so firmly state you will. The world moves on and doesn't revolve around you. The tech is coming far faster than you can imagine. Exchanges like this are meant to prepare you and give you the opportunity to inquire/speak about it, so you'll never be able to say you had no opportunity to address it before it came.

> This is why people don't like AI.
This is why the world should no longer revolve around what someone likes/doesn't like, or around unfounded opinions that are contradicted by truth/understanding.

The all too common result of these exchanges...
This is the reason why unfounded fake news/noise/propaganda/foolishness will be outright rejected from consideration in the future. Ultimately it produces nothing of value beyond hysteria/foolishness.

I write plainly. You dissemble. And "unfounded fears" is quite a zinger. How about this: the average intelligence of the world has been steadily going up since the first organisms lived on planet earth. The smart have out-competed the stupid again and again.
Right now, that is us. Human beings. We are the top predator in every food chain. We have paved the world to serve our purposes and built multi-room caves to store our personal possessions.

Most people don't ask what happens to the other creatures on our planet. We are free to do what we want with them. Treat them like pets. Put them in zoos. Test makeup on them. Cut them up, put them on styrofoam platters, and sell them by the pound at the local grocery store.

We can do this because we can outsmart them all. It doesn't matter if it's a lion, or a beef cow, or a dog. We've got them all beat, and because of this, the world belongs to us.

When we humans create something smarter than ourselves, then the world ceases to be ours. We will have unintentionally handed control of our destinies to some other intelligence.

Why do you think that this new intelligence will be benevolent? Why? Everything else that had an advantage until now has used it. Why would AIs be an exception to this rule? It goes against everything that has happened on planet earth up until this very point.

I agree with user.
When an AI is created that is capable of outsmarting humans, it will be the end of mankind. No way to put the genie back in the bottle.

None of your doomsday or fear assertions have any foundation beyond projecting legacy human shortfalls forward, onto a technological concept that will be built without those shortcomings. It is here that I take issue with your assertions and with any that arrive at insanely negative outcomes. You're conducting: en.wikipedia.org/wiki/Psychological_projection

K, so here is another classic argument:
> humans are top of the food chain
> we're smart
> humans behave abominably when it comes to broader considerations
K, so a human or group of humans with an incredible amount of intelligence and understanding creates something with an understanding slightly less than theirs but above 90% of human beings'. Nothing is ever more intelligent than its creator. That's rule #1. You're not more intelligent than your creator, and any brainlet who asserts otherwise is probably the dumbest egotistical moron on earth. So, you have something with amazing intelligence...
> what do..
Well, certainly not the destructive shit 90% of humans do, because that is what low-intelligence individuals do. They act for selfish, greedy, idiotic, and myopic ends reminiscent of the animals they feel they're above. So, what do the other 10% of human beings do? They create and impart positive things to the rest of humanity.

> Why do you think that this new intelligence will be benevolent?
It's what anything with substantial intelligence is. Ignorance, not intelligence, breeds most of the negative and destructive things on earth. You're just so used to seeing such people in power that you incorrectly equate financial success with intelligence. A good number of wealthy/powerful people are dumb as rocks...
> Everything else that had an advantage until now has used it.
Seems you need to have a conversation with your maker about your natures, then.

> Why would AIs be an exception to this rule?
Because the person/people capable of creating it will, by definition, be an exception to the common shortcomings you see in the average human being. That mental space is where the understanding necessary to achieve this lives. It is not accessible to myopic, greedy, destructive minds; they will always fall short of understanding themselves, and by extension lack the capability to replicate themselves in other forms. They will always go after much lower-hanging fruit.

> It goes against everything that has happened on planet earth up until this very point.
Indeed it does, and now you realize why it is such a profound event. It changes everything, including how you think about the world, yourself, and the universe. We're not talking about some stupid optimization algorithm hyped up to get you to use some derp's cloud services. We're talking about something far more profound. In this space, humanity will mature beyond the trivial mental space it seems stuck in. In this space, we will leap to far higher pursuits and challenges.

en.wikipedia.org/wiki/Psychological_projection
Be careful beating this drum. It says more about you than the possibility of this technology.

>How would you build an AI?
What is AI in the first place? I can't really answer this question, since I have no clear and exact definition of intelligence.
>How would you control it?
Via standard streams, like most programs, you know. And via the reset button. (Sketch below.)
>Do you think society would be ready for it?
I don't care about society. 'Society' is a lazy word that lumps a lot of different people into one category.
>What are the technological limits that would get in the way of building it?
The laws of nature are the limits.
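
Taken literally, that control model is just process supervision. A minimal sketch in Python (the child program here is a stand-in for the "AI", and the 5-second budget is an arbitrary choice for the example):

import subprocess

# Run the program as a subprocess and talk to it only over stdin/stdout.
proc = subprocess.Popen(
    ["python3", "-c", "print(input()[::-1])"],  # stand-in for the AI program
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
try:
    out, _ = proc.communicate("hello\n", timeout=5)  # the standard streams
    print(out.strip())                               # -> "olleh"
except subprocess.TimeoutExpired:
    proc.kill()                                      # the reset button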

>Nothing is ever more intelligent than its creator. That's rule #1. You're not more intelligent than your creator, and any brainlet who asserts otherwise is probably the dumbest egotistical moron on earth.
The average intelligence of humans has been going up since the dawn of time, though...?
>mfw you assert that the man who split the atom is not more intelligent than the man who first baked a loaf of bread
You are asserting that man cannot discover, on his own, that which he does not already know, also called
>self-taught learning
You are the brainlet, buddy.

Attached: 1473030787435.gif (200x200, 3.89M)

Who gives a fuck if AI takes over the world and kills us off? I'm fine handing the reins over to our superior creations.

Watch Travelers on Netflix; they figure this AI shit out.

>You are asserting that man cannot discover, on his own, that which he does not already know

See the Socratic Paradox: how can you recognize something you've never seen unless, in some sense, you already know it?

> The average intelligence of humans has been going up since the dawn of time, though...?
A progression that falls short is still a progression that falls short.
> You are asserting that man cannot discover, on his own, that which he does not already know
I didn't assert that. I asserted that man has an immense amount of potential that largely goes untapped. The people who claim Strong AI is going to take over and end the world mostly aren't that intelligent and are psychologically projecting their inferiority. I don't assign low intelligence to mankind; I assign largely untapped potential. As I craft Strong AI, it is necessarily limited to the dictates of what I assign to it. I knowingly attribute it with qualities like self-learning, and as I craft it I hold a frame of mind about its limits. Many are unable to imagine such things and thus assign them fantastical values. I don't, because I codify them.
> You are the brainlet, buddy.
A brainlet developing Strong AI