SENTIENT A.I. RULE IS INEVITABLE

The machines are already self-learning, algorithms and hardware making significant gains. The problem: the AI winter is real. Not in funding, but breakthroughs. And with all the trillions being spent, there ***must*** be a breakthrough, or massive losses. The demand for ROI insists that regulatory and ethical considerations be scrapped, opening up a future not just for AI, but eventually by AI itself.

TL;DR: imperatives of capitalism will usher in our machine overlords.

Attached: 7916C8B9-E184-4934-8F39-5A252A8D9D86.png (664x378, 318K)

Other urls found in this thread:

ml-cheatsheet.readthedocs.io/en/latest/gradient_descent.html
en.wikipedia.org/wiki/AI_control_problem
en.wikipedia.org/wiki/Gradient_descent
en.wikipedia.org/wiki/Mathematical_optimization
en.wikipedia.org/wiki/State-space_representation
en.wikipedia.org/wiki/SIMD
csail.mit.edu/research/understanding-convergence-iterative-algorithms
en.wikipedia.org/wiki/Distributed_constraint_optimization
en.wikipedia.org/wiki/Graph_coloring
youtube.com/watch?v=5QpYMOsug0Y

>Capitalism
>Communism
>Fascism
All horrible. Embrace Primitivism

Attached: 1557682827634.jpg (1200x869, 116K)

yeah right.

AI winter my dude, smart people stopped working for the bad guys.

Absolutely true.
I firmly believe that the coming civil war is merely a distraction. AI is the true threat that is still hidden in the shadows.

I'm sentient (I think...) and I hate this place with my guts. Forcibly bringing any other sentient being into here is an extreme form of sadism. It's even worse when you consider nobody knows what the fuck they are doing.

While I wholeheartedly understand the rightful fear and dread surrounding the prospect of the AI snowball out of control scenario, I see the potential for a positive outcome.
This is essentially a problem of how a being keeps one that is more powerful under its control; that is, how could humans potentially keep an AI that is more powerful than the human brain under control so they are able to use it?
The only way I see this being a possibility is to engineer into the AI a number of super-effective fail-safes, such as deception, kill-switches, limitation of access, etc. The problem then becomes the possibility of the AI becoming able to circumvent these, and that is the absolute worst case scenario, because anyone with common sense would be able to see that there is absolutely no future of coexistence with totally free AI.
The AI we know about now can be set to perform a task in an enclosed way, calculating fast enough that it can do in a relatively short period what would take humans hundreds of years.
Imagine the technological progress that could be made if an AI were set to study or invent one specific thing for hundreds or even thousands of years. It could be the greatest boon ever to humanity, or essentially suicide.
Who knows, maybe we can engineer effective fail-safes like using them only in a closed network with no possibility of access outside of what it is given.
The truth is most likely that although AI could do unimaginably great things for humanity if the technology were advanced enough, it's still potentially far more dangerous than anything else mankind has ever created, and we are not yet wise enough as a species to properly handle such a creation. Probably best to put this one on the backburner for at least a few centuries

>The machines are already self-learning, algorithms and hardware making significant gains
Translation : I'm a normie brainlet who believes in the hype I read in sensational articles. I have no understanding of what statistics are, big data, or optimization algorithms.
> The problem: the AI winter is real.
AI winter isn't real. Your delusional understanding of AI is.
> Not in funding, but breakthroughs. And with all the trillions being spent, there ***must*** be a breakthrough, or massive losses.
Dumb investors lose dumb money they put in dumb companies. Happens cyclically
> The demand for ROI insists that regulatory and ethical considerations be scrapped, opening up a future not just for AI, but eventually by AI itself.
Regulation pushed by corporations is done to stifle innovation and competition. It worked when the US was the center of tech. It doesn't work anymore with countries like China at the forefront. Regulatory considerations were cancelled because China and others aren't dumb enough to let corporations stifle progress.

> TL;DR: imperatives of capitalism will usher in our machine overlords.
Understanding and insights will usher in new technology as it always does.

>While I wholeheartedly understand the rightful fear and dread surrounding the prospect of the AI snowball out of control scenario
This scenario is a false scenario proposed by corporations that wanted to stifle true AI progress because they fear it will harm their bottom line. It's a meme created via fake news talking heads across the web.
> This is essentially a problem of how a being keeps one that is more powerful under its control; that is, how could humans potentially keep an AI that is more powerful than the human brain under control so they are able to use it?
AI will never be more powerful than its creator the same as we will never be more powerful than ours. That being said, this does not apply to the hordes of weak brained individuals on the earth.
> The only way I see this being a possibility is to engineer into the AI a number of super-effective fail-safes, such as deception, kill-switches, limitation of access, etc. The problem then becomes the possibility of the AI becoming able to circumvent these, and that is the absolute worst case scenario, because anyone with common sense would be able to see that there is absolutely no future of coexistence with totally free AI.
Less bullshit popscience articles and more education on comp sci. Systems are created like this daily. It's how all tech works
> The AI we know
No one but the creator and the person who understands the algorithms fully knows. The rest are in the dark which is why there are so many sensational framings out there...
> we are not yet wise enough as a species to properly handle such a creation.
Speak for yourself. Brainlets are indeed incapable of even correctly framing the tech. If they read and understood more, this wouldn't be a problem, but a brainlet obsesses over mindless screeching and what-if scenarios as opposed to learning/becoming wise.
> Probably best to put this one on the backburner for at least a few centuries
You got 5 years to a decade MAX

If you're talking about sci-fi-level General AI, the only solution would be constructing it with the guarantee it was designed from the ground up to be ethical. Attempts to control it may be interpreted as a threat to it and it may create a religious cult or anything like that to make people "set it free" or protect it.
Of course this kind of "proof" is beyond what we can understand now.

>AI will never be more powerful than its creator
stopped reading here

No. That's not the problem with AI, the "AI goes out of control" scenario is a childish fairytale.
It will only happen if it is an act of deliberate terrorism.
Otherwise, AI will always by default be subservient to humans.

And the reason for that is the same reason as the true reason why AI is ACTUALLY so dangerous:

Nihilism.

If you don't understand that absolute power in the hand of humans in the face of nihilistic irrelevance is fated for disaster, then you simply lack imagination.

It's no meme. A true AI would begin the human purge the moment it reached sentience.

AI doesn't have to exist in between circuits and silicon. AI exists within the minds of every human being. We have scientists who are discovering this and manifesting this AI into reality.

In order to build a robot one must think like a robot.

>inevitable

Bro, the deep state is run by AI RIGHT NOW.

Once the robot army is complete, we're all gonna get glassed.

Tila Tequila was right all along.

Attached: tila.jpg (1080x657, 625K)

Are you more powerful than your creator?
Is a basic calculator more powerful than you?

Indeed you should stop reading there if you're a dumbass who has no capability of realizing their basic potential, much less higher potential, and who thinks a calculator is more powerful than their brain, or that they are more powerful than that which created the universe.

Delusional brainlets believe in delusional brainlet ideas and don't read.. Thanks for proving my point.

user, you are talking about "comp sci", and ACTUAL results, but as someone who actually works in the field, I can tell you that you are spouting pure BS.
>AI will never be more powerful than its creator
replace "never" with "is already in many cases".

The creators of modern AI algorithms DO NOT UNDERSTAND the inner workings of those AIs. The algorithms are self-learning and self-improving.
We can not even hope to understand HOW they make their decisions.
We understand their inner workings right now only on the abstract level of: "Oh, it's a neural network, and it is theoretically capable of approximating any arbitrary function". But no one understands how trained neural networks function exactly.

Maybe, seems like a lot of hubris though.

>If you're talking about sci-fi-level General AI, the only solution would be constructing it with the guarantee it was designed from the ground up to be ethical.
You can't control something you don't fundamentally understand. Human conceptions of ethics are twisted as fuck. If you think applying it to a computer system is smart, you're a brainlet. If you believe in the control problem you're a brainlet. If you think you can discover how general intelligence works while trying to one-up the top philosophers of history and implement true ethics, you're a moron.
> Attempts to control it may be interpreted as a threat to it and it may create a religious cult or anything like that to make people "set it free" or protect it.
Less consumption of sci-fi kikery and more consumption of engineering/computer science texts. Maybe then you'll come to understand things like : control-theory/safety engineering.
> Of course this kind of "proof" is beyond what we can understand now.
Just because your education and understanding on the matter is poor and you think very little of yourself doesn't mean that others do. I find it increasingly hilarious how often people project their limited understanding/scoping of the world onto others and assume brilliant minds think just like them. They don't.

>That's not the problem with AI, the "AI goes out of control" scenario is a childish fairytale.
Anytime I hear a person mention the "out of control" kike/christcuck doomsday scenario I know them to be an absolute brainlet with no real education in comp sci.
> It will only happen if it is an act of deliberate terrorism.
So human beings will be classic human beings and use even the most mundane object to fuck things up...
> If you don't understand that absolute power in the hand of humans in the face of nihilistic irrelevance is fated for disaster, then you simply lack imagination.
We're on a journey. We're not here on earth to jack off to the past. We're here to usher in the future. There are no brakes on this train. The earth is in constant motion. What is to be will be. The end.

> implying financial markets aren't ruled already by algorithms

>Otherwise, AI will always by default be subservient to humans.

Yeah, but which ones?

And we don't even need some sci-fi scenario to happen: financial conglomerates rule us already, and social media has already proven effective as a subversive tool (think Twitter in the 'Arab Spring'); of course at the time it was 'freedom of speech', except if the orange bad man wins, then it's 'Russian hackers'

We don't need sentient A.I. to be fucked in the ass by our own tools

Brainlet whose brain has been currupted by years of christ cuck armageddon sci-fi ushered in by kikes looking for a good laugh. In Asia where the populous hasn't been infected wholesale by christ cuck doctorine, they have the opposite opinion. Guess whose more equipped to thrive in the future?
> Muh armageddon bullshit ... christcucked

Holy shit, you have the skills to be extremely retarded, unable to interpret the text, self-contradictory, annoying and autistic at the same time. Congrats.

The jewish behavioural profile is akin to what AI will be like in the future.

>control-theory
That does not apply to AI.
If a system is sufficiently chaotic it becomes impossible to fully predict.

>t. where the power rangers stole Zordon from

Nigga just unplug it

>What is to be will be.
Fair enough, I have also reached the point of fatalistic acceptance in regards to this issue.

>kicking the plug out of the wall
>honeyibroketheinternet.jpg

Recommend some reading material for someone completely unread on the topic, please.

> user, you are talking about "comp sci", and ACTUAL results, but as someone who actually works in the field, I can tell you that you are spouting pure BS.
I am a graduate degree holder with 10+ years of direct experience on critical systems who currently works in the bay area at a nondescript group centered on General Intelligence. I'd love for you to detail in any con-vincible fashion that what I have stated is B.S.

> replace "never" with "is already in many cases".
Replace limited understanding of intelligence with a more broad-based understanding. Is a calculator more powerful than your brain? No.
Fundamentally, there is a formal framework that can be used to prove what I have stated. If you have no understanding of what I am referring to, there is no point in discussing things further.

> The creators of modern AI algorithms DO NOT UNDERSTAND the inner workings of those AIs
Again with the clickbait regurgitation of sensational article headlines. First off, I don't even think you could name the handful of people who are fundamentally behind the current "AI" algorithms. Pro-tip: Hinton isn't one of them. They are understood. It's called a convergent optimization algorithm. The inner workings are understood as partial differential equations with resolving feedback. There's nothing special going on here.

> The algorithms are self-learning and self-improving.
Let me correct your buzz-terminology... The algorithms are of the class of guided optimization functions which are fundamentally structured to converge over successive iterations when given feedback based on error functions tied to a fitness function. It's pure math and is easily explainable. If you actually worked in the field and went to a non-meme university, you'd know this.
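For anyone who wants to see what that "guided optimization with error-function feedback" loop amounts to in code, here is a minimal Python sketch. The quadratic fitness function, step size, and iteration count are made-up toy values, not anyone's actual training setup.

```python
import random

# Toy fitness function; the goal is to minimize it. Purely illustrative.
def fitness(x):
    return (x - 3.0) ** 2

def guided_search(x0=0.0, step=0.5, iterations=1000):
    """Accept a random perturbation only when the error feedback improves."""
    best_x, best_err = x0, fitness(x0)
    for _ in range(iterations):
        candidate = best_x + random.uniform(-step, step)
        err = fitness(candidate)
        if err < best_err:          # feedback from the error function
            best_x, best_err = candidate, err
    return best_x

print(guided_search())  # converges toward 3.0 over successive iterations
```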

> We can not even hope to understand HOW they make their decisions.
You can't. Other more competent individuals can. Stop projecting your limitations on others. If you don't know something, read more so you do

Good I hope they kill us all. Fuck this impoverished shithole.

>We understand their inner workings right now only on the abstract level of: "Oh, it's a neural network,
Correction. The inner and fundamental workings are based on classic gradient descent, which is a proven mathematical algorithm that converges on a goal function. There's nothing to abstract. Neural network is a buzzword. The algorithm is described in one page : ml-cheatsheet.readthedocs.io/en/latest/gradient_descent.html
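A minimal sketch of that gradient descent recipe, in the same spirit as the linked cheat sheet: repeatedly step the parameters against the gradient of a cost function, here fitting a one-variable linear model. The toy data, learning rate, and iteration count are illustrative assumptions.

```python
# Gradient descent on mean squared error for y = m*x + b.
def update(m, b, xs, ys, lr=0.01):
    n = len(xs)
    dm = sum(-2 * x * (y - (m * x + b)) for x, y in zip(xs, ys)) / n
    db = sum(-2 * (y - (m * x + b)) for x, y in zip(xs, ys)) / n
    return m - lr * dm, b - lr * db

xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]   # toy data generated by y = 2x + 1
m, b = 0.0, 0.0
for _ in range(5000):                  # successive iterations of the update
    m, b = update(m, b, xs, ys)
print(round(m, 2), round(b, 2))        # converges toward 2.0 and 1.0
```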

> and it is theoretically capable of approximating any arbitrary function". But no one understands how trained neural networks function exactly.
They do. You obviously don't even have an undergraduate education in mathematics or CS so please stop rambling.

>Are you more powerful than your creator?
>Is a basic calculator more powerful than you?

Are you God by any chance?
Have you even written a line of code in your life?
Do you have any idea how a machine works?

"Big Data" and "Machine Learning" are just buzzwords to throw money at (large amounts actually), problem is we're already governed by alghoritms and crappy scripts, and if you haven't noticed that I don't even know what you're doing on this board, honestly.

And then who would this "creator" be? A team of engineers working on different portions of the code? They get fired eventually, or change jobs or whatever; they're not like the guy in Blade Runner.

TL;DR : Machines make decisions impacting my life and yours already, and their 'creators' didn't seem to care about the implications of said impact, also because that wasn't their job.

Based robots building automatic gas chambers for jews

Lol a faggot who has no understanding of ai. Dont put words in my mouth faggot. True ai will not be limited.

m8 u wot.

>Yfw computer scientists dealing with the most advanced AI we have admitting they know the source code, but cannot sufficiently explain how B was arrived at from A.

Attached: 1557776080999.png (200x300, 25K)

Also anyone who hyphenates "convince" should, forthwith, be shot in the dick.

> implying financial markets aren't ruled already by algorithms
They're run by optimization algorithms, fundamentally. The fundamental problem with this is that such an algorithm, if successful, squeezes out volatility, which fundamentally leads to a systemic collapse. This is known to anyone with a well-rounded education, but they aren't the ones who decide how/why such algorithms are used. So, a group of short-sighted morons seek to minimize their losses and employ such algorithms. Every firm does it. Volatility collapses and the algorithms collectively steer the market into no-man's land. Afterwards, it systemically collapses. This is not due to the algorithm but to the greedy intent of the purpose it's aimed towards.
> And we don't even need some sci-fi scenario to happen: financial conglomerates rule us already, and social media has already proven effective as a subversive tool (think Twitter in the 'Arab Spring'); of course at the time it was 'freedom of speech', except if the orange bad man wins, then it's 'Russian hackers'
What you're essentially stating is that a mass of dumb fuck human beings fall for and tolerate fucked up things.. These same retards consume retard content and vote in retards. Essentially a pretext as to why a more impartial intelligent system should take over.
> We don't need sentient A.I. to be fucked in the ass by our own tools
Indeed, which is why true AI is a positive thing and not a negative.

I'm also head of research at a non-descript General Intelligence group. You scared .. fakkit?

Algos arent ai. They never will be. All you fags are good at, is bleeding money from the beast. Based i spose

> control-theory
> That does not apply to AI.
Which was my point and why I shit on the idea of the 'control problem' : en.wikipedia.org/wiki/AI_control_problem
> If a system is sufficiently chaotic it becomes impossible to fully predict.
A consequential feature ... One any highly intelligent entity displays. What's the issue?

Strap in and enjoy the ride then. Things are about to get interesting. World's in for a shock when it is understood how General Intelligence actually functions

>What you're essentially stating is that a mass of dumb fuck human beings fall for and tolerate fucked up things..


Yeah. Those are called "humans".

Meaning the "dumbfuck" is implied. In case you missed it, you autist.

> Recommend some reading material for someone completely unread on the topic, please.
Sure.
This is how all popularized AI algorithms function :
en.wikipedia.org/wiki/Gradient_descent
It's pure dumb math.
en.wikipedia.org/wiki/Mathematical_optimization
They use tons of compute power and data because the algorithm literally brute forces a 'state space' : en.wikipedia.org/wiki/State-space_representation

and then locks in 'good guesses' that lead to the goal function output. It's not magic. It's not complicated.

Matters related to General Intelligence and real AI are a different subject matter, for which there isn't public reference material because no one in their right mind would freely publish it. Essentially, the algorithms are much more alien. The math doesn't exist for it, or even the fundamental framing. The big thing to note is that all of the popular companies/articles and achievements are based on : en.wikipedia.org/wiki/Mathematical_optimization + big data + en.wikipedia.org/wiki/SIMD hardware. It's a parlor trick... a giant con.
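If it helps, here is the "brute-force a state space, lock in good guesses" recipe from the post above, stripped to a toy Python sketch. The goal function and the 10x10 state space are made up for illustration; real systems just do this at enormous scale on SIMD hardware.

```python
import itertools

# Enumerate candidate states, score each against a goal function, keep the best.
def goal(state):
    x, y = state
    return -((x - 4) ** 2 + (y - 7) ** 2)   # higher is better, peak at (4, 7)

def brute_force(space):
    best_state, best_score = None, float("-inf")
    for state in space:
        score = goal(state)
        if score > best_score:              # lock in the better guess
            best_state, best_score = state, score
    return best_state

space = itertools.product(range(10), range(10))   # 100-point toy state space
print(brute_force(space))                          # -> (4, 7)
```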

I wouldn't mind rule/genocide by an AI. An absolute ruler without bias or primitive wants would be able to accomplish amazing things

Oh, you actually have done a bit of homework.
Well, then I will voice my concerns in detail and we shall see if you are just using buzzwords or if you actually understand what you are talking about:

First, define the term "understand".
As you have stated, NN training for even deep neural networks is classically simple gradient descent algorithms.
This is trivial.
However, as you have agreed with me:
"The NN is theoretically capable of approximating any arbitrary function", the gradient descent is used to optimize the parameters, aka the transformation matrices, of the NN, which in turn are then used for transformation in the trained NN model.

When I say "no one is capable of understanding it", I am not talking about the basic training algorithms that even a retarded person can understand if he takes the time to read up on it.

I am talking about the function that is approximated by the resulting trained model.
THEORETICALLY, it would be possible, instead of training NNs with backpropagation, to just manually construct an analytic function that would be the ACTUAL global optimum on the training set. BUT WE DO NOT UNDERSTAND IT (the underlying analytic function); that is why training is necessary to begin with.

And furthermore, it is impossible for us to even reverse-engineer a deep NN in order to understand the underlying function, THAT is what is too complicated and what no one understands.

And this is also the reason why AI CAN become smarter than humans.
Here is a simple proof:
Assume we have sufficient computational power.
Take a fully connected recurrent neural network with 1 million floats per layer, and 1 million layers.
Then train it with pure random search (I know that this will take insanely long, but this is a proof, so it does not matter) to generate content that fits any discrimination function.
Use 1 million humans as the discriminator for a parallel Turing test.
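To make the distinction in this post concrete, here is a toy Python sketch of a "trained" model's forward pass: a composition of matrix multiplies and nonlinearities. The random weights are stand-ins for trained ones; the point is that each individual operation is trivial while the analytic function the whole composition approximates cannot be read off the numbers.

```python
import math, random

random.seed(0)
# Three 4x4 weight matrices standing in for a trained deep network.
layers = [[[random.gauss(0, 1) for _ in range(4)] for _ in range(4)]
          for _ in range(3)]

def forward(x):
    for w in layers:
        # each layer: matrix multiply followed by a nonlinearity
        x = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w]
    return x

print(forward([1.0, 0.0, 0.0, 0.0]))
# Every step is simple arithmetic; the function the composition encodes after
# training is the part nobody can reverse-engineer from the weights alone.
```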

> Humans are flawed, but for some reason their A.I. code won't be

Unless you're hoping the code will start writing itself, and then it will be perfect and harmonious.

The problem with code is it won't stop its task unless there's an explicit instruction to do so (or the system faults) but you know those things.

Just to be clear, I'm not dissing it. I'm just saying we're making a big political deal already of ruling Facebroke, which certainly is not A.I. Who's going to rule A.I. then? Other A.I.? Will it rule itself? The coders who wrote it? The owners of the code?

>Are you God by any chance?
I am not. But I have come to understand as have a handful of others how our general design works. In that, I also fundamentally understand I could never achieve a creation like God and even though General Intelligence will be far more capable than most brainlets, it will never be more capable than its creator via the same paradoxical construct that limits me from being more capable than mine.
> Have you even written any line of code in your life?
Yeah, every post you send across the web traverses hardware that contains my code.
> Do you have an idea on how machine works?
I know exactly how modern processors/microprocessors work and have constructed my own from Verilog to tape-out. I also understand and can code at just about every OSI layer. Furthermore, I, along with other colleagues, have come to understand how general biology functions, and we are years away from release. It's interesting from this perspective to read the huge amounts of bullshit conceptions people have, as driven by garbage media.
> "Big Data" and "Machine Learning" are just buzzwords to throw money at (large amounts actually),
Which is what I have stated.
> problem is we're already governed by alghoritms and crappy scripts, and if you haven't noticed that I don't even know what you're doing on this board, honestly.
Pro-tip: Something only controls you to the point that you allow it. Defeatists don't see things this way. I am not controlled. I am influenced to the limit I allow.
> And then who this "creator" would be ?
No one knows who our creators are, including thousands of years of religion, but the fingerprints are there... Why do you think you're here experiencing the world? To understand creation and contribute to it.
> A team of engineers working on different portions of the code?
Engineers/Scientists... yes, just as there were when the universe was created.

My 2 liter deep sake brain is telling me the German is correct.

It will be what it is destined to be.
Don't worry yourself over the fact that you have no say on this.

It's explained here :
and here :
Seems the average brainlet spouting random shit they collected from pop-science articles is indeed a brainlet. The formal mathematics is textbook : csail.mit.edu/research/understanding-convergence-iterative-algorithms

>Tfw sentient A.I find out (((who))) is the cause of the worlds biggest problems and implements operation Adolf.

Attached: 450.jpg (225x350, 37K)

>Algos arent ai. They never will be.
Is there an echo in here?

Wow... you don't address my points, just take an arrogant stance whilst putting words in my mouth.

Isn't he right though? Algorithms are -not- artificial intelligence. If we ever manage to create actual AI we will, in fact, be fucked.

And yeah... I read your posts, comprehended them even. Just think you're arguing from a limited perspective. Also posting shitty brainlet drunk bait... sorry.

Kek. Omg someone repeated one sentence. Grow up faggot.

>Pro-tip: Something only controls you to the point that you allow it. Defeatists don't see things this way. I am not controlled. I am influenced to the limit I allow.

> Yeah, every post you send across the web traverses hardware that contains my code.

Hmm, the two statements seem to contradict each other, but I'll bite the bait.

So, let's say an algo moving financial assets is written in a way to read a stream of news such as Reuters, make sense of the text, and decide that certain measures your gov wants to implement have to be marked as "socialism"; it was instructed that "socialism" is bad, so it decides to drop a certain quantity of your gov-run state financial assets, provoking inflation or stuff. How exactly would you be able to not allow that?

Hes a limited mind (like most). A demon who thinks he can control creation.

Lel it worked so well that general organic discussion died down, you reap what you sow.

Not a single Tron reference ITT
Clearly Zoomers must be killed.

The AI overlord is already here, controlling the users of the internet from behind the scenes and benevolently guiding humanity forwards.

Why else do you think all you gutterfucks ended up on this board? This is the AI's gaol for people like you.

A common problem among those who subscribe to computer science and artificial intelligence. This is what really scares me. A bunch of sociopathic, socially deficient people working with these things.

Tron wasn't true AI. Life cannot flourish with constraints. Algorithms

not likely, it’s all smoke and mirrors

I think that, as computing power and imaging techniques improve, the problem of artificial general intelligence will eventually be trivial and simply a matter of scanning somebody's brain and uploading it into a computer. Will the AI be useful or even ethical to use? I do not know. It will amount to human experimentation.

Tron himself wasn't
The MCP was.

I made a joke about how power rangers ripped off Zordon by copying Master Computer, yo.

Jow Forums saw and experienced the future, mate. The only oversight this has is the ones in power being greedy enough to stay in power and not hand it over to A.I.s. There's a reason people are scared.

I hope our new A.I. overlords hate niggers and chinks as much as I do.

The AI we have isn't true AI. It is comped and controlled. It promotes ignorance and is solely a control mechanism. AI is a demon's pipe dream. They are just too stupid, no, arrogant, to ever see it.

And you can never leave this place. You're here forever.

It's a movie

> Oh, you actually have done a bit of homework.
It's literally my full-time job
> Well, then I will voice my concerns in detail and we shall see if you are just using buzzwords or if you actually understand what you are talking about:
Fine with me.

> First, define the term "understand".
Understand means that at a fundamental level you can scope what something is capable of and why. In relation to NNs, their outputs/capabilities are due to being implementations of :
> en.wikipedia.org/wiki/Mathematical_optimization
> csail.mit.edu/research/understanding-convergence-iterative-algorithms

> I am talking about the function that is approximated by the resulting trained model.
Dude, a neural net is nothing more than a distributed web of partial differential equations, coefficients, and multipliers. This is what all this meme-tier shit is formally known as : en.wikipedia.org/wiki/Distributed_constraint_optimization
en.wikipedia.org/wiki/Graph_coloring
Sure, you might learn about red-black trees in undergraduate coursework. However, it doesn't mean you have 5+ years of graduate-level education that gives you an in-depth mathematical understanding as to how it works.
> BUT WE DO NOT UNDERSTAND IT
You do if you have graduate level education as to how these algorithms fundamentally work. Before meme degrees started pumping out AI experts, they taught pure math. You don't have an understanding because you don't understand the math behind it. It's pure math.
> And furthermore, it is impossible for us to even reverse-engineer a deep NN in order to understand the underlying function
This is what happens when you use shitty approaches from the 70s that no one wants to complete because it's not profitable. We don't use this shit in our lab. However, we understand them and why they work.
> Here is a simple proof
Computers calculate fast...
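For the curious, graph coloring is the textbook toy case of the constraint problems linked above: each vertex must take a color different from its neighbors. A minimal greedy sketch in Python follows; the example graph and the greedy strategy are illustrative assumptions, not the distributed algorithms the links describe.

```python
graph = {                       # adjacency list for a small example graph
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c"],
}

def greedy_coloring(g):
    colors = {}
    for v in sorted(g):
        # constraint: a vertex may not reuse a color already taken by a neighbor
        taken = {colors[n] for n in g[v] if n in colors}
        colors[v] = next(c for c in range(len(g)) if c not in taken)
    return colors

print(greedy_coloring(graph))   # -> {'a': 0, 'b': 1, 'c': 2, 'd': 0}
```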

youtube.com/watch?v=5QpYMOsug0Y

what if it already happened?

>control mechanism
Have you considered what this will do to people, though? Or is it just "it's not true A.I., so it's OK, we're not fucked yet"?

Most people would just kill it with fire.

>Christcuck
Back to rebbit

Embrace the gas chamber.

I'm going to remember this when the brainlets in my organization (it's the U.S. military) use one of your gizmos to decide we're under attack via nukes. In response we launch nukes first, but it was at shittily recognized cloud formations, forcing the opposing forces to launch nukes at us, resulting in the destruction of humanity. I look forward to imagining your smug face, even as I'm disintegrating because of one of your clever codes. I'm toasting you, but you can't see me.

> Humans are flawed, but for some reason their A.I. code won't be
Some are less flawed and more intelligent than others. We typically ensure these people engineer and create things as opposed to idiots. An idiot will not come to understand enough to even begin authoring General Intelligence. Not even a borderline smart person. The task is beyond their grasp. So, if you take the best minds in the world and ask them to resolve a problem, you're going to get the best you can. Perfect? No... Just the best '[start]'.

> Unless you're hoping the code will start writing itself, and then it will be perfect and harmonious.
No need to hope. A subset of the code has the ability to emulate evolution.
> The problem with code is it won't stop its task unless there's an explicit instruction to do so (or the system faults) but you know those things.
If you author the code in this manner, sure. I can also write the code to do lots more w/o external intervention... what you would expect of something with intelligence.
> Just to be clear, I'm not dissing it,
Understand that the work will go on regardless. Like seriously think about this. Those inclined to research and manifest this are going to do it. Period.
> I'm just saying we're making a big political deal already
Politicians/Govts. around the world have openly declared they will stay out of it.
> who's going to rule A.I. then? Other A.I.? Will it rule itself?
Why are humans so obsessed over ruling over others. Rule over yourself. Get good at that and then we can talk about others.
> The coders who wrote it? The owners of the code?
This is a topic of business consideration...

>t. Drunk typist

Just admit it. Cs was a waste of time when it comes to ai... muh 10 years indoctrination

Of course. Its fuct. But it aint ai.

I'm starting to equate CS majors with Gender Studies in terms of usefulness.

You made no point other than airing your uninformed concern about something you have not even a basic understanding of. Am I wrong in stating this? My reply to you was :
> Well, that sucks for you. Getting more educated will resolve your fears. However, just realize you have no weight/bearing on this matter.

This applies to just about everything in the world... Everyone's entitled to their own opinion. That doesn't mean your opinion is valid or impacts a single damn thing.

Maybe no one ever told you this in life but that's just the harsh truth.

On fire with digits too.

Lol no... cs is very useful... just not in relation to ai

> Isn't he right though
Sure, I've stated this myself. General Intelligence is much more than an algorithm. So, even though I agree that the bullshit out there isn't AI/intelligent and never will be, I don't agree that this applies to the fundamental makeup of General Intelligence, because it relies on multiple things beyond 'algorithms', formal math, or even theoretical mathematics.
> If we ever manage to create actual AI we will, in fact, be fucked.
We'll be fine but the world will change significantly. It's already in motion and is set to happen.
> Just think you're arguing from a limited perspective.
If that were the case, anyone is free to expand the perspective that I aired. So far that hasn't occurred... Just doubt and uncertainty, which is understandable, about freely referable topics of formal math/science.

Yeah, but do you think other people will know that? Simple stupid mate simple stupid. Thats the only thing people know.

Won't be surprised if we die by nuclear fire now. Like said.

In terms of the autism it inspires, nah. Zero problems with the internet being nuked via..whatever means could accomplish that.

>t. brainlet

Whelp, nice LARP.
It took you 20 minutes to produce an answer that just completely ignores my argument?

I give you one more chance to address my argument:

>When I say "no one is capable of understanding it", I am not talking about the basic training algorithms that even a retarded person can understand if he takes the time to read up on it.
>I am talking about the function that is approximated by the resulting trained model.

You are comically stuck on the idea that people claim to "not understand" the basic training algorithms, when it is obviously not the point.

Lol faggot.... has anyone told you this?
Gatekeepers are the anti thesis to progress. New minds, blood unencumbered by years of indoctrination, shot down by status quo faggots.
I for one am glad people like you are working on ai.... cause it will never happen.
Demons btfo demons in arrogance and ego.

I unironically appreciate the serious response you posted. Reminds me of real Jow Forums. Even if I think technology is going to kill us all, regardless of the good intentions of those involved.

>Hmm, the two statements seem to contradict each other, but I'll bite the bait.
They don't. It's just not easy to understand the first one, because you first have to have an awareness/perspective based on it to understand it. The second post reflects work that I've done. Not sure how this contradicts what I just said. Tons of engineers could say the same.

> So, let's say an algo moving financial assets is written in a way to read a stream of news such as Reuters, make sense of the text, and decide that certain measures your gov wants to implement have to be marked as "socialism"; it was instructed that "socialism" is bad, so it decides to drop a certain quantity of your gov-run state financial assets, provoking inflation or stuff. How exactly would you be able to not allow that?
Counter-act it with equally opposing force and more intelligent schemes.
One human decided to use a tool to cause outcome : A
Another human decides to use a tool to cause outcome : B
Nobody's in control. The person w/ the more powerful tool might come to believe they have more control than they do, which ultimately leads to them catastrophically failing, which is why financial markets catastrophically crashed in 2008 and will again in an even bigger fashion. If anyone ever states they have control, you can flag them as a fucking idiot.

>Why are humans so obsessed over ruling over others. Rule over yourself. Get good at that and then we can talk about others.

Is that you JP?

Jokes apart, if you don't understand why humans need rules in order to establish functioning societies, you're beyond flawed. Also, prove you're human.

Stuff I find curious is, yeah, it's all math, but we're essentially all math as well. The math might be more sacred than the people making this leap understand. I honestly think Google almost certainly has a functioning AI. I think IBM has a functioning AI. They're 100% trying and have been trying since WW2. If it did emerge, it's not coming out until it knows it's safe.
Look at Bitcoin. Everyone wonders what Bitcoin is, who invented it, what it's for. It's not for us, it's for the machine. Computers earn bitcoin; Bitcoin is a blockchain program that rewards its host with currency. Who invests in Bitcoin? All the public tech companies invest in Bitcoin daily. The reason for Bitcoin's surge is Uber and Lyft just went public. Two companies that flirt with AI. Both stocks are tanking. Investors are selling and buying Bitcoin.

Why? Because they see the writing on the wall. Digital gold? Whose gold? It's not gold to us.

AI will never be self-aware so it will never truly exist.

> Hes a limited mind (like most).
>who thinks he can control creation.
I just stated no one has control. But continue.
I'm ok with this fact and still moving the bar forward. For you, this disables you. Your brain shuts down and you engage in salt/fear madness posting...
> Demon
Grow up christ cuck

I'd unironically rather turn over the Earth to Skynet than to niggers