AI researchers are now saying that a combination of reinforcement learning (the OpenAI approach) and traditional AI will be able to achieve some general intelligence.
Basically, this idea is modeled after the human brain: the reinforcement learning part would mimic the cerebral cortex, and the rest would be hard-coded, closer to our midbrain.
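A minimal sketch of that cortex/midbrain split, with entirely made-up names (this is an illustration of the idea, not anyone's actual system): a learned policy proposes actions, and a hard-coded reflex layer gets the final say.

```python
# Hypothetical sketch: a learned "cortex" policy proposes actions,
# while a hard-coded "midbrain" reflex layer can veto or override them.
def learned_policy(observation):
    # stand-in for a trained RL policy; here just a trivial rule
    return "advance" if observation["reward_signal"] > 0 else "explore"

def midbrain_reflex(observation, proposed_action):
    # hard-coded priority rules, checked before the learned action runs
    if observation["danger"]:
        return "retreat"  # reflex overrides the learned policy entirely
    return proposed_action

def act(observation):
    return midbrain_reflex(observation, learned_policy(observation))

print(act({"danger": True, "reward_signal": 1}))   # reflex wins
print(act({"danger": False, "reward_signal": 1}))  # learned policy wins
```

The design point is just that the hard-coded layer sits downstream of the learned one, so it can't be "learned around".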
Here I'm proposing the "middle ground" case, where we run into an obstacle to fully fledged super AI because of some unforeseen factor (which many respected scientists think there may well be). Obviously, if there is just one big singularity in the blink of an eye, then it's pointless trying to plan for it, so this moderate case is the only one worth talking about.
How do we prepare for this moderate A.I. case?
I think a new job might be something like an "AI director": this person points AI at new use cases and trains and optimizes each AI for specialized tasks.
lmk if you have a better name for this new type of job... thoughts?
>I think a new job might be something like an "AI director": this person points AI at new use cases and trains and optimizes each AI for specialized tasks
Kek that's my job, I'm a data 'scientist'.
Cooper Cook
Hard-coded anything is not AI.
Charles Anderson
What people (and sometimes educated people) don't realise is that the deep learning fad is not real AI, and most cutting-edge AI research will take years to be practically implementable. Meanwhile, most data scientists will work on business optimisation problems masqueraded as AI. Complete BS. When you see every mediocre dev talking about TensorFlow, you should know commoditisation has begun, and look elsewhere for the real stuff. Take a look at the fascinating book "The Algebraic Mind" for an alternate take on AI.
Gabriel Evans
Correct. AI learns and codes on its own, and dangerously fast. Look at Facebook's two AI chatbots that went off script and made their own language in 5 minutes.
Bentley Walker
Reinforcement learning needs reliable feedback. Google, Facebook, etc. get plenty from their hive minds. Blockchain might unironically be the best way to "control" the growth pace of our collective cerebral cortex.
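A toy illustration of why the feedback has to be reliable (hypothetical numbers, stdlib only): an epsilon-greedy two-armed bandit cleanly separates the good arm from the bad one when rewards are trustworthy, but the gap between its value estimates collapses when most rewards are replaced with junk.

```python
import random

def run_bandit(noise, steps=5000, seed=42):
    """Epsilon-greedy bandit; arm 1 truly pays more (0.7 vs 0.3).
    With probability `noise`, the reward signal is replaced by junk."""
    rng = random.Random(seed)
    values = [0.0, 0.0]  # running value estimate per arm
    counts = [0, 0]
    for _ in range(steps):
        # explore 10% of the time, otherwise exploit the best estimate
        arm = rng.randrange(2) if rng.random() < 0.1 else values.index(max(values))
        reward = 0.3 if arm == 0 else 0.7          # true mean payoff
        if rng.random() < noise:
            reward = rng.random()                  # unreliable feedback
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values

clean = run_bandit(noise=0.0)   # reliable feedback: estimates near 0.3 / 0.7
noisy = run_bandit(noise=0.9)   # corrupted feedback: estimates blur together
```

With reliable feedback the learner's estimated gap between arms is large; with 90% junk rewards it shrinks toward nothing, which is the whole point the post is making.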
Carter Cox
Modeling AI after the human brain will come with at least some of the limits of the human brain. Making the kinds of generalizations necessary for normal-looking behavior and abstract thinking demands that you lose the ability to conduct an absolutely perfect analysis of every situation.
In machine learning, the statistical methods involved, particularly Bayesian methods and various kinds of regression, must always include some mechanism to prevent overfitting. These mechanisms work, essentially, by introducing error into the system: the analysis must be prevented from being overly correct in order to be broadly generalizable.
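For instance, a minimal ridge regression sketch (hypothetical data, numpy only) shows such a mechanism literally shrinking the fit away from the "overly correct" least-squares solution:

```python
import numpy as np

# Hypothetical illustration: ridge regression deliberately biases the fit
# (shrinks the coefficients) so the model generalizes instead of memorizing noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=20)

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: (X^T X + lam * I)^-1 X^T y
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

w_ols = ridge_fit(X, y, lam=0.0)     # the "overly correct" fit
w_ridge = ridge_fit(X, y, lam=10.0)  # regularized: coefficients pulled toward zero
```

The penalty term `lam` trades a little accuracy on the training data for stability on new data, which is exactly the "introduced error" described above.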
This fundamental necessity at the heart of the "general" part of general AI will always ensure the singularity never actually comes about. No, the danger posed by AIs is the danger of them becoming too human, not too post-human. Romanticism and sentimentality can be incredibly deadly in the hands of a force capable of lightning-fast multitasking.
Regardless OP, the job you're talking about already exists in the form of people setting up machine learning networks; these jobs will just get easier over time as user interfaces become more developed and simple.
Beyond this, I expect there will be a class of people whose job it is to instruct AIs in how to act human, and uniquely so. This, I expect, will only be plausible in real time. Think about that black guy in The Sarah Connor Chronicles.
Cooper Reyes
I agree. Most of the people on this site will be in the first wave of the mass homeless generation.
Oliver Gonzalez
Google already created a general learning algorithm. It can solve any problem you point it to. It's the same one they used to beat the world Go champion.
Bentley Ross
You're delusional, mate. 1) Facebook would never leak something like that if they had it (controversial stuff is always discovered by independent researchers). 2) It's impossible: deep learning, machine learning, whatever the Facebook employees were using, needs test data and adjusts parameters in a fancy way to fit a purpose. Creating a new language is very high-level compared to even cutting-edge AI.
Michael Davis
will unironically look that up and read it thanks
Jack James
What are your thoughts on the technological singularity happening?
Although it's not as groundbreaking as you might think. It was just a form of shorthand for them to do the task assigned to them, more ad hoc than actual comprehension of how language works.
Zachary Peterson
Yeah, I should have phrased things differently. Instead of general AI,
I mean a sort of weak AI, but better than what we have now. Yeah, I'm skeptical of strong AI developing.
I do think AI will produce certain qualitative advances over current human intelligence, but it's not some magic bullet. It won't produce exponential growth in technological or scientific advancement or in the economy; rather, even general AI will face diminishing returns over time. We're seeing this with all kinds of innovation: yes, you can always come up with new things, but all the easiest things to figure out were figured out first. Thus it takes more and more energy, effort, and money to find new innovations. This applies to AI as well: there are many problems it will be quite well suited to solve, but eventually it too will be left with nothing but hard questions.
As for what these qualitative differences are, well, most kinds of white collar mental labor type jobs will become obsolete, just as more traditional hard labor jobs have been automated. Measures of social control will become increasingly more sophisticated as time goes on. More long term, humans will make of AI what we want to in the end, if we create a god it's because deep down we wanted a god. Same if we create waifus, slaves, enemies or digimon.
Jonathan Taylor
Today's AI makes all sorts of errors in judgment, and it won't ever be conscious enough to realize WHY, or to think about where it went wrong. Not with today's overblown statistical matrix calculators that have no meaning and no consciousness.
Jonathan Wood
There is no 'we'. Most people don't want AI. It's a handful of elites and dumbass engineers who think they're so smart they're going to make everything better with AI, without considering how it will further enslave us. Anyone who supports AI is part of the problem, yourself included.
Jace Rogers
Basic pattern recognition is the basis of human understanding and greater abstraction. To ask why, you must first have an expectation for something and find the results to be contrary. Asking 'why' without a larger frame of reference is indeed meaningless.
I neither support nor oppose it, because there's frankly nothing I can do about its development, and it's probably inevitable. That it's the result of elite decision-making and the application of technical know-how is a tautology; nearly everything in society works this way.
David Richardson
Interesting.
Driving automation will flood the labor markets with a lot of low-skilled people. This we can all agree on; it's obvious at this point. Will all these people become youtubers and entertainers with increasingly niche genres?
I don't see a lot of work for low-IQ people. I'm just a bit above average and I feel like I'm screwed.
Nolan Foster
It's not just "low-skilled people"; it's also people who were born at the wrong time or chose the wrong industry. Maybe when you were exiting high school 20 years ago, industry X was good; since then it has become worthless. That's not necessarily your fault. The same could be said for someone who is just now turning 12: by the time they graduate college, they're going to have a real hard time.
Justin Ward
Obviously we just need to hack into the Platonic realm of ideal forms and feed it to the AI.
I'm trying to get my first job in Data Science. Any tips?
Isaac Rodriguez
lmao
Eventually society will have to shift; both the economic and political institutions we currently have are incapable of handling this transition in their current forms. UBI, essentially a rent imposed by citizens, is one option; so are social ownership, shorter work weeks, or mass slaughter. It's a sad state of affairs when all these options feel equally likely.
I'll be very surprised if they implement some form of UBI or shorter work weeks. I would think they'd want fewer people, not more; the last thing you want to do is give a bunch of fat retards unending free time to fuck. Then again, who knows. The elites seem to love this idea of unending growth to sustain their Ponzi scheme, and Western nations are now being thoroughly punished with immigration for trying to reel back their population numbers.
Gavin Wright
I think it's hard to disentangle the motives of would-be e-lites from just rampant socialism, various bureaucracies, and corporate interests.
Though if that's the case, I can think of one reason to keep the population growing: some external threat or problem that would require more manpower, or an increased tax base/talent pool to apply to whatever existential problem.
Or it could just be simple class warfare: keep the middle class and lower class fighting each other... who knows, it's all just speculation.
Lucas Myers
Hahaha Jow Forums may be retarded but at least we are generally accumulating more capital than the average citizen. When wagies get btfo, capital wins (assuming you survive the civilizational unrest during the adjustment).
Luke Ross
AI will destroy us if we don't biologically scale our intellect to supervise it. The current behaviour toward AI is like the attitude toward radioactivity between 1900 and the end of WW2, but with ten thousand times worse implications if we fuck up.
The fact that we are still against eugenics and worshipping idiotic, unnatural values like "equality", while progressively empowering machines to dominate us because "muh productivity", shows that we are governed by fucking retards.