What is the advantage of having multiple cores instead of one big core?

Attached: unsure girl.png (379x205, 19K)

parallel instruction handling

it's feasible and a way to circumvent current technological and architectural limitations

Attached: 1498244158420.png (727x682, 168K)

We've reached peak architecture design at this point

more hentai videos can be played at the same time when one has more cores.

This is why the true /gentooman/ who has rid himself of disgusting 3D will have a Threadripper 32+ core CPU.

>more hentai videos can be played at the same time when one has more cores.
But who uses CPU rendering for their videos?

Literally everyone?

*uses the dsp on my gpu*
psssh nothing personel kid

Why do tits come in pairs instead of a single big one?

Attached: Thinking.png (614x614, 38K)

none. multiple cores only exist because we can't make a single core faster

Why not just... make it bigger?
actual question

because the GPU is also rendering hentai videos. The whole idea is that you have more anime tiddies.

making things bigger also makes them slower

Sadly, that is probably the greatest response to the question I have ever seen.

Distance = slow
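To put a number on it, here's a back-of-the-envelope sketch in C. The ~0.5c on-chip signal speed is an assumed round figure, not a measured one:

/* how far a signal can travel in one clock cycle */
#include <stdio.h>

int main(void)
{
    const double c = 3.0e8;      /* speed of light, m/s */
    const double v = 0.5 * c;    /* assumed on-chip propagation speed */
    double freqs[] = {1e9, 4e9, 10e9};
    for (int i = 0; i < 3; i++) {
        double cm_per_cycle = v / freqs[i] * 100.0;
        printf("%2.0f GHz: signal covers ~%.1f cm per cycle\n",
               freqs[i] / 1e9, cm_per_cycle);
    }
    return 0;
}

At 4GHz a signal only makes it a few centimeters per cycle, so the farther apart the transistors, the fewer of them you can reach in one tick.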

2 brains are better than 1

>Sadly

Makes sense, that would have been my guess.

Think of it like city planning. At a certain point you can no longer scale outward, so you build up instead, adding more stories and creating taller buildings. This is sort of how CPUs work. The process nodes for these things are getting so tiny (14, 10, and even 7nm) and drawing so much power and producing so much heat that we aren't really going beyond 4-5GHz any time soon. Silicon has pretty much been pushed to its limit as a material for building microprocessors. This is why there's a push to hack on architectural extensions for cool stuff like out-of-order execution and whatnot that speed up these CPUs greatly, but they do so internally. The same x86 or x86-64 architecture is still exposed to the end user's software.
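To make the power wall concrete, a rough sketch: dynamic power scales roughly as C*V^2*f, and voltage has to climb with frequency, so power grows much faster than clock speed. The voltage/frequency pairs below are made up for illustration, not taken from real parts:

/* why 6GHz on air cooling isn't happening: P ~ C * V^2 * f */
#include <stdio.h>

int main(void)
{
    /* (frequency GHz, assumed core voltage) - illustrative only */
    double pts[][2] = {{2.0, 0.9}, {4.0, 1.2}, {6.0, 1.5}};
    double cap = 1.0;  /* switched capacitance, arbitrary units */
    for (int i = 0; i < 3; i++) {
        double f = pts[i][0], v = pts[i][1];
        printf("%.0f GHz @ %.1f V -> relative dynamic power %.2f\n",
               f, v, cap * v * v * f);
    }
    return 0;
}

Tripling the clock here roughly octuples the power, which is heat you have to move out of a few square centimeters of silicon.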

This is where we are right now. If you ask me what the next revolutionary step in computing will be, it'll be massive software rewrites to remove bloat, and it'll be branded as "software from scratch" or "freshware" or some stupid shit that normies like. It'll involve the removal of interpreted meme languages, bloated frameworks, and other unnecessary shit that has built up over the years. These new operating systems and programs will go great with shiny new RISC CPUs and heavily optimized compilers. Something as shitty as a Raspberry Pi may someday outperform gayming desktops with carefully crafted software if Windows 10 and OS X and Linux keep bloating to hell.

TLDR - The future isn't faster hardware, but leaner software.

Attached: 1534769566323.jpg (727x734, 73K)

Totally

shit get

>it'll be massive software rewrites to remove bloat and it'll be branded as "software from scratch" or "freshware" or some stupid shit that normies like. It'll involve the removal of interpreted meme languages, bloated frameworks, and other unnecessary shit that has built up over the years.
this is an ivory tower meme propagated by experienced programmers who don't realize that most code is, and always will be, written by people with less intelligence and experience than they have

>leaner software
This.
Now, all you Faggots, go learn asm and C.

We can only dream that Uncle Bob's philosophy catches on.

Those digits...wasted

Uncle Bob's philosophy is to use Java, the king of bloated codebases

Been a while since I posted this

Attached: 1490474923106.png (1444x1705, 301K)

>muh digits
Grow up

t. zoomer

off-topic: what potentially viable material will allow us to break 6GHz with desktop air cooling? Or are we stuck with silicon for the foreseeable future?

what would be the difference between 1 core at 4 GHz vs 2 cores at 2 GHz?

quantum computers

So why not make them a cube? It would be more or less the same distance from end to end.
Heating issues?

>Heating issues?
yes

>posting on imageboards
grow up

>gpu decoding

Attached: 1509901230631.jpg (600x961, 65K)

Attached: 1526773082575.jpg (1093x819, 270K)

This is only partially true. What needs to happen is for a bunch of really smart software engineers to get together and create a computer company that sets out to make only the best computers. Imagine the Applel "just works" ecosystem and UI polish and consistency, with the autistic minimalism of Plan9 as the base system, a microkernel like seL4 underneath, and some decent RISC hardware. This is how you make a kickass computer. Add custom graphics hardware with unified memory, and build custom boot firmware that does nothing but find a disk or a kernel and boot it, without spergy EFI shit.

As far as retarded koders go, just give them simple tasks to work on. If they do something wrong, explain why and tell them to fix it. If they keep fucking up, they're out of a job. It's that simple. Current software suffers from a lack of oversight and management, whether it's internally at a large company or with small freetard projects. Someone just needs to tell these fools "no" when they have bad ideas. I think Steve Jobs was a shitbag but he was really good at telling idiots no.

>Or are we stuck with silicon for the forseeable future?
I'm not really into the chemical side of stuff. I'm more into EE and software, so I have no idea what alternatives to silicon would actually work. I'm pretty sure we're stuck with silicon for a while, at least in consumer and most enterprise stuff.

Kek

Attached: DLD7z61WsAADoMY.jpg (720x718, 32K)

to get the same amount of data processed as multiple cores, a single core would have to clock significantly faster than what's the norm. Faster clock speeds = more heat and more wear and tear on your hardware

Attached: 1531052628479.jpg (900x1200, 164K)

Attached: g in a nutshell.png (1098x265, 73K)

>This is only partially true. What needs to happen is a bunch of really smart software engineers need to get together and create a computer company that sets out to make only the best computers
that's what actually happens, though. Then, after they have a taste of success, you get the other side of the equation: greed, pride, and complacency. Human nature is what prevents software from reaching its theoretical peak. Nothing stops it from happening but people themselves, and that's the most realistic limiting factor there is. If you fired every mediocre programmer, most of the software people are using just wouldn't exist

>"9.6GHz"
No, the cores all run at the same speed, and even if you COULD make a 100 percent, fully parallelizable task, which you can't, it's not correct to take the completion time and work backwards to an arbitrary effective clock speed.

Cores are cores, frequency is frequency.

it's a bait image, and you fell for it

>9.6Ghz
Fucking chill; no need for this

that pic is old as shit man
welcome to Jow Forums

Attached: gensokyocertificateofexcellence.jpg (2500x1874, 1.3M)

When doing a computation that can be split into two cores efficiently, there is no difference. When doing a computation that cannot, the second core is useless, and the 4 GHz is twice as good.
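Here's that tradeoff as a quick Amdahl's law sketch in C (the serial fractions are example values, not measurements):

/* speedup on n cores with serial fraction s: 1 / (s + (1-s)/n) */
#include <stdio.h>

static double amdahl(double s, int n)
{
    return 1.0 / (s + (1.0 - s) / n);
}

int main(void)
{
    double s[] = {0.0, 0.1, 0.5};  /* 0%, 10%, 50% serial code */
    for (int i = 0; i < 3; i++) {
        /* both measured relative to one core at 2 GHz */
        printf("serial=%2.0f%%: 2 cores @ 2GHz = %.2fx, 1 core @ 4GHz = 2.00x\n",
               s[i] * 100.0, amdahl(s[i], 2));
    }
    return 0;
}

With zero serial code the two come out even; with any serial fraction at all, the single 4GHz core pulls ahead.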

One fast core is better than multiple slower ones. The reason we have multicore machines is that we don't know how to make cores any faster, so we have to settle for the next best thing.

No obvious difference; even multithreaded processes can be run asynchronously on a single CPU. For example, you are currently running a large number of processes on a significantly smaller number of cores; they are basically just taking turns executing parts of themselves. Having more physical cores could eliminate some of this overhead, because your OS would have to do less work multiplexing everything onto the hardware, but I'm not sure how much of an actual improvement that would be. Another advantage is that with multiple smaller cores you get lower power requirements and less heat generation, and can therefore fit more transistors on a die, or something like that. Ultimately, the fact that they exist means that more cores are cheaper to produce reliably for the same amount of power, and the market does the rest.
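If you want to watch the time-slicing happen, here's a minimal POSIX sketch (build with cc demo.c -pthread): eight threads all make progress regardless of how many cores you have, because the scheduler just interleaves them.

/* more runnable threads than cores: the OS multiplexes them */
#include <stdio.h>
#include <pthread.h>
#include <unistd.h>

static void *worker(void *arg)
{
    long id = (long)arg;
    for (int i = 0; i < 3; i++) {
        printf("thread %ld: slice %d\n", id, i);
        usleep(1000);  /* give up the core; scheduler picks another thread */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[8];
    for (long i = 0; i < 8; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 8; i++)
        pthread_join(t[i], NULL);
    printf("all 8 threads finished, whatever the core count\n");
    return 0;
}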

>after they have a taste of success you get the other side of the equation: greed, pride, and complacency. Human nature is what prevents software from reaching its theoretical peak
Why can't there be a company that does things differently? If I wasn't so fucking lazy I could start it today by building the exact software I shitpost about.

>If you fired every mediocre programmer, most of the software people are using just wouldn't exist
You haven't convinced me that this is a bad thing.

>Why can't there be a company that does things differently?
Human nature. Some people can overcome these things; most people can't. If you're an elitist who thinks only well-made software should exist even if it's to the detriment of society as a whole, I'm not sure you're altruistic enough to do it

>Why can't there be a company that does things differently?
Because companies can only survive by selling what people want to buy. And people want to buy features and the latest pieces of shiny, and don't give a flying fuck about what we would consider quality software.

Which means that the only organizations that aim at all for what we call quality software are the freetard people; apropos, the only place you generally find software aimed at engineering quality is in the free software world. (Not ALL of the free software world by any means; most free software is almost as feature-focused as commercial software. But some of it, anyway.)

most free software is shit because the developers don't care about what their users want (or whether they even have any users at all)

Bigger dies also mean lower yields.
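The usual back-of-envelope for that is the Poisson yield model, Y = exp(-A*D). Quick sketch in C (the 0.1 defects/cm^2 is an assumed round number, not any fab's real figure; build with cc yield.c -lm):

/* bigger die, same defect density, exponentially worse yield */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double D = 0.1;              /* defects per cm^2, assumed */
    double areas[] = {1.0, 4.0, 8.0};  /* die area in cm^2 */
    for (int i = 0; i < 3; i++)
        printf("%.0f cm^2 die: ~%.0f%% of dies are good\n",
               areas[i], exp(-areas[i] * D) * 100.0);
    return 0;
}

Split the same silicon area across several smaller dies and far more of them come out working.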

Well that explains dual-core systems, because I have two hands. So I guess we need a core for each hand.

But why make quad+ core CPUs instead of just gigantic dual-core CPUs?

cows have 4 tits and they have the best milk

Basically this. Even if by chance you do get a good architect who designs a system meticulously and makes constant adjustments so that the system works nicely and stays easy to maintain, inevitably some pajeet will come along and bang out shit code that barely does what it's supposed to, but does it extremely fast. Business will get a boner for this because low cost = more money (which makes sense) and will try to work around the set procedures by going directly to people like this to implement their own agendas (which doesn't make sense), making your architecture more fucked with every hack until it eventually succumbs to aids. And that's if it were even possible to make a perfect architecture, which it's not.

because tits are actually just a pair of buttcheeks, but on the front of the chest, so you can see a woman's sexy "butt" no matter what direction she is standing in. it's evolutionary

every 2 cores is an additional chick, obviously

1) when you have a bunch of tasks that don't directly affect the behavior of each other, there's no reason they should have to run one-after-the-other

2) when your transistors are the size of a few dozen atoms, the distance between transistors actually makes a huge difference in latency, so making one really big, fast core would result in one big, fast, horribly inefficient core

3) Speed, power, and thermals are all being pushed to the limits of existing technology. This can be circumvented by using lots of tiny cores instead of trying to make one gigantic core.

joke's on you, I'm working on an embedded OS that runs user code in a VM

>Because companies can only survive by selling what people want to buy
Why couldn't well-written, minimalist (in the non-autistic, non-Arch-ricer sense) software be a major selling point? It would be fast, secure, and stable. Simple and stable APIs would make things easier on developers (see NeXTSTEP), and the system wouldn't throw drastic UI and other changes at the users. Sometimes steady wins the race, as seen with so many companies and governments clinging to Windows XP to this very day.
>and don't give a flying fuck about what we would consider quality software
I consider microkernels to be quality software, and they have practical benefits aside from showing off your internet penis to Jow Forums in screenfetch threads. A well-made microkernel system would allow various system components to be stopped, replaced with a newer version, and then started again. System updates could take place without a reboot or hours of spinner screens. I see normies bitch about this all the time with Windows 10. Imagine their joy when they never have to see "restart for updates" ever again. The biggest inconvenience to the user might be a few seconds of a blank display if the display server or related processes like GPU drivers are updated, but they wouldn't lose a single bit of work they had open.

The system would also be self-healing. If one process crashes, like a driver or something, it doesn't bring down the whole system; another kernel process can just restart it automatically. Isolation of kernel and other processes has security benefits too, in that it's harder to hack a distributed network of programs unless you can find a vulnerability in the way they communicate. Luckily, projects like seL4 have made inter-process communication between the microkernel servers as minimal as possible, which kills two birds with one stone by also fixing the performance issues seen in other microkernel systems like Mach.
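A minimal userland sketch of that self-healing loop, with ordinary POSIX processes standing in for microkernel servers (the server path is made up):

/* supervisor: restart the "driver server" whenever it dies */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    for (;;) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            /* child: become the server (hypothetical binary) */
            execl("/usr/bin/some-driver-server", "some-driver-server",
                  (char *)NULL);
            perror("execl");  /* only reached if exec fails */
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);  /* block until the server dies */
        fprintf(stderr, "server died (status %d), restarting\n", status);
        sleep(1);                  /* crude backoff between restarts */
    }
}

In a real microkernel the restart would be done by a privileged server over IPC rather than fork/exec, but the principle is the same: the crash is contained, observed, and recovered from.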

>I'm working on an embedded OS that runs user code in a VM
Android?

Attached: 1534450743390.jpg (1148x746, 193K)

based apuposter

>RISC
Wish this meme would die. The idea of "twice as many instructions, where every instruction does half as much work" hits clock frequency, memory access, instruction fetch, and thermal walls much sooner than CISC. This is why instruction sets are getting larger, not smaller; otherwise we'd all be using OISC by now.
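For a concrete version of the instruction-count argument, take a read-modify-write in C. The assembly in the comments is simplified from memory, not actual compiler output:

/* one CISC instruction vs a classic RISC load/modify/store sequence */
void bump(int *a, int i, int x)
{
    a[i] += x;
    /* x86-64 (CISC):  add dword ptr [rdi + rsi*4], edx    ; 1 instruction
     *
     * RISC-V (RISC):  slli t0, a1, 2     ; scale the index
     *                 add  t0, a0, t0    ; compute the address
     *                 lw   t1, 0(t0)     ; load
     *                 add  t1, t1, a2    ; modify
     *                 sw   t1, 0(t0)     ; store    ; 5 instructions
     */
}

More instructions means more fetch bandwidth and more i-cache pressure for the same work, which is the wall anon is pointing at.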

>Android
No, it's my own thing, but I guess it has some things in common with Android. The VM is embedded inside the OS itself, which is similar to Android, but the filesystem uses a structure that's all my own design, and it uses protothreads instead of supporting true multithreading. I guess you could say it's what Android might look like if it were designed to run on a 100MHz microcontroller.
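Protothreads, for anyone who hasn't seen them, are basically a switch-on-saved-line-number trick. A minimal sketch in C (the PT_* names follow the well-known Dunkels style; this is illustrative, not anon's actual code):

/* each "thread" is a function that resumes from a saved line number */
#include <stdio.h>

struct pt { unsigned short lc; };  /* saved resume point */

#define PT_BEGIN(p)  switch ((p)->lc) { case 0:
#define PT_WAIT_UNTIL(p, cond) \
    do { (p)->lc = __LINE__; case __LINE__: \
         if (!(cond)) return 0; } while (0)
#define PT_END(p)    } (p)->lc = 0; return 1;

static int counter;

/* runs until a wait fails, returns, and resumes there on the next call */
static int blinker(struct pt *p)
{
    PT_BEGIN(p);
    while (1) {
        PT_WAIT_UNTIL(p, counter % 3 == 0);
        printf("tick at counter=%d\n", counter);
        PT_WAIT_UNTIL(p, counter % 3 != 0);
    }
    PT_END(p);
}

int main(void)
{
    struct pt p = {0};
    for (counter = 0; counter < 10; counter++)
        blinker(&p);  /* cooperative scheduling: no stacks, no preemption */
    return 0;
}

No per-thread stacks and no context switches, which is why they fit on a 100MHz microcontroller.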

wtf i came to gensokyo to fuck fairies not become sysadmin

Attached: sad remilia.png (243x266, 111K)

Here's the thing. RISC is nice because it pretty much follows the Unix philosophy as well as hardware is capable of: hardware is hardware, and software is software. This goes back to the old joke that RISC stands for Relegating Interesting Stuff to the Compiler. When you put the optimization work into software, you can simplify the hardware and end up with a CPU package that's easier and cheaper to produce than CISC, in both development and manufacturing costs. Even if it is slower, a 2-3GHz chip isn't really an issue when you can add things like specialized memory and graphics subsystems that, paired with software written specifically for them, deliver incredible performance for things like 3D animation. Look at what SGI did. The MIPS chips they used were pretty fast, but not as fast as CISC chips by the early 2000s. It didn't matter: SGI machines continued to be used for video production and the like because of specialized hardware that had almost nothing to do with the processor.

That's how you build a fantastic computer.