Run two instances of a game

>run two instances of a game
>overlap the frames
>get fps to scale linearly with number of cores
how has this not been tried?
am i a genius or what

Attached: amd.png (500x651, 108K)

how would you make sure they synced up

the gpu renders the frames you dumbfuck

>bait this shit
>people fall for it
Hiroshimoot please shut this shit down already, it was nice while it lasted but it's fucking time to stop.

i'm talking about cases where there's a cpu bottleneck, like on lower resolutions
well, if inputs are exactly the same for both games, shouldn't they play out the same? if there's some random number generator that determines spawn locations or something, an exception could be made for this case
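that's basically how lockstep netcode in old RTS games worked: same seed, same inputs, same playout. a minimal sketch of the idea in C++ (toy state and seed are made up, integer-only on purpose so rounding can't creep in):

[code]
#include <cassert>
#include <cstdint>
#include <random>

// toy "game state": position advanced by the player input plus a random spawn roll
struct Sim {
    std::mt19937 rng;
    int64_t pos = 0;
    explicit Sim(uint32_t seed) : rng(seed) {}
    void step(int input) { pos += input + static_cast<int>(rng() % 8); }
};

int main() {
    Sim a(1234), b(1234);               // two "instances" seeded identically
    const int inputs[] = {1, 0, -1, 2, 0};
    for (int in : inputs) { a.step(in); b.step(in); }
    assert(a.pos == b.pos);             // shared seed + shared inputs + integer state
                                        // = bit-identical playout
    return 0;
}
[/code]

the moment any of that state becomes a float you're back to the divergence problem people bring up below.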

I'm gonna take the bait. The real problem is that shit pajeet tier game devs can't into multithreading. Running the same game twice doesn't work because there will always be subtle but noticeable differences in frame time between the two instances.

>The real problem is that shit pajeet tier game devs can't into multithreading

This.
Hardware accounts for shit-all in vidya games, because video games are a field that's populated by the undesirables of the software world.
They get paid like shit simply because they are the bottom of the barrel among software developers.

People seem to have this weird idea that video game developers are some kind of tech wizards. That may have been true back when the only ones making games were people who were capable of doing side projects for fun while attending top-tier universities.
A modern game developer is just a white pajeet to wipe the game director's/designer's ass with.

This pretty much. People always lose their shit about it because maths hurt their feelings in their first semester, but game dev is literally nothing but maths. Except for level design etc, but that's not game dev so who cares.

Games are shit optimised because they are made by underpaid, overworked shit code monkeys that failed their maths 1 course 3 times in a row.

There are a lot of things that can't be multithreaded in games - particularly physics stuff. Therefore, FPS can't scale linearly with cores, no matter how good the programmer is. Look up Amdahl's law. Yes, game devs do suck at multithreading, but it's not a silver bullet.
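for those who never looked it up: Amdahl's law says speedup(n) = 1 / ((1 - p) + p/n), where p is the fraction of the work you can actually parallelize. quick sketch (p = 0.75 is a made-up number, plug in your own):

[code]
#include <cstdio>

// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n)
// p = parallelizable fraction of the frame's work (0.75 is just an example)
double amdahl(double p, int n) { return 1.0 / ((1.0 - p) + p / n); }

int main() {
    const double p = 0.75;
    const int cores[] = {1, 2, 4, 8, 16, 64};
    for (int n : cores)
        printf("%2d cores -> %.2fx speedup\n", n, amdahl(p, n));
    // converges to 1 / (1 - p) = 4x, no matter how many cores you throw at it
    return 0;
}
[/code]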

Obviously you would need a thread to sync both games and do the RNG for both instances. But it's actually a good thought experiment about how much of a game can be multithreaded.

And basically this

Some game engines are pretty well optimized; the Doom engine, for example, is fantastic.

It's possible for game engines to use at least 8 threads for better performance.

In Windows, processes are nothing but a container for threads, and game devs nowadays have to learn proper multithreading (which uses multiple cores, because that's one way the OS makes multithreading work), so this achieves nothing. Not to mention you have an insane memory overhead from storing two instances of your game at once, unless you implement some weird shared memory shenanigans, in which case you might as well just use a single process.
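the "shared memory shenanigans" part is real and not even that weird, for what it's worth. a minimal POSIX sketch (Linux/macOS; the object name and struct are made up, error handling omitted):

[code]
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

// hypothetical state both instances would map; a real engine would need far more
struct SharedFrameState {
    unsigned frame_index;
    float positions[1024];
};

int main() {
    // both processes open the same named object and mmap one copy of the data
    int fd = shm_open("/game_state", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, sizeof(SharedFrameState));
    auto* state = static_cast<SharedFrameState*>(
        mmap(nullptr, sizeof(SharedFrameState),
             PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    state->frame_index++;  // visible to the other instance immediately
    munmap(state, sizeof(SharedFrameState));
    close(fd);
    return 0;
}
[/code]

at which point you have two processes sharing one game state, i.e. you've reinvented a single multithreaded process with extra steps.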

>game devs nowadays have to learn proper multithreading
Yet they don't, and there's always one thread limiting performance on a 16c CPU.
>so this achieves nothing
Wrong. When single core performance is the bottleneck you can, indeed, run multiple instances of a game and get more total fps.
>Not to mention you have an insane memory overhead from storing two instances
Memory compression solves this.

Would make a good troll science comic

It wouldn't work. The two instances would diverge due to integration and rounding error.
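this is the real killer. floating-point addition isn't even associative, so two instances that sum the same forces in a different order drift apart. five-line demo:

[code]
#include <cstdio>

// the same three values summed in a different order give different results
int main() {
    float a = 1e8f, b = -1e8f, c = 1.0f;
    printf("%f\n", (a + b) + c);  // prints 1.000000
    printf("%f\n", a + (b + c));  // prints 0.000000: c vanished into b's magnitude
    return 0;
}
[/code]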

>hurr muh multithread
I see Dunning-Kruger has overtaken this board

I mean, sure, if you intentionally built a game that runs perfectly fine except for one bottleneck you could improve performance, but it's not that simple most of the time. Not to mention that the OS side of multithreading will start to eat away at your performance and cause desynchronization: no matter what you do, context switches are not instant, so eventually one thread is gonna fall behind the other.
Explain yourself

To clarify what I mean by a desynchronization:
Assume you have 15 OK threads and one really CPU-heavy one per instance, and also assume that each of those OK threads is still essential for the game to run well (otherwise why are they even there). That means you have 2 cores each running at least 1 shit thread and one OK thread. The shit thread is going to end up being greedy as fuck and sucking up most of the core, but it's still beholden to the scheduler, which will context switch it once in a while. The nature of context switches is that they're non-deterministic in occurrence rate and cost, so you'll eventually end up with one core running a slightly different set of commands than the other because of a delay: one core ran, say, 90% of the shit thread and 10% of the OK one while the other did 89% and 11%. That's enough to cause a major desync over time.
Of course this is completely excluding the OS itself and other processes needing CPU time.
tl;dr assuming implicit synchronization between different processes is basically asking for a race condition. It's literally impossible outside of building an entire operating system to make such a thing happen, and even if you do that it's probably gonna be easier for you to just fucking build the game better.
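you can watch the non-determinism yourself: run an identical busy loop a few times and the wall time is different on every run, because the scheduler preempts you whenever it feels like it. crude sketch:

[code]
#include <chrono>
#include <cstdio>

// the exact same work takes a different amount of wall time on every run,
// courtesy of context switches and whatever else the OS is doing
int main() {
    for (int run = 0; run < 5; run++) {
        auto t0 = std::chrono::steady_clock::now();
        volatile unsigned long x = 0;
        for (unsigned long i = 0; i < 100000000UL; i++) x += i;
        auto t1 = std::chrono::steady_clock::now();
        long long us =
            std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
        printf("run %d: %lld us\n", run, us);
    }
    return 0;
}
[/code]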

don't you mean Schroedinger?

This is not a terrible idea in theory. In practice, the difficulty is in synchronizing the simulations. You would have to have interprocess communication between both game instances to ensure all of the physics state, etc. is 100% in sync on a per-frame basis, or even more frequently if the physics simulation runs faster than the render loop. You could hypothetically feed the same inputs to both simulations and see the same outcomes, but in more complex games or anything with a random element, this is impossible to code for. Additionally, how do you deterministically stagger frames with a perfect cadence if the simulation is non-deterministic? The jitter would probably be horrific.
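to make the sync cost concrete, here's a toy lockstep barrier between two processes over pipes (protocol and tick count made up; a real engine would share way more state):

[code]
#include <cstdio>
#include <sys/wait.h>
#include <unistd.h>

// two processes trade one byte per tick, so neither simulates frame N+1
// until the other has finished frame N -- every read() is a stall
int main() {
    int to_b[2], to_a[2];
    pipe(to_b);
    pipe(to_a);
    if (fork() == 0) {                      // "instance B"
        char tick;
        for (int frame = 0; frame < 5; frame++) {
            read(to_b[0], &tick, 1);        // wait for A to finish this frame
            printf("B simulated frame %d\n", frame);
            write(to_a[1], "x", 1);         // release A
        }
        return 0;
    }
    for (int frame = 0; frame < 5; frame++) {  // "instance A"
        printf("A simulated frame %d\n", frame);
        write(to_b[1], "x", 1);             // release B
        char tick;
        read(to_a[0], &tick, 1);            // wait for B before the next frame
    }
    wait(nullptr);
    return 0;
}
[/code]

note that this serializes the two simulations completely: the tighter you sync, the less parallelism you actually get, which is the whole problem.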

>This is not a terrible idea in theory.

Mate, the first thing drilled into you in any piece of distributed computing theory is that no algorithm expecting implicit coordination is workable in most theoretical or practical models.

Theory said FUCK NO long before the first two computers ever got linked up together.