CAVE virtual reality fuck up

Hello Jow Forums and /sci/, since I'm posting this thread in both of these places.
I work on a team that is building a CAVE virtual reality system, and I have a major issue that has to do with the way cameras and optics work.
Basically the idea is that you are put in a room, your position is tracked and mapped to the positions of cameras inside a game, and whatever those cameras see is shown on the walls. Basically VR without glasses, a cool mixed reality experience.
It has been done a bunch of times before but mostly without tracking your position or with the same errors we currently have.
Essentially the problem I am trying to solve is twofold: first the actual problem, and second proving to others that this is an actual fucking problem.
What is going on is that there are world cameras that look far into the game world, and also cameras bound to a game object whose dimensions match the real room. Whenever we move, both sets of cameras move with us, and the outer cameras that look at the game world adjust their angle relative to the inner cameras so that the image fits the wall; otherwise you get bending and mapping issues with warping effects.
What happens, and what I am trying to prove, is that the perspective ends up reversed: the closer you get to objects outside of the room box, the farther away they become. This happens because the outer camera's angle gets a lot wider than it originally was while the objects stay the same size, and then that wider-angle image is remapped onto the same-size wall. Essentially objects increase in size and the distances between them get larger.
Do you know what this sort of optics problem is called? Have you ever encountered it? Maybe you can explain what is going on in a clearer, more objective way? Is there any way to solve it, in your opinion?
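To put numbers on how much wider that outer angle gets, here is a tiny standalone C# sketch (the wall size is a made-up assumption, not our actual room) of the symmetric FOV a wall camera needs so its image exactly spans the wall:

using System;

class WallFov
{
    static void Main()
    {
        double wallHalfWidth = 2.0; // assumed 4 m wide front wall
        foreach (double d in new[] { 2.0, 1.0, 0.5, 0.25 }) // player-to-wall distance, metres
        {
            // FOV that makes the frustum exactly span the wall
            double fovDeg = 2 * Math.Atan(wallHalfWidth / d) * 180 / Math.PI;
            Console.WriteLine($"{d:F2} m from the wall -> {fovDeg:F0} deg FOV");
        }
    }
}

At 2 m that is a 90-degree camera; at 0.25 m it is already around 166 degrees, which is the widening this whole thread is about.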

Attached: Perspective issue.png (2439x1235, 139K)

Other URLs found in this thread:

songho.ca/opengl/gl_projectionmatrix.html
unicave.discovery.wisc.edu/
github.com/livingenvironmentslab/UniCAVE/blob/master/UniCAVE2017/Assets/UniCave/Scripts/ProjectionPlane.cs

bump

maybe you're using ultra-wide lenses that produce that decompression effect; I'm thinking 6 to 10 mm. Maybe you could compensate for it with the relative distances between the object and the user?

nvm i figured it out

What's the green dot?

faget
The green dot is an object; imagine a sphere.
Look at the far plane: the size of the far plane compared to the sphere, and the size of the far plane compared to the inner box.

Now look at the green sphere's size compared to the far plane and to the inner box when the player gets closer to it.
The box, which is the room itself, obviously stays the same size; the far plane becomes larger and the sphere stays the same size. Since the far plane is projected onto the unchanging room wall, the sphere shrinks, because a wider-angle view of the far plane is being forced onto the wall, and compared to that far plane the ball is now smaller.
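To check that with actual numbers, here is a standalone C# sketch (all sizes are assumptions for illustration). The wall width cancels out of the math, and the sphere's drawn radius on the wall comes out as r * d / (d + b) for a player d metres from the wall and a sphere b metres beyond it:

using System;

class SphereOnWall
{
    static void Main()
    {
        double sphereRadius = 0.5; // the green dot
        double behindWall = 3.0;   // sphere sits 3 m past the wall
        foreach (double d in new[] { 3.0, 2.0, 1.0, 0.5 }) // player-to-wall distance
        {
            // fraction of the widened image the sphere covers, remapped onto
            // the unchanged wall; the wall width cancels out of the formula
            double onWall = sphereRadius * d / (d + behindWall);
            Console.WriteLine($"{d:F1} m from the wall -> sphere drawn with radius {onWall:F2} m");
        }
    }
}

So as you walk toward the wall, the drawn sphere shrinks from 0.25 m to about 0.07 m, exactly the effect described above.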

So you are saying that I should actually scale the distances between the objects to negate the camera lens effects?
Like actually warp the world together with the shift in FOV so that it stays accurate?
That might work, but it seems incredibly complex and hard to figure out. I will try something, but I would prefer a solution that does not require bending the scene.

I understand your problem now, but I find it hard to imagine that the problem would be intrinsic. In a real-world situation, there is clearly an analogous plane to your projection walls, which the light from the objects would have to traverse to reach the observer's eyes in exactly the same way as your walls are meant to simulate. If it doesn't work out that way, you probably have some assumption wrong in your abstraction of that system, and should think it through from more basic principles.

And just to be clear, exactly what you're doing wrong is quite hard to tell from here. Perhaps you just have some simple bug in your frustum values. Point being that I don't think there is an intrinsic, unsolvable problem, unlike what you seem to want to imply.

The reason I think there is a more universal problem is that I downloaded several other similar Unity projects, and watched videos from a studio that had a setup like this and worked with big-name brands.
They clearly had similar issues, but they tried to mask them by putting the scene in a smaller space.

>perons view

Attached: 43439314_245633346297525_9004580836518920192_n.jpg (457x400, 26K)

Additionally, this is not analogous to a real-life situation. The thing is that in real life the camera angles would be fixed and would not increase their field of view depending on the distance from the camera to the wall; a 4-camera setup would always be fixed at 90-degree angles. What is happening here is similar to the dolly-zoom effect you sometimes see in Tarantino movies, where the camera moves toward the object while decreasing its FOV, so it looks like the object is not moving while the area around it becomes wider or narrower.
But if we did it like the real world, where the FOV does not change and stays static regardless of movement, you would get warping errors in the middle of the wall where the camera images meet, essentially the same issues you get when mapping a cylinder onto a cube.

Well, whether or not the problem is intrinsic, you can obviously reach a solution by mathematical means. Do the manual pen-and-paper calculation of where points of an object would end up on the analogous wall plane in a real-world scenario, then do the equivalent calculation for the same vertices with your virtual cameras and see where and why the calculations differ.
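The real-world half of that calculation is just a ray-plane intersection; a minimal standalone C# version of it (coordinates are made up for illustration):

using System;

class WallIntersection
{
    static void Main()
    {
        // front wall is the plane z = wallZ; eye and object in room coordinates
        double wallZ = 2.0;
        var eye = (x: 0.5, y: 1.6, z: 0.0);
        var obj = (x: 1.0, y: 1.0, z: 5.0); // the "green dot", beyond the wall

        // parametric line eye + t * (obj - eye); solve for z = wallZ
        double t = (wallZ - eye.z) / (obj.z - eye.z);
        double hitX = eye.x + t * (obj.x - eye.x);
        double hitY = eye.y + t * (obj.y - eye.y);
        Console.WriteLine($"paint the wall at ({hitX:F2}, {hitY:F2})");
    }
}

If your virtual cameras put that point anywhere else on the wall, the difference tells you which assumption is broken.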

yeah, what is this? I'm not into VR development, but curious about this term.
what is perons?

Pal, I've been studying optics for gaymin' for a couple of years, specifically AR and VR stuff. What you are trying to do is analogous to mapping a sphere or a cylinder onto a cube; it is impossible. In the real world your field of view stays the same regardless of the distance between you and the object, while in your case the field of view changes with the distance to the wall. The alternative is to lock the camera angles, but then you run into the good ol' issue of trying to map a sphere, or in your case a cylinder, onto a cube perfectly. Basically no solution exists.

It's probably a typo of "person".
It's also a surname, usually associated with Juan Domingo Perón, a third-position politician/general in Argentina

yeah, that's why it's hard; as you say, everyone runs into this problem when working with perspective

I thought it had something to do with Peronism; in my language "peron(as)" means a platform where train passengers get on and off
I find it astounding that people makes typos.
t. never had typo in my life

>What you are trying to do is analogous to mapping a sphere or a cylinder to a cube, it is impossible
Why would that be the case? Why not just treat the wall as an actual plane in the real world, and render the world on it?

Because read the rest of the post. All the problems arise from the fact that the field of view does not dynamically change with distance in the real world.

Not the viewer's field of view, of course, but the imaginary field of view on a plane between the viewer and what he's viewing would obviously change with the viewer's position.

dude, lol... just use sphere...

This is what would actually happen in the real world, but it is as impossible to map onto 4 walls as a cylinder would be. I don't know how else to explain this; try to grasp what I am saying. It just doesn't work.

Attached: Untitled.png (877x821, 19K)

I wish.
I can't use a sphere.
If we use a sphere then we are locked to a certain kind of environment.
We need a solution that works for various kinds of rooms and corridors and shit.

how hard would it be to project a square onto a sphere?

>people makes typos
>t. never had typo in my life
Congrats, you just made two

Attached: unnamed.gif (250x241, 1.38M)

Along the line from any point in the same "quadrant" as the green dot to the viewer, there is an intersection with the wall plane. How are you saying that painting the wall at that intersection point in the appropriate color would differ from viewing the object IRL?

where's that grammarbro when you need him
>[Have] never [made] [a] typo in my life[.]

I don't know what other answers you have gotten in this thread, but in principle this is not that difficult to solve.
You just need to create a projection matrix from the right parameters: left, right, top, and bottom (when applied to this kind of matrix: songho.ca/opengl/gl_projectionmatrix.html) need to be the positions of the corners of the room as mapped onto your near plane. To do that, divide the positions by the distance of the wall plane from the camera and multiply by the distance of your near plane.
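As a hedged Unity-flavored sketch of that recipe for the simplest case, a wall perpendicular to the camera's view axis (the field names are mine, not from any particular project; Matrix4x4.Frustum needs a reasonably recent Unity, otherwise build the matrix by hand from the linked page):

using UnityEngine;

public class OffAxisWallProjection : MonoBehaviour
{
    public Camera cam;          // the camera rendering this wall
    public float wallDistance;  // camera-to-wall-plane distance
    public float wallLeft, wallRight, wallBottom, wallTop; // wall edges in camera space

    void LateUpdate()
    {
        float n = cam.nearClipPlane, f = cam.farClipPlane;
        // map the wall edges onto the near plane: divide by the wall's
        // distance from the camera, multiply by the near plane distance
        float s = n / wallDistance;
        // asymmetric (off-axis) frustum, same form as on the songho.ca page
        cam.projectionMatrix = Matrix4x4.Frustum(
            wallLeft * s, wallRight * s, wallBottom * s, wallTop * s, n, f);
    }
}

The important part is that left/right/top/bottom move around as the tracked head moves, instead of keeping a symmetric frustum and only widening the FOV.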

impossible to do accurately, for the same reason it's impossible to make an accurate map of the Earth that respects both the distances between points and the overall shape of the world

This is correct behavior, even though it's counterintuitive. I also work on CAVE systems and noticed the exact same thing, but it turned out to be correct.

To be fair, grammar errors are different from typos. It's not a "typo" if you write what you actually thought to yourself.

Thanks a lot man, I will check it out.

But how can it be correct if it looks wrong?
Why DOES it look wrong?

wow, a thread with real tech-related discussion. A rare sight nowadays.

Attached: 1515905070564.jpg (160x213, 7K)

>implying this was a typo

Heck, I might try this at some point. You don't even need projectors and motion tracking; you can model it by putting a camera inside a box of quads, with the sides textured by the 4 game cameras. The only problem you'll find is that humans have two eyes, so in reality the effect won't look quite right unless you halve the frame rate and use those flickery glasses (or use two projectors per wall with polarising filters and those IMAX cinema glasses).
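Something like this hedged Unity sketch per wall, if anyone wants to try (names are made up): render each game camera into a RenderTexture and put it on the matching side of the box.

using UnityEngine;

public class VirtualCaveWall : MonoBehaviour
{
    public Camera wallCamera; // one of the 4 game cameras
    public Renderer wallQuad; // the matching side of the box

    void Start()
    {
        var rt = new RenderTexture(1024, 1024, 24);
        wallCamera.targetTexture = rt;      // camera now draws into this texture
        wallQuad.material.mainTexture = rt; // the quad displays what the camera sees
    }
}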

This is not correct behavior; it's the easy way out. It is mechanically correct, but the result looks wrong to the eye. You are essentially creating a strange lens/prism box with this method.
As I said, for it to be correct the field of view the person sees must not become wider as you get closer.
There is a perspective distortion going on. Imagine yourself as a human with your 160-degree FOV or whatever it is. If you were in a glass box, you would be seeing a 160-degree image in your brain, and the glass box would only register as an object, just a frame in the larger image you are seeing in your head. What happens in a scene like this is that your IRL 160-degree-FOV eyes are looking at a wall rendered with a 90-degree FOV, and the FOV of that plane increases the closer you get.
In a perfect CAVE with perfect eye-position-based tracking it would be possible to create a more convincing simulation, but not in a setup like this.

Our current setup uses 3 Kinects, one for each axis of the 3D space.
Sadly Kinects are shit and there is no proper Kinect 2 SDK applicable here.

Attached: 20181127_183342_HDR.jpg (4160x2080, 3.61M)

ah, so we need to invent 3D VR glasses then.

If you have Unity, check out the project I work on, UniCAVE: unicave.discovery.wisc.edu/
It's kind of a mess right now, but it does the projection correctly, and Unity is easy to use.

In the case of what you specifically mentioned: it is normal that farther objects appear farther apart. It is counterintuitive, but think about it like this: imagine two points one centimeter apart on a wall; as you get close to the wall, the points will appear to move apart. So in essence, when objects are farther apart on the wall, they appear closer together to you, if that makes sense. You may have other projection problems, but that isn't one.
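You can sanity-check that with a throwaway C# calculation (numbers assumed): the angle between two points 1 cm apart on the wall, seen from different distances.

using System;

class AngularSeparation
{
    static void Main()
    {
        double gap = 0.01; // 1 cm between the two points on the wall
        foreach (double d in new[] { 2.0, 1.0, 0.5, 0.1 }) // metres from the wall
        {
            // angular size of a gap centered in front of the viewer
            double degrees = 2 * Math.Atan(gap / (2 * d)) * 180 / Math.PI;
            Console.WriteLine($"{d:F1} m away -> {degrees:F2} deg apart");
        }
    }
}

From 2 m the points are about 0.29 degrees apart; from 10 cm they are about 5.7 degrees apart, so stepping toward the wall spreads them out exactly as described.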

Use this as a reference for asymmetric projection.

github.com/livingenvironmentslab/UniCAVE/blob/master/UniCAVE2017/Assets/UniCave/Scripts/ProjectionPlane.cs
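For reference, the math this kind of script implements (the standard generalized perspective projection for head-tracked screens) boils down to roughly this hedged Unity sketch; the corner/eye fields are my own naming, not the UniCAVE API:

using UnityEngine;

public class TrackedWallCamera : MonoBehaviour
{
    public Camera cam;
    public Transform lowerLeft, lowerRight, upperLeft; // wall corners in world space
    public Transform eye;                              // tracked head position

    void LateUpdate()
    {
        Vector3 pa = lowerLeft.position, pb = lowerRight.position,
                pc = upperLeft.position, pe = eye.position;

        Vector3 vr = (pb - pa).normalized;  // screen-right
        Vector3 vu = (pc - pa).normalized;  // screen-up
        Vector3 vn = Vector3.Cross(vr, vu); // from the eye toward the wall

        Vector3 va = pa - pe;
        float d = Vector3.Dot(va, vn);      // eye-to-wall-plane distance
        float n = cam.nearClipPlane, f = cam.farClipPlane;

        // off-axis frustum edges: wall corners scaled onto the near plane
        float l = Vector3.Dot(vr, va) * n / d;
        float r = Vector3.Dot(vr, pb - pe) * n / d;
        float b = Vector3.Dot(vu, va) * n / d;
        float t = Vector3.Dot(vu, pc - pe) * n / d;

        cam.transform.position = pe;
        cam.transform.rotation = Quaternion.LookRotation(vn, vu);
        cam.projectionMatrix = Matrix4x4.Frustum(l, r, b, t, n, f);
    }
}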

Projection/math-wise that is perfectly correct.
Reprojecting your walls onto a sphere would give the correct, expected image.
The reason for the mess-up is that your vision is 3D and your mind tries to mix in the distances, and since this projection doesn't preserve distances it feels weird.
As silly as it sounds, you can try closing one eye and see how it looks then.
What you're trying to do is only possible by directly controlling what each eye sees, aka VR headsets.

>There is a perspective distortion going on.
But why? All your FOV imagining doesn't explain why painting the glass walls with the color of the light that passes through them on the way to the observer's eyes would look different from seeing the original light.