Do you think it would be theoretically possible to have a "vector" monitor screen?

How would this screen work? Is it even possible or are we limited to the confines of modern pixels and RGB? Would the screen be black and white or could color be incorporated? How powerful would our hardware need to be? Would this create photorealistic screens? At what point will the screen just be a window to a new world?

Attached: 1553491361293.jpg (1600x731, 348K)

There's no point. Vector displays made sense when things were analog. TVs were just shooting a beam onto a screen of phosphors. You could probably rework any CRT into a vector display with enough electrical know-how.
Everything today is digital. It's far more efficient and accurate than analog in basically every situation.

Why do we use pixels then? Wouldn't analog be free of any sort of resolution degradation? The only way I see pixel-based screens reaching that is if the pixels were atom-sized.

CRTs can only move a beam so fast. How fast the beam can move left to right, top to bottom on a screen limits the resolution.
Digital doesn't have this issue. The limiting factor on digital screen resolution is how fast you can transfer 1s and 0s.
Old vector CRTs are only as sharp as the beam. A beam can be blurry and out of focus, or it can be focused down to a very fine area.
Pic related is a camera viewfinder from the 90s. It has a resolution of 640x480i at less than 1 inch diagonal.
Modern 4k and 5k displays are more than enough to be as sharp as a vector display from normal viewing distances.

Attached: FPYH31FIDZSFA3G.LARGE.jpg (1024x576, 62K)
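
As a rough sketch of both points (all numbers here are illustrative assumptions: progressive scan, no blanking intervals, and a hypothetical 27-inch 4k panel at 30 inches):

```python
# Back-of-the-envelope numbers, not measurements. Assumes progressive scan
# with no blanking intervals, and a hypothetical 27" 4k panel (~163 PPI).
import math

def line_rate_hz(lines: int, refresh_hz: float) -> float:
    """Horizontal scan frequency: the beam must redraw every line each frame."""
    return lines * refresh_hz

print(f"480p60 needs a  {line_rate_hz(480, 60) / 1e3:5.1f} kHz sweep")
print(f"2160p60 needs a {line_rate_hz(2160, 60) / 1e3:5.1f} kHz sweep")

def pixels_per_degree(ppi: float, distance_in: float) -> float:
    """Pixels subtended per degree of visual angle at a given distance."""
    return ppi * distance_in * math.tan(math.radians(1.0))

# ~60 px/deg is a common rule of thumb for the 20/20 acuity limit.
print(f"27 inch 4k at 30 in: {pixels_per_degree(163, 30):.0f} px/deg")
```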

What if light could be manipulated to bend, and appear similar to the binary on/off currently being used in pixels? So like one giant pixel that somehow bends light and applies color instantaneously. This would fix the issue of needing more pixels depending on the distance: as long as your eyesight is 20/20, the screen would be in focus. Do you know if physics has gone far enough to see if something like that COULD be possible? I can't help but feel a system like that would have advantages over digital or CRT displays. It would almost be like manipulating light itself, so the data we receive would be indistinguishable from light we receive from any other source, such as light bouncing off an apple (the fruit, not Apple Computer lol).

That's how screens work. You make one line with varying brightness, and then another line below that and again and again until you have a picture.
Pixels are more efficient, more accurate, and don't have inherent blur in them like a beam does.

Attached: IMG_1193.jpg (2880x2160, 2.67M)
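
A toy sketch of that readout, with a made-up 4x4 image standing in for the picture:

```python
# Toy raster scan: the 2D picture becomes one long brightness signal by
# reading it out a line at a time, top to bottom. The 4x4 image is made up.
image = [
    [0.0, 0.2, 0.2, 0.0],
    [0.2, 1.0, 1.0, 0.2],
    [0.2, 1.0, 1.0, 0.2],
    [0.0, 0.2, 0.2, 0.0],
]

signal = [brightness for row in image for brightness in row]
print(signal)  # the single stream of brightness values the "beam" draws
```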

Easy on the mescaline, bugs.

>How would this screen work?
a GPU which rasterizes vector commands
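
A minimal sketch of what rasterizing one vector command looks like, using the textbook Bresenham line algorithm rather than any particular GPU's method:

```python
# Bresenham line rasterization: turn a vector command ("line from A to B")
# into the discrete pixel grid a modern display actually has.
def raster_line(x0: int, y0: int, x1: int, y1: int):
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        yield (x0, y0)                 # light this pixel
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

print(list(raster_line(0, 0, 6, 3)))
```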

>Would this create photorealistic screens?
No. The real world isn't made of vectors. And our eyes don't work in vectors either. They're both more like pixels. Just far more densely packed.

At best it would help with text rendering.

What if we made a vector overlay that dealt with ray tracing

>what if I put random words together?

Attached: 593A6162-55A4-4436-8BBB-38E8DFE0AFFE.jpg (400x323, 51K)

Like if you have something shiny, you just make the vector overlay screen shiny instead of the pixels in the game, which would reduce the processing power needed.
Not OP, but I see what he's getting at.

This. High-PPI (300+) 120Hz OLED/microLED screens are about as photorealistic as we can possibly achieve with current technology.

The REAL issue with current affordable LCD IPS displays is that they have really high response times, so anything above 30 fps (33.33ms frame time) just gets smeared on the screen. You'll typically see 10-20ms of real-world average GtG response time on most IPS trash out there, which is unacceptable for content displayed above 30 fps. OLED/microLED fixes that by having around 1ms response time.

Attached: oled-13-638.jpg (638x479, 63K)
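
A quick sketch of that arithmetic, using the rough GtG figures quoted above as assumptions rather than measurements:

```python
# Frame time vs. pixel response time: if GtG response is a large fraction of
# the frame time, the previous frame is still fading while the next one draws.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for fps, gtg_ms in [(30, 15.0), (60, 15.0), (60, 1.0)]:  # assumed GtG values
    ft = frame_time_ms(fps)
    print(f"{fps} fps: {ft:.2f} ms/frame, {gtg_ms} ms GtG "
          f"-> transition eats {100 * gtg_ms / ft:.0f}% of the frame")
```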

How would adding complexity reduce required processing power?

Couldn't we find a way to eliminate the blur of light, so it's basically a really sharp laser? Think neon lights instead of LED lights, but removing the blur and going down to a sort of microscopic level, where we can manipulate light to "bend" into the next image. At this point things like FPS, PPI, and distance from the screen would not matter. The bottleneck would not be our hardware, but our senses. I still don't get why that wouldn't be better than continuously raising the FPS and response time of modern screens. The current approach seems like a far weirder way to get there, like using addition instead of just jumping to multiplication or exponents.

^^^^
sorry forgot to tag you on that response as well

You can't bend light the way you can bend electrons; light doesn't have a charge. The only way you can bend light is through things like refraction, which we do with fiber optics.

>I still don't understand why available and feasible technologies that solve these problems are better than my "what if magic" autism

Oscilloscopes deflected the electron beam using electrostatic plates mounted within the sealed glass tube rather than electromagnetic coils on the outside, which let the beam move orders of magnitude faster. It's also theoretically possible to build a CRT with multiple electron guns and deflection plates pointed at the same screen and operating independently.
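
A hobbyist-level sketch of the idea: a vector display is just a stream of X/Y deflection samples over time, here traced along a hypothetical square path (sample counts and coordinates are arbitrary assumptions):

```python
# Generate X/Y deflection samples that trace a square, the way a vector
# display (or a scope in X/Y mode) draws shapes: the beam is steered along
# the path instead of sweeping a raster. Sample rate is an assumption.
def trace_path(points, samples_per_segment=100):
    """Linearly interpolate the beam position along a closed polyline."""
    xs, ys = [], []
    for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
        for i in range(samples_per_segment):
            t = i / samples_per_segment
            xs.append(x0 + t * (x1 - x0))
            ys.append(y0 + t * (y1 - y0))
    return xs, ys

square = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
x, y = trace_path(square)
print(len(x), "deflection samples per refresh")  # feed these to the plates
```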

Here's something called the wobulator: youtube.com/watch?v=ijAaLxiYiWM

It's a CRT with some extra coils added to cause distortions. They're just feeding in sine waves to get wavy distortions, but by very carefully generating the waves it should be possible to make the resulting image move like a 3D object.

Google "vectrex". An old games console with a vector CRT..

Attached: FC9ULEZR1GEWUSLB0G.LARGE.jpg (480x640, 37K)

That has already been done. Remember those old 80s and 90s CGI graphics? Those were usually analog.
youtube.com/watch?v=0wxc3mKqKTk

>It's far more efficient
No
>and accurate
Yes

It would be impossible to have 4k video with purely analog storage.

4k video is a digital concept though.
I guess you could also put it backwards: analog is more accurate in theory but less efficient in practice.

That was an analog computer capable of doing 3D math similar to a modern digital graphics card, but it generated a raster video signal for a raster computer screen. What I was talking about is a special vector display that manipulates the electron beam directly to produce 3D graphics.
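
A toy sketch of the math such a display driver would need, with a made-up wireframe cube and projection constants: rotate the 3D model, then perspective-project each edge to the 2D endpoints the beam would trace:

```python
# Rotate a wireframe cube and perspective-project its edges to 2D beam
# endpoints: the kind of math an analog 3D computer would do continuously.
import math

CUBE = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]
EDGES = [(a, b) for a in range(8) for b in range(a + 1, 8)
         if sum(u != v for u, v in zip(CUBE[a], CUBE[b])) == 1]

def project(point, angle, camera_z=4.0):
    """Rotate about the Y axis, then perspective-divide."""
    x, y, z = point
    xr = x * math.cos(angle) + z * math.sin(angle)
    zr = -x * math.sin(angle) + z * math.cos(angle)
    s = 1.0 / (camera_z - zr)      # perspective scale
    return (xr * s, y * s)

angle = 0.5                        # one animation frame (arbitrary)
beam_segments = [(project(CUBE[a], angle), project(CUBE[b], angle))
                 for a, b in EDGES]
print(len(beam_segments), "line segments for the beam to trace")
```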

4k is 2160 lines (3840x2160). There is no way you're going to get that kind of detail with analog recording. It's not a coincidence that everything from video, to audio, to photos has moved to digital.
Having two symbols means you can maximize the signal-to-noise ratio. It also means you can recreate the signal on the fly, without losing ANY information in the process. It's mathematically about as efficient as is physically practical.
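
A small sketch of that recreation point, with an arbitrary noise model: as long as the noise per hop stays under half the gap between the two symbols, thresholding restores the signal exactly:

```python
# Digital regeneration: a binary signal plus analog noise can be restored
# exactly by thresholding, as long as the noise stays under half the gap
# between the two levels. Analog has no such restore step; noise accumulates.
import random

random.seed(1)
bits = [random.randint(0, 1) for _ in range(16)]

def transmit(levels, noise=0.3):
    """One noisy analog hop (uniform noise, an arbitrary assumption)."""
    return [v + random.uniform(-noise, noise) for v in levels]

def regenerate(levels):
    """Decision circuit: snap each sample back to the nearest symbol."""
    return [1 if v > 0.5 else 0 for v in levels]

signal = [float(b) for b in bits]
for _ in range(10):                # ten hops, regenerated after each one
    signal = regenerate(transmit(signal))
print(bits == signal)              # True: no information lost
```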

>you're going to get that kind of detail with analog recording
Before anyone gets pedantic: I meant analog ELECTRONIC recording. Obviously film holds up even at such high resolutions, but that is chemical recording. Film is not video.

To be fair I was talking about analog in general, not specifically electronic.
I definitely agree that digital is everywhere for a reason, I'm just being contrarian for little reason.

What a heartwarming video, God bless this man and analog stuff