How much video memory does the average bullet impact sprite/texture take, and why have they lately chosen to program them to disappear quickly? Is it perhaps the implementation of tessellation, to give the effect of the hole's depth, that takes more video memory, so the program needs to remove them faster to make space for other stuff?
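For a ballpark: the decal texture itself is small and shared across every hole, so the texture cost is paid once. A rough sketch of the arithmetic (sizes and format are assumptions, not from any particular engine):

```python
# Back-of-the-envelope VRAM cost of a bullet-hole decal texture.
# 256x256 RGBA8 is an assumed size for illustration.

def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Uncompressed RGBA size; a full mip chain adds roughly 1/3 extra."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmaps else base

cost = texture_bytes(256, 256)
print(cost)   # 349525 bytes, about 0.33 MB for the whole atlas entry
```

So even uncompressed it is a fraction of a megabyte, and with block compression (DXT/BC) it would be several times smaller still. The per-hole cost is just the instance data, not the texture.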

Attached: 3D_Primer_For_Game_Developers_fullsize_Decals-in-Use.jpg (1920x1200, 499K)

I just assumed it's to stop people from drawing dicks and spelling out nigger everywhere.

Nigger dicks

The Division did it right.

Why would someone want to limit and quickly erase an object held in memory that contains, at a minimum, a texture, position, and size, and that can be created by a potentially infinite source?
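For what it's worth, the usual scheme is exactly that: because the source is potentially infinite, engines cap the count with a fixed-size pool that recycles the oldest decal. A minimal sketch (the cap of 256 is an assumed value):

```python
from collections import deque

# Hypothetical fixed-size decal pool: the texture is shared, each decal
# instance is just a transform, so the engine caps the count and lets the
# oldest slot drop rather than growing without bound.
MAX_DECALS = 256
decals = deque(maxlen=MAX_DECALS)   # oldest entry is evicted automatically

def spawn_decal(position, normal, size=0.05):
    decals.append({"pos": position, "normal": normal, "size": size})

for i in range(1000):                    # a long firefight
    spawn_decal((i, 0.0, 0.0), (0, 0, 1))

print(len(decals))   # 256 -- only the newest MAX_DECALS survive
```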

Attached: 1523155439355.jpg (720x718, 22K)

It is texture on texture.

The CIA niggers probably leaned on them to do it since it can be used as a form of steganography.

Bullet hole removal has been a thing since I started noticing bullet holes in walls. I figure the tech has advanced enough that you wouldn't notice the removal by the time they had to do it.

The performance impact is negligible, even when using tessellated displacement maps. The reason they are not permanent is that there is a sickness among graphics programmers that causes an aversion to anything nice. It took us years to get proper collision meshes. Foliage is still 2D planes because adding 30 extra polygons to that flower or grass would be waaay overkill. Can't have bullet holes be permanent, that's bad practice, man. Cull them fast, nobody notices anyway. And let's artificially limit the grass draw distance options available to the player, nobody will play this game on better hardware in the future anyway. And GOD FORBID we use more polygons so things look round. I don't care that new GPUs can handle millions of polygons these days. I'm currently making my own graphically pristine game because I've been sick of this bullshit for 15 years. Particle simulations, global illumination, real geometry for everything, proper physics, I spare no expense.

Dude, I work with trillions of polys.
You really don't get how chuggy even 1080 Tis and 5GHz i9s get.

Trillions of polygons? Get outta here. I have some 100-million-poly meshes, and the most I've ever heard of was about 1 billion. In-game, a couple million is a max poly budget for today's hardware. I use a 1080 Ti.

>30 polygons extra to the flower
>pick the "flower tool" to brush over a field
>suddenly extra 2M polygons
>whopsie now we can't even get silky smooth 30 on the PS4

Shit adds up. But yeah, I personally don't care about polycount, it's TECHNOLOGY that I care about. I don't care if it's all procedural like those Nvidia and UE4 tech demos or handmade like Splinter Cell, MGS or FEAR, I just want the guy who does OIL_BARREL_04 to add a sprite waterfall when I shoot the barrel. Details matter, or well, they would matter if the environment weren't just supposed to be scenery for a rollercoaster ride.

I do lidar and structured light captures to mesh.
So while I degrade the point clouds to 1.5mm resolution, it adds up.
Hell, I'm currently a week into fixing 105 million small poly face issues resulting from spray and sag from gravity.
Then it's a month to fix all the non-manifold issues, at some 3 billion.
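The "degrade to 1.5mm resolution" step mentioned above is typically a voxel-grid downsample: snap points to a grid and keep one centroid per cell. A toy sketch of the idea (pure Python for illustration; real tools like CloudCompare do this natively):

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.0015):
    """Keep one centroid per voxel cell of the given edge length (meters)."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)   # which 1.5mm cell p falls in
        cells[key].append(p)
    # centroid of each occupied cell
    return [tuple(sum(axis) / len(cell) for axis in zip(*cell))
            for cell in cells.values()]

pts = [(0.0000, 0.0, 0.0), (0.0005, 0.0, 0.0), (0.0100, 0.0, 0.0)]
print(len(voxel_downsample(pts)))   # 2 -- the first two points share a cell
```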

It's 2018, I don't want to see sharp edges on oil barrels. But yeah details matter. You used to be able to shoot light bulbs and the light would go out. Props used to be more dynamic. Nowadays everything is static. There is no attention to detail, no pride in the craft, no creativity. 3D game graphics have stagnated.

That's reasonable, but there is a difference between a billion and a trillion. I do photogrammetry; combined I have billions of polys too, but no billion-plus single point cloud or mesh. Do you use a custom structured light solution or a commercial one?

That's probably because back then the games didn't look as good either. Nowadays most environmental lighting is baked in order to look better and to free up resources for other things.

Oh no no, this is just a subsection of a 10m by 30m by 20m building being worked on in Design X.
I had to chop the project down because I am a ramlet with 64GB of 2133MHz RAM.
The lidar was a Faro Focus 3D.
Structured light was an Artec Spider.
Probably going to move over to the Faro 3D scanner from the Spider.
The Spider's view is too small for the applications and results in loss of tracking due to undifferentiated surfaces.

Application is a combo of vehicle and building.

Graphics are better than ever, but static maps are easy on the CPU. Blame Microshit and Sony for pairing two literal tablet CPUs from 2011 with what was a GTX 670 equivalent so they could cram the most grafex into their 200W power budget, and then saying they plan to stick with it for a decade. I'm not defending the developers in any way; I mean, fucking Crysis 3 runs at 45fps on a Core 2 Duo and still looks better than the shit that comes out nowadays.

The tessellation stage is avoided like the plague for performance reasons, and in any decent engine reusing a texture costs almost nothing. Computing the projection of decals, on the other hand, can have a visible impact on performance.
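Roughly what that projection involves: building a tangent basis at the hit point and mapping nearby surface vertices into the decal's UV square. A simplified sketch (helper names and the half_size parameter are illustrative, not any engine's API):

```python
# Tiny vector helpers so the sketch is self-contained.
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])
def normalize(v):
    n = dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def decal_uv(vertex, hit, normal, half_size=0.05):
    """Project a surface vertex into a decal's [0,1]x[0,1] UV space."""
    # pick any tangent perpendicular to the surface normal
    up = (0.0, 1.0, 0.0) if abs(normal[1]) < 0.9 else (1.0, 0.0, 0.0)
    tangent = normalize(cross(normal, up))
    bitangent = cross(normal, tangent)
    d = sub(vertex, hit)
    u = 0.5 + dot(d, tangent) / (2 * half_size)
    v = 0.5 + dot(d, bitangent) / (2 * half_size)
    return u, v   # outside [0,1] means the vertex is outside the decal

print(decal_uv((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))  # (0.5, 0.5)
```

The expensive part in practice isn't this math per vertex, it's finding which triangles the decal touches and either clipping new geometry or rendering a projected box per decal.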

Ah I see. Sounds like a cool gig. Can you use the hardware to scan cool stuff like nature etc?

That's no excuse. Use real geometry if tessellation is too expensive.

Easily.
The Spider is just bad since we can't put dots on items.
It hates empty space, which cars have lots of.
And then it needs very consistent, smooth setups.
So a tree works all day if it's oak or pine, but birch and poplar have issues.
Rocks are easy.
If I wasn't working on the trillion points I'd offer a showing.
But in words only: pretty damn photorealistic.
Sucks balls harder than Dennis Rodman discovering bubble tea liquor when it hits a deep well with overhang.
Not sure if it's settings or an issue with the shadows from the strobe making ghost geometry.
God I wish I could get a handheld lidar.
It's not the method so much as size; I need a mobile unit to shove in compartments.

Tessellation is less expensive than real geometry. Anyway, they don't do tessellation on bullet holes because it's a decal on top of another mesh, and trying to push in the geometry without it being hidden by the underlying geometry isn't easy.
On top of that, it's probably a lot faster to just use parallax mapping since it's per-pixel. Bullet holes are small, and therefore it's easier to use a few expensive shaders per pixel than to create a ton of geometry that doesn't scale with screen size.
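The parallax trick in one expression: shift the sampled UV along the view direction in proportion to the sampled height. A sketch of the shader math in plain Python (the height argument stands in for a height-map sample; the scale constant is an assumed material parameter):

```python
def parallax_offset(uv, view_ts, height, scale=0.05):
    """Basic parallax mapping UV shift.

    view_ts: view direction in tangent space, z pointing out of the surface.
    height:  height-map sample at uv, in [0, 1].
    """
    u, v = uv
    vx, vy, vz = view_ts
    # shift the UV toward the viewer; deeper pixels shift more,
    # and grazing angles (small vz) exaggerate the shift
    return (u + vx / vz * height * scale,
            v + vy / vz * height * scale)

print(parallax_offset((0.5, 0.5), (0.0, 0.0, 1.0), height=1.0))  # (0.5, 0.5)
```

Viewed head-on (view = surface normal) nothing shifts; tilt the view and the sampled texel slides, which is what fakes the hole's depth per pixel with zero extra geometry.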

Attached: fear_parallax_mappingbig.jpg (1248x604, 148K)

What about combining lidar and photogrammetry? You can plaster a car with markers and get decent results with PhotoScan. If you coat it as well you get some pretty tight tolerances. Is it your business or are you a work drone for someone else?

>limit
This is the perfect solution
>quickly erase
Disgusting

All I'm hearing are excuses. If they spent more time layering shaders on top of materials instead of layering excuses on top of complaining, we'd have some good graphics by now. Creating bullet holes with actual displaced geometry should be standard practice by now.

Hell you could just spawn real geometry that intersects the wall.

Pseudo work drone.
I pretty much run the scanning, but am constrained by the customer.
Which varies by committee as to allowing dots or not.
The combo was the plan; it just doesn't work with the Spider.
It worked with the Faro Freestyle.

Attached: the-homer-inline2.jpg (532x293, 27K)

I see. Sounds like fun, except for lugging around the gear. Does it pay well? I do photogrammetry freelance, but I dabble in structured light and other people's lidar and drone/GIS data sets. Sorry about my typos btw, I'm on mobile.

65k plus bennies.
Health, vision, and dental.
A bullshit-heavy work environment where pointing to a status bar means don't disrupt coworkers.

I see, the pay ain't bad. Do you have an education in this stuff? Have you considered using your hardware for personal stuff? If you jury-rig a way to get and merge color data, you'd have a serious advantage in capturing data for real-time applications, like materials and assets for games. At least until cheap solid-state drone-mounted lidar becomes a thing thanks to autonomous cars.

I don't get the reference. I'm gonna go make a decal displacement proof of concept now just to see if I can get it working.

>whopsie now we can't even get silky smooth 30 on the PS4

Who cares.

And 2 million is an overstatement. At best a couple hundred thousand. Then just draw the low-poly cardboard cutouts for the grass that is further away.
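The arithmetic both sides are gesturing at, made explicit (all numbers illustrative):

```python
# How many 30-extra-poly flowers fit in a given triangle budget.
polys_per_flower = 30

flowers_for_2m = 2_000_000 // polys_per_flower    # the "2M extra" claim
flowers_for_300k = 300_000 // polys_per_flower    # the "couple hundred k" reply
print(flowers_for_2m, flowers_for_300k)           # 66666 10000
```

So the 2M figure assumes ~66k fully detailed flowers on screen at once; with LODs and distance billboards the real count of full-detail instances would be far lower.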

They probably can't afford to have 1 ambitious graphics programmer breaking the workflow of the 200+ artists working hard on making the experience as "cinematic" (read static and bullshot-friendly) as possible.

>I'm currently making my own graphically pristine game because I've been sick of this bullshit for 15 years. Particle simulations, global illumination, real geometry for everything, proper physics, I spare no expense.
post pics with gpu load, armchair bitch

AAA game dev is soulless, mundane, compartmentalized work. No room for personal creativity. Blizzard and Valve became successful because they pushed at the limits and delivered serious quality. There is not a single studio doing that anymore. Not one.

0% load I have a 1080Ti bitch.

Seriously though, I'm getting 70-ish fps at the moment, so I'm gonna push the fidelity more. At the moment I'm testing out baking fluid simulations and running them in-game. They are looping animated meshes for things like flowing water. Looks good senpai, but takes up several gigabytes of RAM.
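Playback of a baked loop is cheap; it's the storage that hurts. A sketch of the frame selection plus a rough memory estimate (frame rate, vertex count, and bytes per vertex are assumed values):

```python
def loop_frame(time_s, frame_count=120, fps=30.0):
    """Select which baked mesh frame to draw; wraps for a seamless loop."""
    return int(time_s * fps) % frame_count

print(loop_frame(0.0))   # 0
print(loop_frame(4.0))   # 0 -- wraps seamlessly after the 4-second loop
print(loop_frame(4.5))   # 15

# Why it eats RAM: every baked frame stores a full vertex buffer.
verts, bytes_per_vert = 200_000, 24           # position + normal, assumed
gb = 120 * verts * bytes_per_vert / 2**30
print(round(gb, 2))                           # ~0.54 GB for one 4 s loop
```

A few loops like that and "several gigabytes" is easy to hit, which is why engines often store per-frame deltas or compress the vertex animation instead.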

This is correct. They do it in real life too. The last year of my life has been hell. Total hell. I'm completely alone, I'm not being helped. I'm a hard-working person and I knew it would only be a temporary thing in my life. Hell broke out that year. I lived with drug dealers. I had bruises. I wouldn't go to class for a week at a time, because they'd all be sitting between my room and the front door, and there was always trouble. I still can't talk about it much now, because I don't want to remember it. Around the area there are always signs of vandalism and gunshots, but someone changes them. One day I will see a broken window on the house to my left. The next day it will be on the house on my right. Bullet holes in signs change but the message doesn't change. Sometimes certain letters are highlighted with bullet holes. When it's vowels I see more people in hoodies. They harass me and are stalking me. The vandalism is a sign for them and lets them know how I'm most vulnerable.

So TLDR is you live in a rough neighborhood?

jesus

You have a shit pipeline

Worked at a company that had similar issues; we reduced their compute costs by a factor of 100 by switching from point clouds to a mesh-like representation.

Tell me more.

Attached: dGmZJF2lwdhde8lWEd_leTraNEoP46U5TFsEn5EnN58.jpg (614x768, 101K)

Trade secret

Hint: Use what you know about the sensor and move up a dimension to lose time complexity.

Is accuracy/error rates affected?

Attached: 1513453841258.jpg (320x320, 29K)

No noticeable difference in fidelity using standard algorithms. Improved fidelity with ML postprocessing. I do hope you're not letting all that data go to waste by not feeding it through a GAN.

>GAN

As in Generative Adversarial Networks?

Attached: 1434129910376.png (411x387, 11K)

A few megabytes at best
1 bullet hole or 10000 doesn't really make a difference
Each hole just takes an extra 64 bytes at most
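A plausible 64-byte instance record backing that claim (the layout is illustrative; the actual texture is shared, so each hole only stores its own transform and bookkeeping):

```python
import struct

# Hypothetical per-decal record: position, surface normal, size, rotation,
# spawn time, texture atlas index, flags, then padding out to 64 bytes.
DECAL_FMT = "<3f 3f f f f I I 5f"

print(struct.calcsize(DECAL_FMT))   # 64
```

At 64 bytes each, even 10,000 holes is only 640KB, which is why the "VRAM pressure" theory doesn't hold up; the caps exist for draw-call and projection cost, not memory.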