Occlusion culling is really tough in systems where users can add content to the world. Especially if there's translucency. As with windows (not Windows), or layered clothing.
You're in a room without windows. Everything outside the room is culled. Frame rate is very high. Then you open the door and go outside into a large city. Some buildings have big windows showing the interior, so you can't cull the building interior. You're on a long street and can look off into the distance. Now keep the frame rate from dropping while not losing distant objects.
Games with fixed objects can design the world to avoid these situations. Few games have many windows you can look into. Long sightlines are often avoided in level design. If you don't have those options, you have to accept that distant objects will be displayed, and level of detail handling becomes more important than occlusion. Impostors. Lots of impostors.
Occlusion culling itself has a compute cost. I've seen the cost of culling big scenes exceed the cost of drawing the culled content.
This is one of those hard problems metaverses face, one which, despite the amount of money thrown at it, was not solved during the metaverse boom. Meta does not seem to have contributed much to graphics technology.
This is much of why Second Life is slow.
corysama 6 hours ago [-]
Dynamic occlusion culling is pretty common these days now that the GPU can do its own filtering of the draw list. I think it goes like:
Start with a list of objects that were visible last frame. Assume they are visible this frame. Go ahead and draw them.
Then, for the list of things that were Not visible last frame, draw bounding box queries to build a conservative list of objects that might actually not be occluded this frame. This is expected to be a short list.
Take that short list and draw the probably-newly-visible objects.
Have queries attached to all of those draws so you end up with a conservative list of all of the actually-visible objects from this frame.
This obviously has problems with camera cuts. But, it works pretty well in general with no preprocessing. Just the assumption that the scene contents aren’t dramatically changing every frame at 60 FPS.
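For illustration, the steps above can be run in miniature. The sketch below is my own toy model, not real GPU code: a 1D strip of depth samples stands in for the depth buffer, and a software depth test stands in for the occlusion query attached to each draw.

```python
# Toy model of two-pass occlusion culling on a 1D "screen" of depth
# samples. Each object covers a span of columns at a constant depth.
WIDTH = 16

def draw_with_query(obj, depth_buffer):
    """Draw an object and report whether any sample passed the depth
    test -- this stands in for the occlusion query attached to the draw."""
    lo, hi, depth = obj
    passed = False
    for x in range(lo, hi):
        if depth < depth_buffer[x]:
            depth_buffer[x] = depth
            passed = True
    return passed

def cull_frame(objects, visible_last_frame):
    """objects: name -> (col_lo, col_hi, depth). Returns this frame's
    conservative visible set, which seeds the next frame."""
    depth_buffer = [float("inf")] * WIDTH
    visible_now = set()

    # Pass 1: assume last frame's visible set is still visible; draw it.
    for name in visible_last_frame:
        if name in objects and draw_with_query(objects[name], depth_buffer):
            visible_now.add(name)

    # Pass 2: test everything else against the pass-1 depth buffer (the
    # "bounding box query"), then draw whatever might be newly visible.
    for name, (lo, hi, depth) in objects.items():
        if name in visible_last_frame:
            continue
        if any(depth < depth_buffer[x] for x in range(lo, hi)):
            if draw_with_query(objects[name], depth_buffer):
                visible_now.add(name)

    return visible_now
```

Feeding each frame's result back in as `visible_last_frame` is what makes camera cuts the bad case: after a cut, everything lands in the pass-2 "short list" at once.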
motorpixel 3 hours ago [-]
Roblox just gave a talk at this exact intersection of topics (user generated content and high performance culling) last month at GDC.
I was thinking about this problem a few days ago, imagining a semi-online game where players could create a collective city by plotting buildings. The "grid" would be some kind of pre-determined voronoi pattern, in theory making occlusion culling easier.
snailmailman 2 hours ago [-]
I've always wondered to what extent these culling techniques still work with raytracing?
A reflective surface can bring a bunch of otherwise-offscreen things into the scene. It's what makes screen-space reflections look so bad sometimes: they can't reflect what's not on-screen.
enobrev 2 hours ago [-]
I really appreciate this post. It reminds me of a video I watched a couple of years ago that does an excellent job of demonstrating how culling works, with actual code and visuals.
> Quake made PVS famous. It’s still useful in some indoor games where the scene geometry is static and bake time is acceptable.
It was used extensively in outdoor games like Jak and Daxter.
root_axis 1 hours ago [-]
Great post. Looking forward to the followup article about lights and shadows :)
Panzerschrek 2 hours ago [-]
Is portal culling still used today? I thought it was an old technique used only by some very old games like Thief.
LarsDu88 12 hours ago [-]
PVS isn't that expensive to compute. Especially nowadays. I assume this is actually referring to the binary space partitioning techniques used in DOOM and improved in Quake, Half-Life, etc in the late 90s, early 2000s.
The BSP tree was also extremely useful for optimizing netcode for games like Quake 3 Arena and games within that family and time period I believe.
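As a toy illustration of what a PVS bake and lookup involve (my own sketch, not Quake's portal/anti-penumbra method; note that center-to-center sampling like this is not conservative, since it can miss visibility past corners, which is exactly why real bakes are more involved):

```python
# Toy PVS bake on a 2D tile map. '#' tiles block sight. Offline, every
# open cell gets a precomputed set of cells it can see; at runtime,
# visibility is just a dictionary lookup.
GRID = [
    "..#..",
    "..#..",
    ".....",
]

def line_of_sight(grid, a, b, steps=64):
    """Sample the segment between two cell centers; '#' blocks sight."""
    (r0, c0), (r1, c1) = a, b
    for i in range(steps + 1):
        t = i / steps
        r = r0 + 0.5 + (r1 - r0) * t
        c = c0 + 0.5 + (c1 - c0) * t
        if grid[int(r)][int(c)] == "#":
            return False
    return True

def bake_pvs(grid):
    """Offline step: for every open cell, record every open cell it sees."""
    cells = [(r, c) for r in range(len(grid))
             for c in range(len(grid[r])) if grid[r][c] == "."]
    return {a: {b for b in cells if line_of_sight(grid, a, b)} for a in cells}
```

At runtime the renderer draws only objects whose cell is in `pvs[camera_cell]`.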
Panzerschrek 2 hours ago [-]
PVS requires some hierarchical scene representation with no seams between walls. I know no other way to build such a representation than BSP. But BSP works fine only with pretty low-detail map geometry consisting of brushes. No large detail meshes or terrains can be used with it. If a game has a lot of open spaces or semi-open spaces, it's nearly impossible to build a BSP for it.
socalgal2 2 hours ago [-]
PVS does not require a hierarchical representation. You can use any representation you want. In fact, the one in the article itself is not hierarchical.
Panzerschrek 1 hours ago [-]
In practice, many useful representations can be built only in a hierarchical way. Unless you want to force artists/map makers to split their maps into regions manually.
formerly_proven 24 minutes ago [-]
All Source / Source 2 games still use both PVS (bsp/octree) and pre-baked lightmaps. Of course, they’re quite notorious for the staticness of their environments.
yards 10 hours ago [-]
I always wonder about this IRL...I'm at work rn, is my apartment still rendered?
Zecc 1 hours ago [-]
Unless it's being observed externally it is in a state of quantum uncertainty, I think.
Backface culling has been common since the late 1990s when we started using face normals to determine lighting rather than per-vertex lighting. Pretty much every 3D game engine since about 2004 has included and enabled it by default. How is it that you made a game that doesn't use it?
nickandbro 10 hours ago [-]
I didn't use a game engine
Tanoc 9 hours ago [-]
Ahhh. So you used a wrapper or a library? Interesting then. I had assumed that almost every rendering method enables frustum, occlusion, and backface culling by default, if only to cut the number of objects that need to be tracked in memory. One thing I noticed in your game is that it's based on the absolute mouse position, which with a 16:9 window makes it difficult to turn in certain situations, because your horizontal movement space is much larger than the vertical movement space and that adversely affects turning speed. Changing it so that it's based just on horizontal mouse movement, or adding keyboard controls, might be better.
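The suggested control change can be sketched as (hypothetical function, not the game's actual code):

```python
# Drive turning from horizontal mouse *movement* (delta) rather than
# absolute position, so window aspect ratio no longer affects turn speed.

def update_yaw(yaw_degrees, mouse_dx_pixels, sensitivity=0.25):
    """Each pixel of horizontal mouse motion turns the camera a fixed
    amount; wrap the result into [0, 360)."""
    return (yaw_degrees + mouse_dx_pixels * sensitivity) % 360.0
```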
nickandbro 9 hours ago [-]
Thanks for the feedback, I’ll try to get that sorted out.
01HNNWZ0MV43FF 6 hours ago [-]
For the curious readers, backface culling (at least in the way fixed-function OpenGL does it, and probably newer APIs still do) is not based on face normals, it's based on winding order of triangles, so it works even if normals are not used.
Also face normals (flat shading) are generally considered older tech than per-vertex lighting (Gouraud shading). Newer stuff since 2008-ish is generally per-pixel using normal maps for detail.
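A sketch of the winding-order test (my illustration of the idea, not OpenGL's internal code): after projection, the sign of the triangle's screen-space area gives the winding. Note that which sign means counter-clockwise flips if your window's y axis points down.

```python
# Winding-order backface test in the spirit of fixed-function GL,
# where the default front face is counter-clockwise (CCW).

def signed_area_2x(a, b, c):
    """Twice the signed area of screen-space triangle abc
    (positive for CCW in y-up coordinates)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def is_backfacing(a, b, c, front_is_ccw=True):
    area = signed_area_2x(a, b, c)
    # Zero-area (degenerate) triangles are culled either way.
    return area <= 0 if front_is_ccw else area >= 0
```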
nickandbro 4 hours ago [-]
Thanks for clarifying, so I guess I already have it on then.
igraubezruk 2 days ago [-]
Very good read and visualizations, thank you for writing it
mempko 5 hours ago [-]
Back in the 90s I made a 3D engine (software renderer) and used frustum culling. But computing the frustum intersection every time was too slow. So one technique I used was to add a count to each polygon. If the polygon was outside the view frustum, I set its count to N frames. Each frame, if the count for a polygon was 0 it would be checked against the frustum; otherwise the count was decremented and the polygon skipped entirely.
This worked very well but of course if the camera turned quickly, you would see pop-in. Not a modern technique, but an oldschool one.
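A reconstruction of the counter trick as I read it (hypothetical names, not the original engine's code):

```python
# A polygon that fails the frustum test is skipped outright for the
# next N frames, trading correctness (pop-in on fast turns) for far
# fewer intersection tests.
from dataclasses import dataclass

@dataclass
class Polygon:
    verts: tuple = ()
    skip_count: int = 0   # frames left to skip without re-testing

SKIP_FRAMES = 10  # N: how long to trust a stale "outside" result

def render_frame(polygons, in_frustum, render):
    """in_frustum: the (expensive) test; render: draw callback."""
    for poly in polygons:
        if poly.skip_count > 0:
            poly.skip_count -= 1           # trust the old result
            continue
        if in_frustum(poly):
            render(poly)
        else:
            poly.skip_count = SKIP_FRAMES  # re-test again in N frames
```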
yopstoday 2 days ago [-]
Dooope!
https://schedule.gdconf.com/session/optimizing-a-large-time-...
But see Roblox's own summary at: [1]
[1] https://devforum.roblox.com/t/occlusion-culling-now-live-in-...
https://www.youtube.com/watch?v=CHYxjpYep_M
https://twilightzone.fandom.com/wiki/A_Matter_of_Minutes
[1] https://en.wikipedia.org/wiki/If_a_tree_falls_in_a_forest_an...
https://slitherworld.com