17 Aug 2019 - Simon Coenen - Reading time: 12 mins
I’ve recently been implementing both tiled and clustered light culling with forward rendering, and one thing I was never quite happy with is the way spot lights were tested against the frustum voxels/AABBs. I wanted to write up my entire culling implementation, but I got sidetracked looking into ways to improve cone culling specifically, and I think spot light testing is an interesting case on its own.
To give a very brief summary of what light culling actually is:
Light culling is an optimization that reduces the number of lights considered when shading individual pixels by placing lights in either a 2D (tiled) or 3D (clustered) grid based on their position, dimensions and possibly other parameters. Before the shading pass, a compute shader splits the view frustum into buckets, loops over all the lights and places each light in the buckets it intersects. Later, during shading, the appropriate light bucket can be computed from the pixel position, and only the lights in that bucket need to be considered for that pixel. The same technique can also be leveraged to accelerate other things like decals, environment probes, … This greatly improves shading of scenes with many dynamic lights, and it’s a very common technique used in many games and engines.
Finding a good intersection test for spot lights is challenging. There are several different approaches, and depending on how far you think it through, there are significant optimizations to be made over the naive methods.
I’ve gained lots of insight from other articles, which really helped me learn about all the methods. One of them is the amazing article by Bartłomiej Wroński, which I really recommend checking out if you want more in-depth information. The others are mentioned below in case you want to learn more about the subject. Read More...
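To make the bucketing idea above concrete, here is a minimal CPU-side sketch of tiled light binning, assuming lights have already been reduced to screen-space bounding rectangles. All names (`Light`, `TileGrid`, `TILE_SIZE`, `BinLights`) are hypothetical, not the post’s actual implementation, and a real version would run per-tile in a compute shader with proper frustum/AABB tests:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

constexpr int TILE_SIZE = 16; // pixels per tile side (illustrative choice)

// A light reduced to its screen-space bounding rectangle.
struct Light { float minX, minY, maxX, maxY; };

struct TileGrid {
    int tilesX, tilesY;
    std::vector<std::vector<uint32_t>> bins; // light indices per tile
    TileGrid(int width, int height)
        : tilesX((width + TILE_SIZE - 1) / TILE_SIZE),
          tilesY((height + TILE_SIZE - 1) / TILE_SIZE),
          bins(tilesX * tilesY) {}
};

// Append each light's index to every tile bucket its rectangle overlaps.
void BinLights(TileGrid& grid, const std::vector<Light>& lights)
{
    for (uint32_t i = 0; i < lights.size(); ++i) {
        const Light& l = lights[i];
        int x0 = std::max(0, (int)(l.minX / TILE_SIZE));
        int y0 = std::max(0, (int)(l.minY / TILE_SIZE));
        int x1 = std::min(grid.tilesX - 1, (int)(l.maxX / TILE_SIZE));
        int y1 = std::min(grid.tilesY - 1, (int)(l.maxY / TILE_SIZE));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                grid.bins[y * grid.tilesX + x].push_back(i);
    }
}
```

During shading, a pixel maps back to its bucket with the same `pixel / TILE_SIZE` arithmetic, so only that bucket’s lights are evaluated.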
10 Jul 2019 - Simon Coenen - Reading time: 16 mins
Packaging assets in large binary blobs is quite common in games. In 1993, Doom introduced so-called “WAD” files (short for “Where’s All the Data?”), and some form of the idea lives on in pretty much every game today. It gives users (and developers) more flexibility to create patches and mods, and it provides opportunities for security measures, but most importantly it improves performance significantly: reading from a few large binary files is much faster than reading many small files.
Today, many game engines have adopted the idea of having large files of binary asset data. Unreal Engine uses .pak files, Unity uses .assets files, Anvil uses .forge files, .raf in League of Legends, …
As a small project, I decided to look into creating a similar format. On top of what I described above, the files in a pak file are usually also compressed. This can significantly reduce disk size and can even improve performance when disk access is slow and the cost of decompression is low. Read More...
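The core of such a format is a table of contents that maps a virtual path to an offset and size inside the blob. Here is a minimal in-memory sketch; the names (`PakEntry`, `Pak`) and the FNV-1a path hash are illustrative assumptions, not the post’s actual format, and a real pak would also store a compression codec per entry (e.g. zlib or LZ4) and serialize the table to disk:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// FNV-1a: a cheap, deterministic hash used as the file lookup key.
uint64_t HashPath(const std::string& path)
{
    uint64_t h = 1469598103934665603ull;
    for (unsigned char c : path) { h ^= c; h *= 1099511628211ull; }
    return h;
}

struct PakEntry {
    uint64_t pathHash;         // key: hashed virtual path
    uint64_t offset;           // byte offset into the blob
    uint64_t compressedSize;   // size on disk
    uint64_t uncompressedSize; // size after decompression
};

struct Pak {
    std::vector<PakEntry> entries;
    std::vector<uint8_t> blob;

    // Append a file (stored uncompressed here) and record its table entry.
    void Add(const std::string& path, const std::vector<uint8_t>& data)
    {
        entries.push_back({HashPath(path), blob.size(), data.size(), data.size()});
        blob.insert(blob.end(), data.begin(), data.end());
    }

    const PakEntry* Find(const std::string& path) const
    {
        uint64_t h = HashPath(path);
        for (const PakEntry& e : entries)
            if (e.pathHash == h) return &e;
        return nullptr;
    }
};
```

Lookups only touch the small table, so the blob itself can stay on disk and be read (and decompressed) on demand.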
21 Jun 2019 - Simon Coenen - Reading time: 11 mins
When working on my game engine project, I always get distracted by new interesting things I want to look into, and one of them was reflection. Almost all commercial game engines have some kind of reflection that makes GUI editors and visual scripting possible. Unlike those engines, I didn’t go for full-blown reflection because I didn’t want to bloat my code and maintain it. I was mostly interested in very simple type reflection, so before you start frowning at the code: this is not meant to be a complete reflection system at all! It’s quite basic, but I found it to be an interesting use of compile-time expressions.
All the code can be found on GitHub. The relevant files to look at are:
I mainly started working on this out of curiosity, to see how far I could take it without having to modify loads of files everywhere. I could definitely take it much further, but in its current state I can retrieve type information and create instances of classes using string hashes and object factories. Getting type information happens at compile time without any runtime cost.
I currently use it for getting game object components, object instantiation, shader variable addressing and general dynamic casting. Read More...
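The combination of string hashes and object factories described above can be sketched as follows. This is not the repo’s actual code; all names (`TypeHash`, `Component`, `Factory`) are hypothetical, and only the hash is compile-time here:

```cpp
#include <cstdint>
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// constexpr FNV-1a, so hashing a known type name costs nothing at runtime.
constexpr uint64_t TypeHash(const char* s, uint64_t h = 1469598103934665603ull)
{
    return *s ? TypeHash(s + 1, (h ^ (uint64_t)(unsigned char)*s) * 1099511628211ull) : h;
}

// Distinct names hash to distinct ids, checked at compile time.
static_assert(TypeHash("Transform") != TypeHash("Camera"), "hash collision");

struct Component { virtual ~Component() = default; };

// Registry mapping a hashed type name to a factory function.
struct Factory {
    std::unordered_map<uint64_t, std::function<std::unique_ptr<Component>()>> creators;

    template <typename T>
    void Register(const char* name)
    {
        creators[TypeHash(name)] = [] { return std::make_unique<T>(); };
    }

    std::unique_ptr<Component> Create(const std::string& name) const
    {
        auto it = creators.find(TypeHash(name.c_str()));
        return it != creators.end() ? it->second() : nullptr;
    }
};

struct Transform : Component { float x = 0, y = 0, z = 0; };
```

Registering `Transform` once then calling `Create("Transform")` instantiates it from a string, which is the kind of lookup that makes component retrieval and object instantiation convenient.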
13 Jun 2019 - Simon Coenen - Reading time: 5 mins
While working on the C++ delegates (see previous post), I found out about a thing called “natvis”, which I had never heard of before even though it’s always been right under my nose when debugging Unreal Engine projects. It’s a really awesome and powerful debugging tool that lets you define how classes are visualized in Visual Studio’s “Watch” windows and hover windows.
So instead of digging through a load of expanders to find what you’re looking for in an object, you can fully customize the view to show only the relevant information of that object. A natvis file is an XML-based file that you simply include in your Visual Studio project and that Just Works™.
Many of you might already know about this but there’s a small detail that helps you with debugging natvis that you might not know about!
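To give an idea of what such a file looks like, here is a small illustrative natvis fragment for a hypothetical dynamic array type (the `MyArray` class and its `m_Size`/`m_pData` members are made up for the example):

```xml
<?xml version="1.0" encoding="utf-8"?>
<AutoVisualizer xmlns="http://schemas.microsoft.com/vstudio/debugger/natvis/2010">
  <!-- Matches any instantiation of the (hypothetical) template MyArray<T> -->
  <Type Name="MyArray&lt;*&gt;">
    <!-- What the Watch window shows on the collapsed row -->
    <DisplayString>{{ size={m_Size} }}</DisplayString>
    <Expand>
      <Item Name="Size">m_Size</Item>
      <!-- Expands the elements as an indexed list -->
      <ArrayItems>
        <Size>m_Size</Size>
        <ValuePointer>m_pData</ValuePointer>
      </ArrayItems>
    </Expand>
  </Type>
</AutoVisualizer>
```

With that in place, hovering a `MyArray` instance shows `{ size=3 }` and its elements directly, instead of the raw pointer and size members.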
07 Jun 2019 - Simon Coenen - Reading time: 12 mins
I’ve always really liked working with delegates in Unreal Engine 4. They’re easy to use, work in various different ways, and after using them for a while I wanted to try to implement them myself.
As you might know, it’s not straightforward at all to store and execute a method in a generic way in C++. A function pointer might be a solution, but it’s not flexible: it’s very type-restricted and can’t carry any extra data the way a lambda can. The standard library solves this with
std::function<>, which is sort of a more flexible and fancy function pointer. However, I’ve found that both function pointers and std::function are such a headache to work with!
Take a look at an extremely simple example of executing a member function with an
int(float) signature. Straightforward, right? Well, not really…
Foo foo;

// Function pointer. Arrrghh
int(Foo::* pFunctionA)(float) = &Foo::Bar;
(foo.*pFunctionA)(42);

// std::function. Object itself passed as variable (makes kind of sense but extremely awkward!)
std::function<int(Foo&, float)> pFunctionB = &Foo::Bar;
pFunctionB(foo, 42);

// Delegate. Elegant and straight forward
auto pFunctionC = Delegate<int, float>::CreateRaw(&foo, &Foo::Bar);
pFunctionC.Execute(42);
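The `Delegate<int, float>::CreateRaw(...)` interface above could be sketched like this. To keep the example short it wraps `std::function` internally, which is exactly what a real delegate implementation (including the one this post describes) would avoid; the `RetVal`/`Args` naming is illustrative:

```cpp
#include <functional>
#include <utility>

template <typename RetVal, typename... Args>
class Delegate {
public:
    // Bind a member function to a raw object pointer.
    template <typename T>
    static Delegate CreateRaw(T* pObject, RetVal (T::*pFunction)(Args...))
    {
        Delegate d;
        d.m_Function = [pObject, pFunction](Args... args) {
            return (pObject->*pFunction)(std::forward<Args>(args)...);
        };
        return d;
    }

    RetVal Execute(Args... args) const
    {
        return m_Function(std::forward<Args>(args)...);
    }

private:
    std::function<RetVal(Args...)> m_Function;
};
```

The win is at the call site: the object is captured at creation time, so `Execute(42)` takes only the function’s own arguments instead of threading the object through every call.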
06 Jun 2019 - Simon Coenen - Reading time: 1 min
I really like programming blogs and I love reading about how people learn from personal projects and share their thinking process, ideas and potentially implementation details. Just to name a few of many blogs in my bookmarks bar that are absolutely worth checking out:
There are loads of interesting blogs I check out once in a while, but I always thought making one myself wouldn’t be very interesting: many of the things I could write about have already been touched upon many times in other places, and I didn’t really feel I could commit to maintaining it.
However, I’ve been working on a personal game engine and loads of other little projects for a while now, and I could never think of a good way to present them to others because those things will never be “finished”. That’s why I thought a blog could be a great way to pick a few interesting things I’ve done and write about them. Besides, implementing something is one thing, but writing about it forces you to comprehend the subject much better, as the gaps in my knowledge will force me to research more.
I’m hoping this will be a good little exercise moving forward. If there’s anything you don’t agree with or would like to comment on, or if you’d just like to get in touch, feel free to reach out on Twitter, by email or in the comments below. Read More...