I struggled to get shadows working; there was a real moment where I wanted to cut my losses and give up. It was particularly frustrating because shadow mapping itself isn’t a difficult concept to comprehend. You render a depth map from your shadow-casting light’s perspective, capturing what the light sees–and, equally importantly, what it doesn’t. Then, while rendering pixels (fragments, technically) during the main render pass, you check whether each pixel falls within the light’s bounds (based on clipping, range, falloff, etc.). If it does, you transform that fragment’s world position into the light’s space and sample the shadow map. You compare the fragment’s depth (from the light’s perspective) against the stored depth in the shadow map–if the stored depth is less, something closer to the light is blocking it, so that pixel is in shadow. ...
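The depth comparison described above boils down to a one-line test once both depths are in the light’s space. A minimal sketch, with hypothetical names (`inShadow`, `bias`) that aren’t from the post itself; the bias term is a common addition to avoid self-shadowing artefacts (“shadow acne”):

```typescript
// lightDepth: the fragment's depth from the light's perspective, after
//   transforming its world position into light space.
// storedDepth: the depth sampled from the shadow map at that position.
// bias: small offset so a surface doesn't shadow itself due to precision.
function inShadow(lightDepth: number, storedDepth: number, bias = 0.005): boolean {
  // If something nearer to the light wrote a smaller depth into the map,
  // this fragment is occluded.
  return storedDepth < lightDepth - bias;
}
```

In a real renderer this comparison happens in the fragment shader (WGSL even has comparison samplers for exactly this), but the logic is the same.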
Line Rendering
OK, so drawing lines is hard. Well, maybe not hard per se, but certainly tedious. Looking into it (starting with Matt DesLauriers’ infamous ‘drawing lines is hard’ article linked above), I had the general gist. To draw the lines I want, the way I want, I need to segment them and render triangles. Like this, kind of… No real surprise there, honestly. Your GPU is a super-powered triangle-crunching machine. While line primitives (line-list, line-strip) do exist, they’re not very useful–as far as I can tell. They’re typically aliased, hardware-dependent and 1px wide. ...
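The core of the triangle approach is extruding each segment sideways along its normal to give it width. A minimal 2D sketch under assumed names (`segmentToQuad` and the `Vec2` shape aren’t from the post):

```typescript
type Vec2 = { x: number; y: number };

// Expand a 2D line segment into a quad (two triangles, as a triangle-list)
// by offsetting both endpoints along the segment's unit normal.
function segmentToQuad(a: Vec2, b: Vec2, width: number): Vec2[] {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  // Perpendicular (normal) direction, scaled to half the line width.
  const nx = (-dy / len) * (width / 2);
  const ny = (dx / len) * (width / 2);
  // Six vertices, two triangles: (a+n, a-n, b+n) and (b+n, a-n, b-n).
  return [
    { x: a.x + nx, y: a.y + ny }, { x: a.x - nx, y: a.y - ny }, { x: b.x + nx, y: b.y + ny },
    { x: b.x + nx, y: b.y + ny }, { x: a.x - nx, y: a.y - ny }, { x: b.x - nx, y: b.y - ny },
  ];
}
```

This is where the tedium the post mentions creeps in: joins between segments (miters, bevels) and end caps all need extra geometry beyond this naive per-segment expansion.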
Lights, Camera, Action!
Before we move on: in my last post I took a detour from the lighting/texturing pipeline to fix some issues in my Transform implementation. The debug scene was helpful for finding and fixing issues with transformations, and it inadvertently highlighted a different issue.

Multi-Sample Anti-Aliasing

Without a technique known as ‘anti-aliasing’, edges can look jagged/stair-stepped because each pixel is either fully inside or fully outside the triangle you’re rendering, with no in-between. ...
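The intuition behind MSAA can be shown without any GPU at all: each pixel gets several sample points, and the final colour is blended in proportion to how many of those samples the triangle covers. A toy sketch of that resolve step (names like `resolvePixel` are illustrative, not the post’s API; real MSAA resolves happen in hardware):

```typescript
// samplesCovered: coverage result for each sub-pixel sample point.
// Returns the triangle colour blended against the background colour
// by the fraction of samples the triangle covers.
function resolvePixel(
  samplesCovered: boolean[],
  triColor: number,
  bgColor: number,
): number {
  const covered = samplesCovered.filter(Boolean).length;
  const coverage = covered / samplesCovered.length;
  return triColor * coverage + bgColor * (1 - coverage);
}
```

With 4x MSAA, an edge pixel where the triangle covers two of four samples comes out halfway between the two colours, which is exactly the softening that hides the staircase.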
A Slight Detour
Far from the first and unlikely to be the last, I’ve taken a slight development detour. Firstly, I have migrated this dev blog from WordPress, hosted by my web host, to this statically generated one hosted on GitHub. I won’t bang on about it, but I was finding WordPress quite cumbersome and exceptionally slow—not just to load on the client side, but to do anything with it. It made the blogging process quite frustrating for me. ...
So long, Cube
It’s time to look into getting different geometry into our scene. In the Vulkan version of StaticRectangle, I was using OBJ files for meshes and a third-party library called tinyobjloader to load them. This was fine for that application because the size of the files wasn’t much of a concern. I mean, it’s always a concern, but it wasn’t as much of one in the early stages. It’s something that can be optimised later.1 ...
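Part of OBJ’s appeal is how little it takes to read the basics. A minimal sketch covering only `v` (vertex position) and triangular `f` (face) records; `parseObj` is a hypothetical name, and this handles only a tiny slice of what tinyobjloader does (no normals, UVs, materials, or quads):

```typescript
// Parse a minimal subset of Wavefront OBJ: vertex positions and
// triangular faces. Faces may use `1`, `1/1`, or `1/1/1` forms; we only
// keep the leading position index.
function parseObj(src: string): { positions: number[][]; indices: number[] } {
  const positions: number[][] = [];
  const indices: number[] = [];
  for (const line of src.split("\n")) {
    const parts = line.trim().split(/\s+/);
    if (parts[0] === "v") {
      positions.push(parts.slice(1, 4).map(Number));
    } else if (parts[0] === "f") {
      // OBJ indices are 1-based; parseInt stops at the first '/'.
      for (const p of parts.slice(1, 4)) indices.push(parseInt(p, 10) - 1);
    }
  }
  return { positions, indices };
}
```

The format’s plain-text verbosity is also exactly why file size becomes the concern the post alludes to.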
A Conscious Decoupling
At the end of my last blog post, StaticRectangle was rendering ‘the illustrious spinning cube’. We had a bunch of new components, like the camera controller and input / resource manager, as well as a top-level Application that took ownership of some processes away from the Renderer. The next thing to tackle was splitting up the Renderer further; it was still doing more than just rendering. In fact, it was the render function of the renderer that was animating the cube! That’s certainly overstepping its responsibilities as a renderer. ...
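The shape of that decoupling can be sketched in a few lines: animation moves out of the renderer into an update step owned by the application, and the renderer only reads state. The names here (`Spinner`, `update`, `render`) are illustrative, not the post’s actual API:

```typescript
interface Spinner {
  angle: number; // radians
  speed: number; // radians per second
}

// The Application (or a scene/update system) advances simulation state...
function update(cube: Spinner, dt: number): void {
  cube.angle = (cube.angle + cube.speed * dt) % (2 * Math.PI);
}

// ...and the Renderer only consumes it, mutating nothing.
function render(cube: Spinner): string {
  return `drawing cube at ${cube.angle.toFixed(2)} rad`;
}
```

Keeping `render` side-effect-free with respect to scene state is what lets it be called at a different rate (or not at all) without the animation drifting.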
The Illustrious Spinning Cube
By the end of my last post, I had a very boring hot-pink “StaticRectangle”. I gave my wife a demo; she was about as enthused about a hot-pink rectangle as you can imagine. I showed her all the code it took to produce that rectangle. Her honest response was “why?” I explained how it took a lot of legwork to do the things you can’t see. Of course she understood that, but it is remarkable how much work can go into something so unassuming. ...
StaticRectangle Lives
I spent some time messing around with WebGL trying to render some better-looking 3d mazes, which I have no doubt you can do. There are plenty of examples of WebGL being used to make captivating 3d content, and that challenge is enticing. However, it wasn’t long until I came across WebGPU, the Next Generation™ of 3d rendering technology for the web. Digging in a bit, it felt very familiar. The API is very ‘Vulkanesque’–inspired by it, as well as other modern graphics APIs (Metal, Direct3D 12, etc.). ...
It had to be done
Not a faithful recreation. I believe the Microsoft one was following the left-hand rule solver (and the right-hand rule when you flipped upside down), which means the mazes must have been ‘simply connected’, or ‘perfect’, mazes. There isn’t the topsy-turvy geodesic shape, psychedelic fractals, OpenGL logos, Smiley Face end-point or… rat, either. This one uses the Recursive Backtracker to carve passages and then Dijkstra’s to find the longest connected path through the maze, which it then follows. ...
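Finding that longest path is pleasantly cheap in a perfect maze: since every passage has the same cost, Dijkstra’s reduces to a breadth-first flood fill, and because a perfect maze is a tree, two passes suffice; flood from anywhere to find the farthest cell, then flood from that cell to find the other end of the longest path. A sketch under assumed names (`distances`, `farthest`, cells as numeric IDs):

```typescript
// Breadth-first distances from `start` over an adjacency map of
// cell -> linked neighbour cells (uniform edge cost).
function distances(adj: Map<number, number[]>, start: number): Map<number, number> {
  const dist = new Map<number, number>([[start, 0]]);
  const queue: number[] = [start];
  for (let i = 0; i < queue.length; i++) {
    const cell = queue[i];
    for (const next of adj.get(cell) ?? []) {
      if (!dist.has(next)) {
        dist.set(next, (dist.get(cell) ?? 0) + 1);
        queue.push(next);
      }
    }
  }
  return dist;
}

// The cell with the greatest distance, i.e. one end of a longest path.
function farthest(dist: Map<number, number>): number {
  let best = -1;
  let bestD = -1;
  for (const [cell, d] of dist) {
    if (d > bestD) {
      best = cell;
      bestD = d;
    }
  }
  return best;
}
```

Run `distances` once, take the `farthest` cell, then run `distances` again from there: the second pass’s farthest cell and its distance give you the longest path’s endpoints and length.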
Mazes now in 3D!
Probably a surprise to no one, but while going through the maze generation exercise I thought it would be nice to display a maze in 3d. There is (though I haven’t gotten to it yet) an entire section of the Mazes for Programmers book dedicated to generating mazes that wrap around 3d shapes, like mazes on cubes, spheres, even a Möbius strip! Displaying a maze in 3d isn’t in the scope of the book, but it’s not hard to take the concepts and translate them to 3d. ...
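One straightforward way to lift a 2D grid maze into 3D is to turn each solid wall of a cell into a box centred on that cell’s edge. A sketch with an assumed wall encoding and names (`Walls`, `wallBoxCenters`); the post’s actual approach may differ:

```typescript
type Walls = { north: boolean; south: boolean; east: boolean; west: boolean };

// For a cell at grid (col, row) with cells `size` units wide, return the
// 3D centre [x, y, z] of a wall box for each solid wall. Walls sit on the
// cell's edges, raised to half the cell size so they have height.
function wallBoxCenters(
  col: number,
  row: number,
  size: number,
  walls: Walls,
): [number, number, number][] {
  const cx = col * size;
  const cz = row * size;
  const h = size / 2;
  const out: [number, number, number][] = [];
  if (walls.north) out.push([cx, h, cz - size / 2]);
  if (walls.south) out.push([cx, h, cz + size / 2]);
  if (walls.east) out.push([cx + size / 2, h, cz]);
  if (walls.west) out.push([cx - size / 2, h, cz]);
  return out;
}
```

From there it’s one cube mesh instanced per wall (scaled thin along the wall’s axis), which keeps the 2D maze data structure entirely intact.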