
At the end of my last blog post StaticRectangle was rendering ‘the illustrious spinning cube’.
We had a bunch of new components, like the camera controller, the input and resource managers, as well as a top-level Application that took ownership of some processes away from the Renderer.
The next thing to tackle was splitting up the Renderer further; it was still doing more than just rendering. In fact, it was the render function of the renderer that was animating the cube! Certainly overstepping its responsibilities as a renderer.
If I wanted to do something as scandalous as add another cube, I’d have to do a lot of copy/paste work, manually creating and allocating new GPU resources and branching logic in places that didn’t make sense. That isn’t what we’d refer to as a ‘scalable’ system.
It’s a logical place to start things, of course, because it’s simple. Almost all examples you’ll come across are like this for that very reason.
Adding new objects to render means we need to take a step back and ask ourselves, what exactly are we rendering?
The Scene Graph
What we are rendering, for all intents and purposes, is a scene. In the previous example the scene was a single spinning cube and it was more or less hardcoded into the renderer, but it was a scene nonetheless.
Now what if we wanted to take that same scene and render it from multiple angles (like the typical front, side, top and perspective views of a 3D application), or run a physics simulation or gameplay logic?
We need to decouple the scene from the renderer completely. Keep all the render-specific resources (buffers, layouts, pipelines etc.) we need to render the scene with the renderer, and create a separate system that tracks the objects and their types, properties and relationships to one another. We can then pass that same scene to the various components to update it before we render it.
while app.is_running:
    animation.update(scene, time)        <- update animated scene elements
    physics.simulate(scene, time.delta)  <- perform physics simulation
    renderer.render(scene)               <- render the scene
So why a graph?
We use a graph (technically speaking a ‘directed acyclic graph’, or DAG) to describe a scene’s hierarchy. It allows for a parent/child relationship between objects where transformations of a parent are inherited by the child, but the child has its own local transformations.
Changes to a child transform cannot have any impact on the parent; that would create a cycle.
StaticRectangle now has a simple SceneGraph, for keeping track of transforms and shapes and the relationships between them.
Here is the verbatim code that the Application is running to create the SceneGraph in this post’s example.1
function createDefaultScene(): SceneGraph {
  const scene = new SceneGraph();
  const root = scene.createTransform();

  const geometry = ExampleCube();
  const shape = scene.createShape("cube", geometry);
  shape.assignMaterial(defaultMaterial);
  shape.setParent(root);

  let previousTransform = root;
  for (let i = 1; i < 10; i++) {
    const child = scene.createTransform();
    child.setTranslate([0, 1.0, 0]);
    child.setParent(previousTransform);

    const childShape = scene.createShape(`cube${i}`, geometry);
    childShape.assignMaterial(defaultMaterial);
    childShape.setParent(child);

    previousTransform = child;
  }

  root.setScale([10, 10, 10]);
  root.setDirty();
  return scene;
}
At the moment, the SceneGraph is barebones. It only has the concept of transforms, shapes and material assignment.
The transform stack is also decoupled (sensing a theme here?) from the shapes despite being part of the same SceneGraph. Every new transform and shape is assigned a unique ID (UUID) and when ‘parenting’ a shape to a transform, you’re really just assigning the transform’s ID to be looked up at render time.
It remains to be seen if this design works for more complex scenes. I did it this way because, while ‘physical’ hierarchical relationships are necessary for transforms when calculating matrices (more on that in a moment), this information isn’t necessary for shapes. It also makes it potentially simpler to optimise where the same shape (geometry/material combination) is shared across multiple transforms, useful not just for reducing the memory of the SceneGraph but also when rendering the scene (i.e. instancing).
It’s also common for transforms to have no shape (pivots, groups etc.) so when gathering ‘renderables’ we can iterate the shapes in the scene and then pair them with the assigned transform via ID lookup, instead of having to flatten the transform stack to find the shapes we want to render.
It makes logical sense—at least to me—but I’m unclear at this stage if it is a good idea. I’m not sure what kind of performance overhead these lookups have vs just iterating the stack. Something I can profile in the future.
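To make the ID-lookup idea concrete, here is a minimal sketch. The Map-based storage, record fields and method names are illustrative assumptions, not the actual StaticRectangle internals:

```typescript
// Illustrative sketch: shapes and transforms live in flat maps keyed by ID,
// and a shape only stores the ID of the transform it is parented to.
type Id = string;

interface TransformRecord { id: Id; worldMatrix: Float32Array; }
interface ShapeRecord { id: Id; transformId: Id; geometryName: string; }

class MiniSceneGraph {
  private transforms = new Map<Id, TransformRecord>();
  private shapes = new Map<Id, ShapeRecord>();

  addTransform(t: TransformRecord) { this.transforms.set(t.id, t); }
  addShape(s: ShapeRecord) { this.shapes.set(s.id, s); }

  // Gather renderables by iterating the shapes and pairing each with its
  // transform via ID lookup -- no flattening of the transform stack needed,
  // and shape-less transforms (pivots, groups) are skipped for free.
  gatherRenderables(): Array<{ shape: ShapeRecord; transform: TransformRecord }> {
    const out: Array<{ shape: ShapeRecord; transform: TransformRecord }> = [];
    for (const shape of this.shapes.values()) {
      const transform = this.transforms.get(shape.transformId);
      if (transform) out.push({ shape, transform });
    }
    return out;
  }
}
```

The Map lookups here are O(1) on average, but whether that beats a straight iteration over a flattened stack in practice is exactly the profiling question raised above.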
Transforms
The new Transform object keeps track of its local position and whether it has a parent or children. As mentioned above, this is important for calculating where an object is in the world.
I will probably do a full blog post on transformations because it’s quite interesting stuff. But the high-level gist is that when we want to know where an object is, we multiply its local matrix by its parent’s world-space matrix. That’s it. Magic.
The Transform caches its own world matrix and uses ‘dirty’ flag propagation to avoid recalculating it unnecessarily. A transform is marked ‘dirty’ when its local transformations change, so a call to getWorldMatrix only recalculates when needed. Doing things this way means we don’t need to walk the hierarchy multiplying all ancestors’ local matrices every time we want a world-space position; that wouldn’t be very efficient.
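The caching pattern can be sketched like this. To keep it short, a single scale factor stands in for a real 4x4 matrix, and the class and method names are illustrative rather than the actual Transform API:

```typescript
// Dirty-flag world "matrix" caching sketch. A real Transform multiplies
// 4x4 matrices; a scalar scale stands in here so the pattern stays visible.
class MiniTransform {
  private local = 1;
  private cachedWorld = 1;
  private dirty = true;
  private parent: MiniTransform | null = null;
  private children: MiniTransform[] = [];

  setParent(p: MiniTransform) {
    this.parent = p;
    p.children.push(this);
    this.markDirty();
  }

  setLocalScale(s: number) {
    this.local = s;
    this.markDirty();
  }

  // Changing a transform dirties it and every descendant, never a parent --
  // matching the "no impact upward" rule of the DAG.
  private markDirty() {
    this.dirty = true;
    for (const child of this.children) child.markDirty();
  }

  getWorldMatrix(): number {
    if (this.dirty) {
      const parentWorld = this.parent ? this.parent.getWorldMatrix() : 1;
      this.cachedWorld = parentWorld * this.local; // local combined with parent world
      this.dirty = false;
    }
    return this.cachedWorld;
  }
}
```

Repeated calls to getWorldMatrix on a clean transform return the cached value without touching any ancestor.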
Along with the Transform comes the need to handle the GPU resources associated with it. Initially I had a single buffer that held the spinning cube’s matrix. It was tempting to just turn this single buffer into an array of buffers, but there are other, better options.
Instead I have added a TransformBufferPool. This preallocates an array big enough to store n matrices (1000 by default) and stores a bind group for each; the bind group is how we tell the shader what offset the specific object being rendered has within this larger array of matrices.
Doing it this way is certainly more efficient when updating all matrices in one go, though it doesn’t support partial updating. In my mind we’d only ever be drawing a small fraction of a larger scene, so I don’t expect this buffer to be too large.
If it wasn’t clear, a nice thing about this approach is that the bind groups aren’t tied to specific transforms. It’s just a pool to be used by n-number of transforms that may come and go during runtime.
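A CPU-side sketch of the pool idea, with plain slot offsets standing in for the real bind groups (all names here are illustrative, not the actual TransformBufferPool API):

```typescript
// Pool sketch: one large Float32Array holds N 4x4 matrices back to back.
// Transforms borrow slots (offsets) rather than owning buffers, so slots
// can be recycled as transforms come and go during runtime.
const MATRIX_FLOATS = 16; // 4x4 matrix of f32

class MiniTransformBufferPool {
  private data: Float32Array;
  private free: number[] = [];

  constructor(capacity = 1000) {
    this.data = new Float32Array(capacity * MATRIX_FLOATS);
    // Push slots in reverse so they are handed out in ascending order.
    for (let slot = capacity - 1; slot >= 0; slot--) this.free.push(slot);
  }

  acquire(): number {
    const slot = this.free.pop();
    if (slot === undefined) throw new Error("pool exhausted");
    return slot;
  }

  release(slot: number) { this.free.push(slot); }

  // Write a matrix into a slot; in the real thing the matching bind group
  // would hand the byte offset (slot * 64) to the shader.
  write(slot: number, matrix: Float32Array) {
    this.data.set(matrix, slot * MATRIX_FLOATS);
  }

  byteOffset(slot: number): number { return slot * MATRIX_FLOATS * 4; }
}
```

A single upload of `data` then updates every live matrix in one go, which is the efficiency win described above.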
Geometry
Decoupling the scene from the renderer meant decoupling the render resources from objects in the scene. Until now, I had a mesh interface that had GPU resources and functions that created/converted them from geometry descriptions, all intertwined.
Now StaticRectangle has Geometry, which provides vertex positions, optional UVs/normals/colours, and indices.
There is also now a Shape which is created by the SceneGraph, the shape stores the geometry and the material assigned to it.
Then we have the Mesh2, which is all the GPU resources that the renderer needs to render it. You create a Mesh from the geometry data, which creates the various buffers.
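The split described above might look roughly like this in type form. The field names and the helper are illustrative assumptions, not the real StaticRectangle definitions:

```typescript
// Geometry is pure data, Shape is the scene-side pairing of geometry and
// material, and Mesh owns the renderer-side resources built from Geometry.
interface Geometry {
  positions: Float32Array;   // required vertex positions (xyz triples)
  uvs?: Float32Array;        // optional attributes
  normals?: Float32Array;
  colours?: Float32Array;
  indices?: Uint16Array;
}

interface Shape {
  id: string;
  geometry: Geometry;
  materialName?: string;     // assigned via the SceneGraph
}

// A hypothetical helper computing what a Mesh constructor would need to
// know before creating vertex/index buffers from the Geometry.
function describeMesh(g: Geometry): { vertexCount: number; indexCount: number } {
  return {
    vertexCount: g.positions.length / 3,
    indexCount: g.indices ? g.indices.length : g.positions.length / 3,
  };
}
```

The point of the split is that nothing in Geometry or Shape knows about the GPU, so the scene can be simulated, serialized or tested without a renderer present.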
Materials
Similarly to decoupling geometry, we need a way to assign a material to an object in the scene without carrying all the rendering specific resources.
So now we have a Material which we assign to a Shape in the SceneGraph. The Material has the name of the Shader we’re using, which is used when creating the MaterialResources: specifically, the layouts the shader is expecting given the material’s type (e.g. an unlit material may expect just an albedo map, whereas a PBR material may expect albedo, normal, roughness and metallic maps).
At the moment, the material system is mostly scaffolding for future work. My current ‘default material’ isn’t doing anything; the shader isn’t looking at those parameters (though they do exist even now), it’s simply outputting the vertex colours defined by the example cube Geometry.
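The layout-per-material-type idea can be sketched as a small lookup. The type names and the slot lists are illustrative, borrowed from the unlit/PBR example above:

```typescript
// Sketch: a Material names its shader, and the resource layout the shader
// expects is derived from the material's type.
type MaterialType = "unlit" | "pbr";

interface Material {
  name: string;
  shader: string;      // name of the shader to pair with at render time
  type: MaterialType;
}

// What texture bindings the MaterialResources would need to lay out for
// each material type (the unlit/PBR split from the example above).
function expectedTextureSlots(type: MaterialType): string[] {
  switch (type) {
    case "unlit":
      return ["albedo"];
    case "pbr":
      return ["albedo", "normal", "roughness", "metallic"];
  }
}
```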
Resource Manager
The ResourceManager has been updated. Originally I had intended it to be used for loading/supplying external resources (shaders, textures etc.), but it is now serving GPU resources (layouts and pipelines for meshes and materials etc.). In retrospect, this makes a lot of sense. It’s quite satisfying when things settle into place and pieces fit together.
At some point it may make sense to split the ResourceManager into external and GPU resources. I haven’t got that far yet, but I’m sure I’ll mention it in the future.
StaticRectangle
~ opens the terminal
Mouse look with left/right mouse button3
w, a, s, d to move
p to pause
All of that work, and what do we have to show for it?
Well, more cubes of course!
Hopefully by now you are getting the idea that, while visual progress may be slow, these updates build on each other. It actually took me 3 days to go from having a single spinning cube to having a single spinning cube with more infrastructure, and I spent about 8 hours fixing bugs I had introduced in the process. Fairly soon I’ll be making large visual improvements without the huge infrastructure changes (I hope).
I’m not expecting scenes to be created by code this way, at least not generally. The idea would be to serialize / deserialize the scene graph into a ‘scene description’ that can be loaded just like any other resource. ↩︎
This is not a great name, I just needed a different name than Geometry or Shape. I could (maybe should) have just called it GeometryResources like I did with Material and MaterialResources. I can always rename it though… ↩︎
To highlight the importance of QA: in the previous post I had the ctrl key enable mouse look, but on some browsers ctrl + w (mouse look and move left) closed the browser 🤦 ↩︎