Before we move on, in my last post I took a detour from the lighting/texturing pipeline to fix some issues in my Transform implementation.
The debug scene was helpful for finding and fixing issues with transformations, and it inadvertently highlighted a different problem.
Multi-Sample Anti-Aliasing
Without a technique known as ‘anti-aliasing’, edges can look jagged/stair-stepped because each pixel is either fully inside or fully outside the triangle you’re rendering, with no in-between.

No Anti-aliasing, jagged/stepped edge
There are different approaches you can take to address this issue. A reasonable1 method is ‘Multi-Sample Anti-Aliasing’ (MSAA), which takes multiple samples per pixel and averages them for the final result, while only running the fragment shader once per pixel.

MSAA, pixels on triangle boundaries get a smoother gradient
Addressing this in WebGPU is not particularly laborious, though you do have to make sure your attachments/pipelines are synchronised properly.
Attachments (depth, color, stencil etc.) are created with a sample count of 1 by default. If you instead create them with a sampleCount of 4 (WebGPU only officially supports 1 or 4 samples, though some browsers/hardware may support more) and create an intermediary ‘multi-sample’ attachment, the color attachment (the result of your render) becomes a resolveTarget, with the view pointing at the multi-sample texture. That, along with creating your pipelines with a matching multisample value, is all WebGPU needs to do MSAA.
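Sketched out, the wiring looks roughly like this (device, context, canvas and the base pipeline descriptor are assumed to already exist; this is a sketch, not my renderer code verbatim):
// Rough sketch of the MSAA wiring, not the actual renderer code
const sampleCount = 4; // WebGPU only guarantees 1 or 4
const presentationFormat = navigator.gpu.getPreferredCanvasFormat();

// Intermediary multi-sample color texture, same size/format as the canvas
const msaaTexture = device.createTexture({
  size: [canvas.width, canvas.height],
  sampleCount,
  format: presentationFormat,
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

// The depth attachment has to use the same sampleCount
const depthTexture = device.createTexture({
  size: [canvas.width, canvas.height],
  sampleCount,
  format: "depth24plus",
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});

// Pipelines need a matching multisample count
const pipeline = device.createRenderPipeline({
  ...basePipelineDescriptor, // layout, vertex, fragment, etc. as before
  multisample: { count: sampleCount },
});

// At render time: draw into the multi-sample view, resolve into the canvas
const renderPassDescriptor: GPURenderPassDescriptor = {
  colorAttachments: [{
    view: msaaTexture.createView(),                          // multi-sample render target
    resolveTarget: context.getCurrentTexture().createView(), // single-sample resolved result
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
    loadOp: "clear",
    storeOp: "discard", // the resolve target keeps the result, the MSAA texture can be thrown away
  }],
  depthStencilAttachment: {
    view: depthTexture.createView(),
    depthClearValue: 1.0,
    depthLoadOp: "clear",
    depthStoreOp: "store",
  },
};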
So, that’s what I did.
Though in typical me-fashion, I didn’t do just that.
I wanted to be able to enable/disable MSAA on the fly for performance testing at a later stage.
That led me to more architecture restructuring/refactoring.
In fact, it led to so much refactoring that I ended up with a full first-class citizen Camera that is now part of the SceneGraph2, more on that in a moment.
- m toggles MSAA
- ~ opens the terminal
- Mouse look with left/right mouse buttons
- w, a, s, d to move
- p to pause
Lights
Lights are part of the SceneGraph and extend the Transform class.
I currently have two light types implemented; a spot light is the next logical addition.
- Point light: an attenuated light that radiates outward from a point in space.
- Directional light: light emitted as parallel rays in a single direction with no falloff. Typically used to emulate light sources very far away, like the sun. Very fast and cheap to render (if you’re not casting shadows from it…)
// Created like this
const scene: SceneGraph = new SceneGraph();
const point = scene.createLight(LightType.Point, "myPointLight");
point.setColor(new Color(1.0, 0.5, 0.5));
point.setIntensity(50);
point.setRange(45);
I had to make some changes to the Transform class for lights so I can get properties in world space (e.g. Transform.getTranslate(Space.World)). Previously, world-space transformation matrices were uploaded wholesale, so getting piecemeal world-space transformations wasn’t necessary until now (point lights disregard orientation, and directional lights disregard position).
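For reference, the pieces each light type actually needs can be read straight out of the node’s world matrix; a minimal sketch, assuming column-major 4x4 matrices (the function names here are illustrative, not the real Transform API):
// Sketch only: pulling world-space pieces out of a column-major 4x4 matrix.
// Point lights only care about the world-space position (the translation column).
function getWorldTranslate(world: Float32Array): [number, number, number] {
  return [world[12], world[13], world[14]];
}

// Directional lights only care about the world-space direction; here it is the
// transform's negative z-axis, normalised, with the translation ignored entirely.
function getWorldDirection(world: Float32Array): [number, number, number] {
  const x = -world[8], y = -world[9], z = -world[10];
  const len = Math.hypot(x, y, z) || 1;
  return [x / len, y / len, z / len];
}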
Camera
The Camera node also extends Transform and adds the ability to extract the perspective (projection) and view matrices, which the CameraUniformBuffer then writes to the device at render time.
// Created like this
const scene: SceneGraph = new SceneGraph();
const camera = scene.createCamera("main");
// Camera Properties
const fov = 45; // in degrees
const aspect = canvas.width / canvas.height; // needs to be updated when the canvas resizes
const near = 0.1; // clipping plane
const far = 1000.0; // clipping plane
camera.setPerspective(fov, aspect, near, far);
// Camera Transforms
camera.setTranslate(-10, 28, 72);
// Looking at the origin
camera.lookAt([0, 0, 0]);
// Technically you can have many cameras in a scene, this sets the currently active one
scene.setRenderCamera(camera);
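For what it’s worth, setPerspective ultimately boils down to building a projection matrix for WebGPU’s [0, 1] clip-space depth range (unlike OpenGL’s [-1, 1]). A sketch of the shape of it; the uniform names and layout at the end are an assumption rather than the real CameraUniformBuffer code:
// Column-major perspective (projection) matrix for WebGPU's [0, 1] depth range
function perspective(fovDegrees: number, aspect: number, near: number, far: number): Float32Array {
  const f = 1 / Math.tan((fovDegrees * Math.PI) / 360); // cot(fov / 2), fov given in degrees
  const rangeInv = 1 / (near - far);
  return new Float32Array([
    f / aspect, 0, 0,                      0,
    0,          f, 0,                      0,
    0,          0, far * rangeInv,        -1,
    0,          0, near * far * rangeInv,  0,
  ]);
}

// At render time the uniform buffer gets the matrices, something along these lines:
device.queue.writeBuffer(cameraUniformBuffer, 0, projectionMatrix);  // bytes 0-63
device.queue.writeBuffer(cameraUniformBuffer, 64, viewMatrix);       // view = inverse of the camera's world matrix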
Changes to the Camera came hand in hand with changes to the CameraController. There is now damping so it doesn’t stop abruptly as soon as the key press ends, and there are two new camera controls.
- space bar: fly up
- ctrl: drop down
These two additions make traversing larger scenes easier.
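The damping itself is nothing fancy: frame-rate independent exponential decay of the controller’s velocity, roughly along these lines (all of the names below are made up for the sketch):
// Sketch of frame-rate independent damping in the CameraController (illustrative names)
declare const camera: { translate(x: number, y: number, z: number): void };
const keys = new Set<string>();        // keys currently held down
const velocity = { x: 0, y: 0, z: 0 }; // controller velocity
const ACCELERATION = 60;               // units per second^2 while a key is held
const DAMPING = 8;                     // higher = comes to rest sooner

function update(dt: number): void {
  // Held keys push on the velocity...
  if (keys.has("w")) velocity.z -= ACCELERATION * dt;
  if (keys.has("s")) velocity.z += ACCELERATION * dt;

  // ...and the velocity decays exponentially once the key is released,
  // so the camera glides to a stop instead of snapping.
  const decay = Math.exp(-DAMPING * dt);
  velocity.x *= decay;
  velocity.y *= decay;
  velocity.z *= decay;

  camera.translate(velocity.x * dt, velocity.y * dt, velocity.z * dt);
}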
Action!
- m toggles MSAA
- t toggles ‘ancillary’ shapes
- ~ opens the terminal
- Mouse look with left/right mouse buttons
- w, a, s, d to move
- space bar to fly up
- ctrl to drop down
- p to pause
I have added the concept of ‘ancillary’ shapes: shapes that are ignored by the renderer unless it is explicitly told otherwise. The general idea is that these shapes would be used as ‘locators’, visual representations of where otherwise-invisible objects (lights etc.) are, mostly for debugging purposes.
As an example, in the scene above I have added an axis mesh to the two lights that you can toggle on and off with t.
The one oriented with its z-axis pointing towards the skull is a greenish directional light, and the other is a white point light.
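In code the idea looks something like this (the mesh/locator methods and the render call below are illustrative rather than the actual API):
// Sketch of the 'ancillary' idea, with illustrative method names
const scene: SceneGraph = new SceneGraph();
const point = scene.createLight(LightType.Point, "myPointLight");

// An axis mesh parented to the light so it follows the light around,
// flagged as ancillary so the renderer skips it by default.
const locator = scene.createMesh(MeshType.Axis, "pointLightLocator");
locator.setParent(point);
locator.setAncillary(true);

// Rendering ancillary shapes is opt-in, e.g. behind a debug toggle
renderer.render(scene, { includeAncillary: showLocators });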
I was hoping to create nice locator shapes for the lights drawn as lines for this post, but as it turns out drawing lines is hard.
Reasonable in terms of the performance/benefit tradeoff. Other, more expensive options like super-sampling (SSAA) run the shader per sample rather than per pixel, which means you can address all aliasing issues (textures, geometry etc.), but the cost is usually too high, certainly for StaticRectangle. ↩︎
I find it quite amazing how the simple addition of a new concept can snowball to impact seemingly unrelated sub-systems. It’s why I sometimes find estimating development tasks difficult. ↩︎