Getting Started with luma.gl
I recently tried going through the tutorials on the luma.gl docs page, but version 9 was a major change, and the tutorials haven’t yet been updated. This is my attempt at recording what I did to get them running. I’m new to luma, so I might get some things wrong, and I encourage you to let me know if I do. Otherwise, I hope you’ll find this useful and that it saves you some of the headaches I’ve gone through in recent weeks.
Also: note that I’m only using WebGL here, not WebGPU, although I am planning on adding that.
Prerequisites
For this tutorial we’ll be using:
- luma.gl (obviously)
- pnpm for managing packages (but you can substitute npm if you’d like)
- vite for the development server
Getting the tutorial project set up
First, let’s make sure that we have a solid vite starter project:
pnpm create vite luma-gl-tutorial --template vanilla-ts
Then we cd into the directory and install the required dependencies:
cd luma-gl-tutorial
pnpm i @luma.gl/core @luma.gl/engine @luma.gl/webgl
Now we can start up the dev server with:
pnpm run dev
and we should see a running application on port 5173.
Next we’ll replace the contents of src/main.ts (the entry file the vanilla-ts template creates) to render a triangle:
import {
  AnimationLoopTemplate,
  AnimationProps,
  Model,
  makeAnimationLoop,
  Geometry,
} from "@luma.gl/engine";
import { luma } from "@luma.gl/core";
import { webgl2Adapter } from "@luma.gl/webgl";
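
// Vertex shader (GLSL ES 3.00): emits each vertex's clip-space position and forwards its color.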
const vs = `\
#version 300 es
precision highp float;
in vec2 position;
in vec3 color;
out vec3 vColor;
void main(void) {
  vColor = color;
  gl_Position = vec4(position, 0.0, 1.0);
}
`.trim();
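
// Fragment shader: colors each pixel with the interpolated vertex color.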
const fs = `\
#version 300 es
precision highp float;
in vec3 vColor;
out vec4 fragColor;
void main(void) {
  fragColor = vec4(vColor, 1.0);
}
`.trim();
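
// Three 2D vertex positions (x, y) and one RGB color per vertex, flattened into typed arrays.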
const positions = new Float32Array([-0.5, -0.5, 0.5, -0.5, 0.0, 0.5]);
const colors = new Float32Array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]);
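
// Our AnimationLoopTemplate subclass: builds the Model once, then draws it every frame.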
export default class AppAnimationLoopTemplate extends AnimationLoopTemplate {
  model: Model;

  constructor({ device }: AnimationProps) {
    super();
    this.model = new Model(device, {
      vs,
      fs,
      geometry: new Geometry({
        topology: "triangle-list",
        attributes: {
          position: {
            size: 2,
            value: positions,
          },
          color: {
            size: 3,
            value: colors,
          },
        },
      }),
    });
  }
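
  // Called every frame: begin a render pass that clears to white, then draw the triangle.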
  onRender({ device }: AnimationProps): void {
    const renderPass = device.beginRenderPass({
      clearColor: [1, 1, 1, 1],
    });
    this.model.draw(renderPass);
    renderPass.end();
  }

  onFinalize(): void {
    this.model.destroy();
  }
}
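
// Create a WebGL 2 device (with its own canvas to render into), then start the animation loop.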
const device = luma.createDevice({
  adapters: [webgl2Adapter],
  createCanvasContext: true,
});
const animationLoop = makeAnimationLoop(AppAnimationLoopTemplate, {
  device,
});
animationLoop.start();
Here’s an example of what that code will draw when it runs:
We’ll go into a little more depth on that code in a moment, but to start, let’s define some terms:
- Buffer: A segment of memory allocated on the GPU.
- Attribute: A view of a buffer as a specific data type, for instance 32-bit floating-point numbers.
- Vertex Shader: A shader program that runs on the GPU for each vertex in a geometry. It takes in vertex attributes (like position and color) and outputs transformed vertex data.
- Fragment Shader: A shader program that runs on the GPU for each pixel in a rendered image. It takes in interpolated vertex data and outputs a color for that pixel.
- Geometry: A collection of vertices and indices that define the shape of an object.
- Model: A class that combines vertex and fragment shaders with a geometry to draw an object on the screen.
These terms are roughly ordered from least abstract to most abstract. Any graphics programmer would recognize terms like buffer, attribute, vertex shader, and fragment shader, and would probably use geometry and model in much the same way. But Geometry and Model are actual classes in luma.gl that serve as helpful higher-level abstractions, and so they differ from similar classes in other libraries like three.js.
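To make the least abstract of those terms concrete: luma.gl’s Device lets you allocate a buffer yourself. We won’t need to do this in this tutorial, since Model handles it for us, but a minimal sketch (assuming v9’s device.createBuffer; the helper name is my own) looks like this:

import type { Device } from "@luma.gl/core";

// Allocate a GPU buffer and copy the triangle's vertex positions into it.
// Model does the equivalent of this for us under the hood.
function createPositionBuffer(device: Device) {
  const data = new Float32Array([-0.5, -0.5, 0.5, -0.5, 0.0, 0.5]);
  return device.createBuffer({ data });
}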
One of the main challenges that every graphics programming library has to address is getting data from CPU land (or you might think of it as JS land) to GPU land efficiently. In our example, luma.gl’s Model class handles this for us. We allocate two Float32Arrays in JS land:
const positions = new Float32Array([-0.5, -0.5, 0.5, -0.5, 0.0, 0.5]);
const colors = new Float32Array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]);
And then we reference those arrays when we construct the Model, which tells luma.gl that we’re going to need them copied to the GPU for our rendering to work. Notice that our arrays have no structure to them. If we were working with that data in plain JS, we would almost certainly represent it this way:
const positions = [
  { x: -0.5, y: -0.5 },
  { x: 0.5, y: -0.5 },
  { x: 0.0, y: 0.5 },
];
const colors = [
  { r: 1.0, g: 0.0, b: 0.0 },
  { r: 0.0, g: 1.0, b: 0.0 },
  { r: 0.0, g: 0.0, b: 1.0 },
];
But we can’t upload objects like that straight to the GPU. So instead we flatten them into their most primitive form, and we include a contract about how the data should be made available to the shader programs.
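To illustrate that flattening, here’s a hypothetical helper (not part of luma.gl; the name and signature are my own) that packs the object form into the typed array the GPU expects:

function flattenPositions(points: { x: number; y: number }[]): Float32Array {
  const out = new Float32Array(points.length * 2);
  points.forEach((p, i) => {
    out[i * 2] = p.x;     // x component of vertex i
    out[i * 2 + 1] = p.y; // y component of vertex i
  });
  return out;
}
// flattenPositions([{x: -0.5, y: -0.5}, {x: 0.5, y: -0.5}, {x: 0.0, y: 0.5}])
// produces the same Float32Array we wrote by hand above.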
So when we define our position attribute:
position: {
  size: 2,
  value: positions,
},
Notice that we’ve explicitly included a size property, which tells luma.gl how many components each element in the array has. In this case, each element is a 2D vector, so we set size to 2. Similarly for the color attribute:
color: {
  size: 3,
  value: colors,
},
And in our vertex shader we declare the position and color attributes:
in vec2 position;
in vec3 color;
And in case it isn’t clear, vec2 has to correspond to size: 2 above, and vec3 has to correspond to size: 3 above, or else we’ll get an error when we try to run our shader program.