WGSL Shaders


Use.GPU has a powerful WGSL shader linker. It lets you compose shaders functionally, with closures, using syntactically correct WGSL. It provides a real module system with per-module scope.

The shader linker is very intentionally a stand-alone library, with no dependencies on the rest of the run-time. It has a minimal API surface and a small footprint.

Every shader is built this way. Unlike most 3D frameworks, there is no difference between built-in shaders and your own. There is no string injection or code hook system, and no privileged code in shader land. Just lots of small shader modules, used à la carte.

For example, here's a shader function for adjusting the exposure of an image:

@link fn getGain() -> f32;

@export fn adjustExposure(color: vec4<f32>) -> vec4<f32> {
  var rgb = color.rgb * getGain();
  return vec4<f32>(rgb, color.a);
}

In JS, you have access to all the same APIs, hooks and types as the core library. Shader modules can be imported directly:

import { adjustExposure } from './exposure.wgsl';

They are used as props on reactive components, which allows for high- and low-level code to mingle and combine very snugly.

The linking process is highly optimized, done in a single linear pass over each module. The tree of modules that it links is built incrementally by the reactive components around it, starting from pre-parsed bundles.

The rest of this guide will assume some pre-existing familiarity with shaders.


The shader linker is typically used either inline, or through reactive Live hooks.

  • Inline is great for so-called "kernels" and for quick prototyping.

  • Hooks allow you to build and memoize complex shaders.

This reactivity covers changes to the data, the shader code and the render pipeline. If a component prop changes somewhere, the response could be as minimal as updating a single constant in GPU memory, or as thorough as linking an entire new shader from scratch.

In the browser/driver, pipeline and shader compilation times are pretty slow, so shaders and pipelines are cached. This is done using structural hashes, which the shader linker computes as you compose. It also uses instance hashes so you can use multiple unique instances of the same module in the same shader.
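As a sketch of the caching idea, consider a pipeline cache keyed by structural hash. Everything here is a hypothetical stand-in, not Use.GPU internals: `makePipelineCache`, the string hashes, and the `Pipeline` type are all illustrative, and the expensive compile step is simulated by a counter.

```typescript
// Hypothetical sketch: a pipeline cache keyed by structural hash.
// Two shaders with the same structure share one compiled pipeline,
// even if the data bound to them differs.
type Pipeline = { id: number };

const makePipelineCache = () => {
  const cache = new Map<string, Pipeline>();
  let compiles = 0;

  const getPipeline = (structuralHash: string): Pipeline => {
    let pipeline = cache.get(structuralHash);
    if (!pipeline) {
      pipeline = { id: ++compiles }; // stands in for an expensive compile
      cache.set(structuralHash, pipeline);
    }
    return pipeline;
  };

  return { getPipeline, getCompileCount: () => compiles };
};
```

Rebinding data to a structurally identical shader is then a cache hit, which is why fully declarative shader composition can still be responsive.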

This arrangement lets you create shaders on the fly, in a fully declarative and reactive way, while being very responsive.

It works very well once warmed up. The downside is the initial start up time, which is unavoidable in WebGPU—at least for now: shaders have to be supplied and parsed as text.

To mitigate this, the pipelines are compiled async, so content will pop in as it is ready to be displayed. This works similarly to React Suspense, though the specifics are completely different. This way, Use.GPU loads like a typical webpage, with background, fonts and images appearing on their own schedule.

Once all shader and pipeline archetypes are known, transitions are instant and coherent.

Inline WGSL

You can use wgsl template literals to inline WGSL code anywhere, and pass it directly to components:

import { FullScreen } from '@use-gpu/workbench';
import { wgsl } from '@use-gpu/shader/wgsl';

const shader = wgsl`
  @link fn getTexture(uv: vec2<f32>) -> vec4<f32> {}

  fn main(uv: vec2<f32>) -> vec4<f32> {
    // A simple "Milkdrop"-style feedback effect
    let advectedUV = (uv - 0.5) * 0.99 + 0.5;

    return getTexture(advectedUV);
  }
`;

return (
  <FullScreen shader={shader} />
);

This <FullScreen> component will use the given shader to produce a feedback effect, when placed in a <RenderToTexture>.

The @link annotation signifies a function that is linked from elsewhere at run-time. The <FullScreen> component provides the getTexture link as part of its API.

The wgsl template string returns a parsed ShaderModule, and is equivalent to a call to loadModuleWithCache. The module is ready to be linked to other shaders. There are also f32, i32 and u32 helpers.

Inline WGSL is ideal for "ShaderToy"-style snippets and experiments. It is also good for custom materials, such as @<ShaderFlatMaterial> and @<ShaderLitMaterial>. These take one or more shader modules and only require you to provide small functions, without rewriting an entire fragment shader.

Data getters

A geometry layer has certain input attributes, like position, color, size, etc. A vertex shader has to read these to generate the right output vertices.

Rather than use classic vertex attributes, the vertex shaders define getter functions index => value, like so:

// Circle size for <PointLayer>
@optional @link fn getSize(index: u32) -> f32 { return 1.0; }

The index counts over the vertices and/or items being rendered.

This means the shader is fully agnostic about where getSize comes from. It could be:

  • Null - Default implementation (1.0)
  • Constant - Replaced with a uniform + getter
  • Array (StorageSource) - Replaced with a storage binding + getter
  • Function (LambdaSource) - Replaced with another shader function recursively

For a texture instead of a data array, the index is of type vec#<..>. In that case you use:

  • Texture (TextureSource) - Replaced with texture / sampler binding + getter
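Leaving textures aside, the dispatch over source kinds can be illustrated with a CPU-side analogy in TypeScript. This is purely illustrative: `toGetter` is a hypothetical name, and the real linker generates WGSL bindings rather than JS closures.

```typescript
// Hypothetical sketch: turn whatever was passed as a prop into a
// uniform index => value getter, mirroring the four cases above.
type Getter<T> = (index: number) => T;

const toGetter = (source: unknown, defaultValue: number): Getter<number> => {
  if (source == null) return () => defaultValue;          // Null: default implementation
  if (typeof source === 'number') return () => source;    // Constant: uniform + getter
  if (Array.isArray(source)) return (i) => source[i];     // Array: storage + getter
  if (typeof source === 'function')                       // Function: another shader fn
    return source as Getter<number>;
  throw new Error('Unsupported source type');
};
```

The vertex shader only ever sees the getter; where the value comes from is decided entirely by what the component was given.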

The linker will generate the code needed to paper over the differences, e.g.:

struct _VT_Type {
  _VT_1_getSize: f32,
}
@group(1) @binding(0) var<uniform> _VT_Uniform: _VT_Type;

fn _VT_1_getSize(a: u32) -> f32 {
  return _VT_Uniform._VT_1_getSize;
}

Note that you never need to write this sort of cryptic, eye-watering WGSL in Use.GPU yourself. All the prefixes are auto-generated.

Auto bindings

You can also @link naked bindings such as var<storage> or var foo: texture_2d<f32> directly, without a getter function. This is mainly used with compute shaders that need both reading and writing.

In all cases, bindings will be grouped into two bind groups, static and volatile:

  • Static: Binding changes infrequently. Its referenced contents can still change frame-to-frame.
  • Volatile: Binding changes frame-to-frame, e.g. as part of a front/back buffer flipping arrangement.

The static bind group is not expected to change, and it requires a component re-render to respond to changes.

The volatile bind group instead is evaluated just-in-time, every frame. Each volatile draw call internally has a small LRU cache, based on the volatility of all sources used, to ensure it never thrashes under cyclic use.
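A minimal sketch of such an LRU cache, assuming a Map-based implementation; the keying by source volatility is simplified to plain strings here, and `makeLRU` is a hypothetical name rather than the actual internal:

```typescript
// Hypothetical sketch of a small LRU cache: with a capacity sized to the
// set of volatile bindings in rotation (e.g. front/back buffers), cycling
// through them never evicts an entry still in use.
const makeLRU = <V>(capacity: number) => {
  const cache = new Map<string, V>();
  return {
    get: (key: string): V | undefined => {
      const value = cache.get(key);
      if (value !== undefined) {
        cache.delete(key);      // re-insert to mark as most recently used
        cache.set(key, value);
      }
      return value;
    },
    set: (key: string, value: V) => {
      if (cache.has(key)) cache.delete(key);
      else if (cache.size >= capacity) {
        cache.delete(cache.keys().next().value!); // evict least recently used
      }
      cache.set(key, value);
    },
  };
};
```

A Map is used because it iterates in insertion order, which makes "least recently used" simply the first key.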



useShaderRef

As a convention, the data props come in pairs: a singular uniform (size) vs an array-based plural (sizes).

  • The plural is for arrays and other GPU-based data. These are passed by reference, as a handle, which is good.
  • But singular uniforms are usually passed by value, e.g. a number. This means props change and components re-render whenever the value does.

To avoid this needless re-rendering, we box singular values into a React-style Ref. This is a wrapper {current: T}.

But we don't want to put this burden on the component user. They should still be able to pass in plain numbers, colors, etc. freely.

The useShaderRef hook makes this easy for the component creator. It will join a size and sizes into one getSize. It prefers the plural if it is non-null, and falls back to the singular value otherwise:

import { useShaderRef } from '@use-gpu/workbench';

type Props = {
  size: number,
  sizes: ShaderSource,
};

const getSize: Ref<number> | ShaderSource = useShaderRef(props.size, props.sizes);
const boundShader = useBoundShader(...., [getSize]);
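The preference logic itself can be sketched as follows. `resolveSource` is a hypothetical name for illustration; the real hook also keeps the ref identity stable across renders, which this sketch does not attempt.

```typescript
// Hypothetical sketch: prefer the plural (GPU-based) source if non-null,
// otherwise box the singular value into a React-style { current } ref.
type Ref<T> = { current: T };

const resolveSource = <T, S>(
  singular: T,
  plural: S | null | undefined,
): S | Ref<T> =>
  plural != null ? plural : { current: singular };
```

Because the singular value is boxed, updating `current` later does not change the prop identity, so no re-render is needed to push a new constant to the GPU.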


useBoundShader

This hook binds data to a shader. It's a light-weight operation: actually linking shaders is an explicit, separate step, done at the very end.

The attributes to bind are all the @linkable functions, in the order they appear in the shader code. Your .wgsl remains the single source of truth for its own interface.

Inside a component:

import { useBoundShader } from '@use-gpu/workbench';

const getSize = ...;
const boundShader = useBoundShader(myShader, [getSize]);

Here, boundShader is a new ShaderModule, which can be used anywhere: array, texture or constant, doesn't matter as long as the value types match.

The shader linker can also auto-cast compatible source types, e.g. from a single f32 source to a vec4<f32> result, using the standard practice of filling in (..., 0, 0, 1).
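The widening rule can be illustrated on the CPU side. `castToVec4` is a hypothetical helper for illustration only; the linker emits equivalent WGSL casts.

```typescript
// Hypothetical sketch of the (..., 0, 0, 1) widening rule: missing
// components are filled with 0, except w, which defaults to 1.
const castToVec4 = (value: number[]): number[] => {
  const defaults = [0, 0, 0, 1];
  return defaults.map((d, i) => (i < value.length ? value[i] : d));
};
```

The (0, 0, 0, 1) defaults are the standard convention: they leave a color opaque and a position homogeneous.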


useBoundSource

This hook is the first half of useBoundShader in solo form. It provides a getter for one data source, turned directly into a bare shader module, without linking it to an existing shader.

You must specify the function signature to bind yourself, as:

const SIZE_BINDING = {
  name: 'getSize',
  format: 'f32',
  args: ['u32'],
};

const sizeSource = useBoundSource(SIZE_BINDING, getSize);

The result can be passed to e.g. castTo or chainTo to build up simple shader chains in a "point-free" style.
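As an analogy for this point-free style, here is the same idea with plain TypeScript functions: each step transforms a getter into a new getter, composed without naming intermediate values. The names `chain`, `scale` and `offset` are illustrative; the real castTo / chainTo operate on ShaderModules at the WGSL level.

```typescript
// Illustrative point-free chaining: getter-to-getter transforms,
// composed left to right.
type Getter = (i: number) => number;

const chain =
  (...fns: ((g: Getter) => Getter)[]) =>
  (base: Getter): Getter =>
    fns.reduce((g, f) => f(g), base);

const scale = (s: number) => (g: Getter): Getter => (i) => g(i) * s;
const offset = (o: number) => (g: Getter): Getter => (i) => g(i) + o;
```

For example, `chain(scale(2), offset(1))` applied to the identity getter yields `i => i * 2 + 1`, without any intermediate source ever being named.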


useLinkedShader

This hook links a fully bound shader. It uses the shader's structural hash to effectively memoize the result, even if the bound sources change.

In most cases however, you do not need to use this, as this happens automatically inside <DrawCall> (render) or <Dispatch> (compute).

To link a pair of vertex/fragment shaders, use:

const {
  shader,
  uniforms,
  constants,
  bindings,
  volatiles,
  entries,
} = useLinkedShader(
  [vertexShader, fragmentShader],
);
The shader is a list of ShaderModuleDescriptor, which holds the final code, entry point and hash.

The other values describe the final manifest of the linked shader:

  • uniforms are grouped together, to be put into a single UBO
  • constants holds the latest values of the uniforms to be uploaded
  • bindings holds all ordinary storage and texture bindings
  • volatiles holds all volatile storage and texture bindings (binding may change every frame)
  • entries defines the pipeline's bind group layout

Turning this into an actual usable render pipeline is somewhat involved, due to the nature of WebGPU. See the source of @<DrawCall> for how this is done.

WGSL Extensions

The shader linker implements its extensions mainly through custom @attributes.

The only exception is the use import syntax, which is an amalgam of Rust and TS:

use 'module/path'::{ symbol, ... };
use 'module/path'::{ symbol as symbol, ... };

Module paths are resolved by whatever JS/TS packing solution is in use.

The shader linker contains a canonical transpiler, which can emit both CommonJS and ES modules. The included wgsl-loader has a webpack, node and rollup variant.



In summary, the shader linker lets you:

  • Import .wgsl functions, types or vars statically from other WGSL code
  • Import .wgsl symbols statically from TypeScript
  • Parse inline or user-supplied WGSL code in TypeScript at run-time
  • Link shader functions at run-time
  • Bind data to shader modules
  • Auto-generate numbered bindings + getter
  • Infer types of base shaders based on the links
  • Polyfill gaps in the spec (e.g. u8 storage buffer)

The specifics are documented in the README.

For historical reasons, it can link GLSL too. Both languages mirror the same API, but each uses its own native types (e.g. f32 vs float), and the GLSL version uses #pragmas instead of @attributes.

WGSL Structs

You can define structs as WGSL types and then import them as symbols in TS:

@export struct Light {
  position: vec4<f32>,
  normal: vec4<f32>,
  tangent: vec4<f32>,
  size: vec4<f32>,
  color: vec4<f32>,
  intensity: f32,
  kind: i32,
}

import { Light } from '@use-gpu/wgsl/use/types.wgsl';

Light is a shader module, not a TypeScript type.

You can use it as the format of <ShaderData>, similar to passing fields to <Data>. It produces a single StorageSource of the same format, preserving the array-of-structs layout.

Call bundleToAttribute on it to get back a UniformAttribute for the type.

Type Inference

The linker can infer types recursively from links. You can use this to duck-type structs, allowing any type to be substituted that makes the code compile.

Given the following link and code:

@infer type T;
@link fn getVertex(vertexIndex: u32, instanceIndex: u32) -> @infer(T) T {};

fn main(...) {
  let vertex: T = getVertex(v, i);
  let pos = vertex.position;
  // ...
}

The struct type T will be inferred from the link's return type, and can be used elsewhere in the shader snippet. Any type T that contains a suitable position field will work.

If you only pass along T without doing anything with it, then it is effectively an any type.

Here it lets e.g. a lighting system carry along arbitrary metadata for different material types, which are linked as functions:

@infer type T;
@link fn applyMaterial(..., materialProps: T) -> @infer(T) T {};

fn applyLights(..., materialProps: T) {
  // Loop over lights
  for (...) {
    // ...
    applyMaterial(..., materialProps);
  }
}
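This WGSL inference is analogous to structural typing with generics in TypeScript, where any T with a suitable position field is accepted and extra fields ride along untouched. The names below (`Vec4`, `getPosition`) are illustrative only:

```typescript
// Illustrative analogy: a generic constrained on the one field actually
// used. Any struct with a matching position field duck-types in, and
// arbitrary extra metadata passes through unchanged.
type Vec4 = [number, number, number, number];

const getPosition = <T extends { position: Vec4 }>(vertex: T): Vec4 =>
  vertex.position;
```

Just as in the shader case, the caller's concrete type carries whatever extra fields it likes; only the field the code touches is constrained.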

Faux Pre-Processor

Unlike GLSL, WGSL has no #define. For the most part this is a non-issue, as run-time bindings are auto-generated.

Nevertheless, there is an escape hatch, for two reasons.

  1. If you really want to hand-roll bind groups and bindings, there should be nothing stopping you, and it should be as pleasant as possible.

  2. In a few rare cases, such as the view projection, there is no real reason to write it in an auto-bound way, because views are global.

So you can still write manual bindings like this:

struct ViewUniforms {
  projectionMatrix: mat4x4<f32>,
  viewMatrix: mat4x4<f32>,
  viewPosition: vec4<f32>,
}

@export @group(VIEW) @binding(VIEW) var<uniform> viewUniforms: ViewUniforms;

The @attributes contain a placeholder VIEW, which can be filled in just-in-time. This way, even hand-rolled bindings can be made reusable.

You define e.g. {'@group(VIEW)': '@group(0)'} when you link the shader, as if it were a constant. Instead of defining a global let a = b, it substitutes the first attribute with the second everywhere in the code, across module boundaries.
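A naive sketch of that substitution step, assuming a plain string replace across the module's code; the hypothetical `substituteAttributes` stands in for the linker's actual define handling:

```typescript
// Hypothetical sketch: replace every occurrence of each placeholder
// attribute with its concrete value, just-in-time at link time.
const substituteAttributes = (
  code: string,
  defines: Record<string, string>,
): string =>
  Object.entries(defines).reduce(
    (out, [from, to]) => out.split(from).join(to),
    code,
  );
```

Because the substitution is textual and applied across module boundaries, even hand-rolled bindings stay reusable: each consumer picks the group and binding indices at link time.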