Post-processing is about adding effects to the final image (the render). The technique is mostly used in filmmaking, but we can do it in WebGL too.
Post-processing can be subtle, slightly improving the image, or it can create huge effects.
Here are some examples of what you can use post-processing for:
- Depth of field
- God rays
- Motion blur
- Glitch effect
- Color variations
- Reflections and refractions
We will use the same setup as the Realistic Model Render lesson but with the Damaged Helmet model by Leonardo Carrion. It's a popular model with many details and good textures that should go well with our post-processing.
How it works
Most of the time, post-processing works the same way.
Instead of rendering to the canvas, we do the render in what we call a render target. That render target gives us a texture very similar to a usual texture. Simply put, we render to a texture instead of to the canvas on the screen.
The term "render target" is specific to Three.js. Other contexts mostly use the word "buffer".
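As a minimal sketch of that idea (assuming a `renderer`, `scene`, and `camera` already set up as in the lesson, and with placeholder sizes), rendering to a render target in Three.js looks like this:

```javascript
import * as THREE from 'three'

// Create a render target (sizes here are placeholders)
const renderTarget = new THREE.WebGLRenderTarget(800, 600)

// Render the scene into the render target instead of the canvas
renderer.setRenderTarget(renderTarget)
renderer.render(scene, camera)

// Go back to rendering on the canvas
renderer.setRenderTarget(null)

// The result is available as a texture we can use in a material
const resultTexture = renderTarget.texture
```

We won't have to do this manually for post-processing, but it is the mechanism underneath.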
This texture is then applied to a plane facing the camera and covering the whole view. This plane uses a material with a special fragment shader that does the post-processing effect. If the post-processing effect consists of reddening the image, that fragment shader would merely multiply the red value of each pixel.
Most post-processing effects do more than just tweak the color values, but you get the idea.
In Three.js, those "effects" are called "passes". We will use that term from now on.
We can have multiple passes in our post-processing: one doing the motion blur, one doing the color changes, one doing the depth of field, etc. Because we can have multiple passes, the post-processing needs two render targets. The reason is that we can't read the texture of a render target while drawing to it at the same time. The idea is to draw in the first render target while taking the texture from the second one. At the next pass, we switch those render targets: we take the texture from the one we just drew in, and draw on the other. At the next pass, we switch them again, and again, and again. That is what we call ping-pong buffering.
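The swapping logic can be sketched in plain JavaScript, with simple objects standing in for the two render targets (Three.js handles all of this internally, so the names here are illustrative only):

```javascript
// Ping-pong buffering: each pass reads from one target and
// writes to the other, then the two targets swap roles.
function applyPasses(passes) {
  let read = { name: 'targetA', pixels: 'initial render' }
  let write = { name: 'targetB', pixels: null }

  for (const pass of passes) {
    // The pass reads the previous result and writes the new one
    write.pixels = pass(read.pixels)

    // Swap: what we just wrote becomes the input of the next pass
    ;[read, write] = [write, read]
  }

  // After the last pass, "read" holds the final image
  return read.pixels
}
```

Calling `applyPasses([img => img + ' + blur', img => img + ' + tint'])` returns `'initial render + blur + tint'`: each pass received the output of the previous one without ever reading and writing the same target.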
Final pass on the canvas
The last pass won't be in a render target because we can put it directly on the canvas so the user can see the final result.
In the end
All of this can seem very complicated for beginners, but, fortunately, we don't have to do it on our own.
Actually, we could have started without even explaining render targets, textures, ping-pong buffering, etc., but it's always good to understand what is really going on behind the scenes.
All we have to do is use the EffectComposer class that will handle most of the heavy lifting for us.
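As a sketch of what that looks like (assuming a `renderer`, `scene`, `camera`, and `sizes` already set up as in the lesson; the import paths may differ slightly depending on your Three.js version, and `DotScreenPass` is just one example pass shipped with Three.js):

```javascript
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js'
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js'
import { DotScreenPass } from 'three/examples/jsm/postprocessing/DotScreenPass.js'

// The composer manages the render targets and the ping-pong for us
const effectComposer = new EffectComposer(renderer)
effectComposer.setSize(sizes.width, sizes.height)

// First pass: render the scene into a render target
const renderPass = new RenderPass(scene, camera)
effectComposer.addPass(renderPass)

// Next pass: a post-processing effect applied to that texture
const dotScreenPass = new DotScreenPass()
effectComposer.addPass(dotScreenPass)

// In the animation loop, render through the composer
// instead of calling renderer.render(scene, camera)
effectComposer.render()
```

The last pass added to the composer is the one drawn to the canvas.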
How to use it
(if your browser flags the file as a threat, don't worry, it is not one)
- Unzip it
- Open your terminal and go to the unzipped folder
- Run the npm install command to install the dependencies
- Run npm run dev to launch the local server
(your browser should start automatically)
You can now start coding.
- The JS is located in
- The HTML is located in
- The CSS is located in
Any trouble? Go to the Discord server: