Immersive presentations with HTML & WebGL

While preparing for a talk I gave at Reject.js 2014, I decided that I wanted to embed my WebGL demos directly into the presentation itself, to give a more fluid and interesting talk.

If you take a quick look at the title slide, you’ll be able to see the result: full-screen 3D content running in a presentation, accessible to anyone with a WebGL-enabled browser. You can advance through the slides using the arrow keys.

WebGL presentation title slide

In this post I’ll give details of how something like this can be put together.

Making an HTML presentation

Before we can embed WebGL content into a presentation, we need a presentation. Obviously.

I’ve been using shower recently, which is a pretty nifty presentation engine that lets you create your slides in HTML, like this:

<section class="slide"><div>
  <h2>Why WebGL?</h2>
  <ul>
    <li>Accessible 3D for all</li>
    <li>Support across browsers and mobile increasing all the time</li>
    <li>Combination of JavaScript &amp; GLSL very effective</li>
  </ul>
  <p>Some examples: <a href="http://threejs.org">http://threejs.org</a></p>
  <img class="right" src="pictures/sidebar.png" alt="">
</div></section>

…and get slides out that look like this:

WebGL presentation example slide

Adding WebGL content

Now that we have our presentation up and running, we can add some WebGL. The example that I embedded is of a 3D model I built of the Safari logo.

Adding this is reasonably straightforward. In the original example, the 3D visuals are rendered into a <div> with a specific id, so all that is required to add the demo to the presentation is to load the JavaScript for the example and add a matching <div> to the slide where the embedding should take place.

However, this doesn’t scale – what we want is for many slides to have some example embedded in them, and ideally for them to share resources, so that each slide wouldn’t start with an ugly loading screen. Adding a bunch of <div>s won’t work as the WebGL code only expects to have a single place to render to, and it seems wasteful to create multiple rendering targets (one per slide) when only a single one is ever shown at a time.

Detaching and re-attaching

The solution is pretty simple: we create a single canvas element that contains our WebGL demo, and as we advance through the slides we detach it from the slide that it was on previously and attach it to the slide we are about to show.

Shower doesn’t give us events to hook into; however, we can use shower.getCurrentSlideNumber() to find the current slide number. Given that on most slides we are already running some sort of animation loop for the WebGL example, it’s no problem to check on every frame whether the slide has changed.

Shower also adds matching CSS ids to each slide, so we can grab the <section> element for the corresponding slide from JavaScript.

Combining this with some CSS selectors for picking out the canvas element from the old slide and locating the element to inject it into the new slide, we can now declaratively add a WebGL example to as many slides as necessary:

<section class="slide"><div>
  <h2>Geometry - Sphere</h2>
    <code>new THREE.SphereGeometry( 1, 32, 32 );</code><br>
    <div class="threejs-container medium">Loading...</div>
</div></section>
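
Here’s a minimal sketch of the per-frame detach and re-attach. The container class follows the markup above, but the way the current slide’s <section> is looked up and the canvas variable are assumptions for illustration, not the code from the talk:

// Sketch: move the demo's canvas into the container on the current slide.
// Assumes `canvas` is the demo's WebGL canvas and that slides can be picked
// out of the document in order (the real code uses Shower's slide ids).
var lastSlideNumber = -1;

function updateSlideCanvas( canvas ) {
  var slideNumber = shower.getCurrentSlideNumber();
  if ( slideNumber === lastSlideNumber ) { return; }
  lastSlideNumber = slideNumber;

  var slide = document.querySelectorAll( 'section.slide' )[ slideNumber ];
  var container = slide && slide.querySelector( '.threejs-container' );
  if ( !container ) { return; }

  // Detach from the old slide and attach to the slide about to be shown
  if ( canvas.parentNode ) {
    canvas.parentNode.removeChild( canvas );
  }
  container.appendChild( canvas );
}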

Matching the WebGL content to the slide

The demo I use has a basic API for interacting with it, for example, to switch to displaying the model in wireframe mode, you can do app.wireframe = true;. To synchronize the slides and the demo, we can just update the demo every time the slide changes, e.g.

  var slideNumber = shower.getCurrentSlideNumber();
  if ( slideNumber !== lastSlideNumber ) {
    // Have changed slide
    if ( slideNumber === 1 ) {
      app.wireframe = false;
    }
    lastSlideNumber = slideNumber;
  }

WebGL presentation wireframe

Styling with CSS

One gotcha I hit with sharing the WebGL canvas in this way is that the canvas container size changes between slides.

I use a library, THREE.js, to do the rendering, and it was not notified of this change, so the rendering results were off. To resolve this, I added a utility function, renderer.setContainer(), to the renderer class, which lets me update the DOM container that the WebGL canvas is a child of. Using this, the renderer can reset its size and aspect ratio.

This function is invoked whenever the slide is changed, to make sure things display correctly.
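
The post doesn’t show setContainer() itself; as a rough illustration, a standalone version might look like the following sketch (the renderer, camera and sizing policy here are assumptions):

// Sketch of a setContainer-style helper for a THREE.js setup
// (assumed names; the real function is a method added to the renderer class)
function setContainer( renderer, camera, container ) {
  // Move the WebGL canvas into the new container
  container.appendChild( renderer.domElement );

  // Resize the renderer and fix up the camera to match the new container
  var width = container.clientWidth;
  var height = container.clientHeight;
  renderer.setSize( width, height );
  camera.aspect = width / height;
  camera.updateProjectionMatrix();
}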

Hover effects

Finally, I wanted to allow users to expand a demo on a slide, from a thumbnail like this…

WebGL presentation thumbnail

…into a fullscreen version like this:

WebGL presentation full

The most intuitive way to do this is to provide a :hover CSS rule on the thumbnail, like so:

.small:hover {
    position: absolute;
    right: 2%;
    bottom: 3%;
    height: 94%;
    width: 96%;
}

This just expands the demo to fill pretty much the whole slide, whenever the user hovers over the thumbnail.

As before, the slight gotcha is that we need to update the renderer every time this happens, but this is achieved fairly simply with a listener:

var onHover = function() {
  renderer.setContainer( this );
};
container.addEventListener( 'mouseover', onHover );
container.addEventListener( 'mouseout', onHover );

That’s all

All in all I was quite happy with how well all this performed, and not having to switch tabs to show a demo mid-presentation definitely made giving the talk easier. If you’d like to use my presentation as reference, the code is up on Github.

Rendering large terrains

Today we’ll look at how to efficiently render a large terrain in 3D. We’ll be using WebGL to do this, but the techniques can be applied pretty much anywhere.

Terrain

We’ll concentrate on the vertex shader, that is, how best to position the vertices of our terrain mesh, so that it looks good up close as well as far away.

To see how this end result looks, check out the live demo. The demo was built using THREE.js, and the code is on github, if you’re interested in the details.

LOD – Level of detail

An important concept when rendering terrain is the “level of detail”. This describes how many vertices we are drawing in a particular region.

Take the terrain below: notice how the nearby mountain on the right fills a lot of the final image, while the mountains in the distance take up only a small portion of it.

Terrain

It makes sense to render nearby objects with a greater LOD, and those in the distance with a lower one. This way we don’t waste processing power computing a thousand vertices that end up in the same pixel on screen.

Flat grid

An easy way to create a terrain mesh is to simply create a plane that covers our entire terrain and sub-divide it into a uniform grid. This is pretty awful for LOD: distant features will have far too much detail, while those nearby will not have enough. As we add vertices to the uniform grid to improve the nearby features, we waste ever more of them on the distant features; conversely, reducing the vertex count to avoid wasting vertices in the distance degrades the nearby features.

Recursive tiles

A simple way to do better is to split our plane into tiles of differing sizes, but of constant vertex count. So, for example, each tile contains 64×64 vertices, but sometimes these vertices are stretched over a large, distant area, while for nearby areas the tiles are smaller.

The question remains how to arrange these tiles. There are more sophisticated methods for doing this, but we’ll stick with something simpler. Specifically, we’ll start with the area nearest to the camera and fill it with tiles of the highest resolution, say 1×1. We’ll then surround this area with a “shell” of tiles of double the size, 2×2. We’ll then add a 4×4 tile “shell”, and so on, until we have covered our map. Here’s how this looks, with each layer color-coded:

Tile shells

Notice how the central layer actually has four more tiles than the others; this is necessary, as otherwise we’d end up with a hole in the middle. This tile arrangement is actually very nice, as each additional “shell” doubles the width and height of the area covered, while only requiring a constant number of additional tiles.
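
As a rough illustration, here is how such a layout might be generated in JavaScript. This is a sketch only: the tile counts match the description above, but the function and field names are assumptions rather than the demo’s actual code.

// Sketch: build the tile layout as a list of { x, y, scale } entries, where
// (x, y) is the corner of a tile covering scale x scale units. The innermost
// area is a 4x4 block of unit tiles; every further shell is a ring of 12
// tiles at double the previous scale.
function buildTiles( numShells ) {
  var tiles = [];
  var i, j, s, scale;

  // Central 4x4 block of the smallest tiles, centred on the origin
  for ( i = -2; i < 2; i++ ) {
    for ( j = -2; j < 2; j++ ) {
      tiles.push( { x: i, y: j, scale: 1 } );
    }
  }

  // Each shell is a ring of 12 tiles, each double the size of the last
  for ( s = 0; s < numShells; s++ ) {
    scale = Math.pow( 2, s + 1 );
    for ( i = -2; i < 2; i++ ) {
      for ( j = -2; j < 2; j++ ) {
        // Skip the inner 2x2 block, which the previous layer already covers
        if ( i >= -1 && i < 1 && j >= -1 && j < 1 ) { continue; }
        tiles.push( { x: i * scale, y: j * scale, scale: scale } );
      }
    }
  }

  return tiles;
}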

When viewed from above, this arrangement doesn’t seem particularly great, as all the tiles are more or less the same distance from the camera. However, a more usual view is one where the camera is pointed at the horizon, and hence nearby tiles fill more of the final view:

Tile shells perspective

In this view it is clear that we’re already doing much better with our vertex distribution. Each “shell” is represented roughly equally in the final image, which means that our vertex density is roughly uniform across the image.

Moving around

Now that we have a suitable mesh, we need to address what will happen as the camera moves around the terrain. One idea is to keep the mesh itself static, but let the terrain data “flow” through the mesh. Imagine that each vertex is part of a cloth that is being warped to the shape of the underlying terrain. As we move around the terrain, the cloth is dragged over it, being deformed into the correct shape below it.

The advantage of this approach is that we always have the correct LOD no matter where we move, as the terrain mesh is static relative to the camera.

The problem is that as the terrain “flows” through the mesh, if the vertices are not sufficiently close together, they will not do a good job of sampling the terrain, in particular they will wobble as the terrain flows through them.

To illustrate why this happens, consider a region of the terrain where the vertex spacing is large, e.g. 1km. Imagine that the terrain we are displaying has alternating valleys and hills at this point, spaced 1km apart. As the terrain “flows” through the mesh, sometimes the vertices will all be on the hills and at other times they will all be in the valleys. Now, as this is a region that is far from the camera, we don’t care so much that the hills and valleys aren’t shown correctly, as either alone would be fine. The real issue is that the oscillation between the two creates flickering, which is visually distracting.

Terrain close-up

Grid snapping

To solve this oscillation issue, we keep the same geometry as we had before, except that we make it so that when the terrain “flows” through the mesh, the vertices snap to a grid, with spacing equal to the vertex spacing for that tile. So in the hill/valley example from before, rather than having a vertex “flow” from the top of the hill to the bottom of the valley and then back up, it is instead snapped to the nearest hill-top. By making the snap grid spaced at the same interval as the tile’s vertex spacing, we end up with a uniform grid again, except snapped to a fixed point on the terrain.

Morphing between regions

Ok, so we’ve solved one problem, but now we’re faced with another. As we’re snapping each tile according to its vertex spacing, where two tile layers meet we end up with seams, as shown below:

Terrain seams

To fix this we want to gradually blend one layer into another, so that we end up with a continuous mesh. One way to do this is to compute what we’ll call a “morph factor”, which determines how close we are to the boundary between two different LOD layers. When we are near the boundary with a layer that has a greater vertex spacing than ours, the value of the morph factor approaches 1, while when we are far away, it is 0.

We can then use this value to blend smoothly between one layer and the other. The way we do this is to calculate what the position of the vertex would be if it were snapped to its own grid and to that of the next layer, and then linearly blend the two positions according to the morph factor. Here’s some GLSL code that does just that:

// Snap to grid
float grid = uScale / TILE_RESOLUTION;
vPosition = floor(vPosition / grid) * grid;

// Morph between zoom layers
if ( vMorphFactor > 0.0 ) {
  // Get position that we would have if we were on higher level grid
  grid = 2.0 * grid;
  vec3 position2 = floor(vPosition / grid) * grid;

  // Linearly interpolate the two, depending on morph factor
  vPosition = mix(vPosition, position2, vMorphFactor);
}
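
The code above assumes vMorphFactor has already been computed in the vertex shader. The post doesn’t show that calculation, but one plausible approach (a sketch only; the names and the size of the morph region are assumptions) is to ramp the factor from 0 to 1 over the outer portion of each shell:

// Sketch: ramp a morph factor from 0 to 1 over the outer part of a shell.
// vertexDistance is the vertex's distance from the viewer, shellInner and
// shellOuter are the distances at which this shell starts and ends, and
// MORPH_REGION is the fraction of the shell over which we blend.
var MORPH_REGION = 0.3;

function morphFactor( vertexDistance, shellInner, shellOuter ) {
  var morphStart = shellOuter - MORPH_REGION * ( shellOuter - shellInner );
  var t = ( vertexDistance - morphStart ) / ( shellOuter - morphStart );
  // Clamp to [0, 1]: 0 well inside the shell, 1 right at the outer edge
  return Math.min( 1, Math.max( 0, t ) );
}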

Here’s the terrain with the areas where the morph factor is non-zero highlighted in white.

Terrain seams

As before, each tile shell is color coded. Notice how each tile becomes completely white on the boundary with the next shell – as it morphs to have the same vertex spacing as the neighbouring tile.

Live demo

To see the terrain in action, check out the demo. The resolution of the tiles has been chosen such that it is hard to notice the transitions between different levels of detail. If you look closely at a distant mountain as it comes closer, though, you should be able to spot the terrain gradually blending into a higher-vertex representation. The source is on github if you want to play around with parameters to change the look or color of the terrain.

Terrain final

Finally, some of the ideas in this post were taken from the CDLOD paper, which is well worth a read.

Working (better) with GLSL source files

This post is a follow-on from a previous post, where I detailed the workflow I had developed for working with GLSL files, as part of developing 3D content for the web. Since then, I have refined my approach so I’m posting an update.

At a high level, my approach is as follows:

  • Store shaders as separate files, with common code imported using #include statements.
  • Use a custom Require.js plugin to inject these shaders into my JavaScript when I need them.
  • Allow overriding of #define statements to allow for customization of shaders at runtime.

Terrain

The rest of this post will go into details about how each piece works, and the underlying motivation.

An example application that uses this structure can be found here: https://github.com/felixpalmer/amd-three.js.

Storing shaders in their own files

Being able to edit GLSL code in individual files is a big deal for me, as it keeps the shader code separate from the JavaScript codebase and allows me to run a validator over the code to check there are no obvious bugs.

To actually perform the code validation, I’ve created a command line tool which compiles the GLSL code and reports any errors. Using this I can easily integrate with the editor I’m using to check for bugs every time I save. For more details, see this post.

Injecting shaders with Require.js

I use Require.js to organize my code, so I needed to find a way to pull my shader code into the modules where I need it. Require.js has a text plugin, which does exactly that: you pass it the path to a file and it loads the raw content of that file, like so:

// myText.txt
Hello world!

// main.js
require( ['text!myText.txt'], function ( myText ) {
  // myText now contains "Hello world!"
} );

This is great, except I wanted to do more, so I made my own Require.js plugin which added some functionality to the above, namely #include statement support and the ability to redefine #define statements from within JavaScript.
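
For context, a Require.js plugin is just a module that exports a load() function. A stripped-down shader plugin that simply wraps the text plugin might look like the following sketch (illustrative only, not the actual plugin code):

// Minimal sketch of a Require.js loader plugin that wraps the text plugin
// (illustrative only -- the real plugin adds #include and #define handling)
define( [ "text" ], function ( text ) {
  return {
    load: function ( name, parentRequire, onload, config ) {
      // Let the text plugin fetch the raw GLSL source for us
      text.load( name, parentRequire, function ( source ) {
        // Hand back an object exposing the source as `.value`
        onload( { value: source } );
      }, config );
    }
  };
} );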

#include statements

Once my shaders started to grow, it became difficult to work with them as large monolithic files, especially when different shaders would share common code. To remedy this, I implemented support for the #include statement in both the Require.js shader plugin and the command line validator. Usage is as you’d expect:

// shift.glsl
vec3 shift(vec3 p) {
    return p + vec3(500.0, 0, 0);
}

// main.vert
#include shift.glsl
void main() {
  // Example usage of included file, see shift.glsl for function definition
  vec3 shiftedPosition = shift(position);
  gl_Position = projectionMatrix * modelViewMatrix * vec4(shiftedPosition, 1.0);
}
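
Under the hood, resolving the includes can be as simple as splicing file contents into the source recursively. Here’s a rough sketch of that idea (the loadFile helper, which returns the contents of the named file as a string, is an assumption):

// Sketch of a recursive #include resolver. loadFile is an assumed helper
// that returns the contents of the named shader file as a string.
function resolveIncludes( source, loadFile ) {
  return source.replace( /^#include (.+)$/gm, function ( match, filename ) {
    var included = loadFile( filename.trim() );
    // Recurse, so that included files can themselves use #include
    return resolveIncludes( included, loadFile );
  } );
}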

Redefining #defines

As mentioned above, I’ve rolled my own plugin for injecting shaders, which works similarly to the text plugin. As well as supporting #include statements, it allows you to modify #define statements. I’ve found this useful when I want to use a shader in different contexts, but with slightly different parameters, without having to pass these values in as uniforms. It can also be used to conditionally compile portions of the GLSL.

Here’s how it’s used:

define( ["shader!simple.frag", "shader!simple.vert"], function ( simpleFrag, simpleVert ) {
  simpleFrag.define( "faceColor", "vec3(1.0, 0, 0)" );

  // To actually get the text content of the shader, use myShader.value
  var material = new THREE.ShaderMaterial( {
    vertexShader: simpleVert.value,
    fragmentShader: simpleFrag.value
  });
} );
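
A plausible implementation of define() is a straightforward text substitution on the shader source before it is handed to THREE.js. Here’s a sketch (the real plugin may well differ):

// Sketch: rewrite (or prepend) a #define in the shader source held in `value`
function redefine( shader, key, value ) {
  var pattern = new RegExp( "#define " + key + " .*" );
  if ( pattern.test( shader.value ) ) {
    // Replace the existing definition
    shader.value = shader.value.replace( pattern, "#define " + key + " " + value );
  } else {
    // No existing definition, so prepend one
    shader.value = "#define " + key + " " + value + "\n" + shader.value;
  }
  return shader;
}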

Example

To see how all this fits together, check out the example project.

Creating a Geospatial database on Amazon RDS

Last year, Amazon added Postgres support to their cloud relational database offering, RDS. The good folks at Amazon were kind enough to include support for some popular extensions, in particular PostGIS, which adds geospatial abilities to Postgres, so queries like “find me 100 users nearest to London” are simple and efficient to perform.

In this post I’ll go through setting up an RDS instance with Postgres + PostGIS, and importing some sample data to show that it all works. Here’s RDS serving up the country boundary and populated places for the UK:

UK towns

Note that everything I’ll do here is covered in Amazon’s Free Tier, so you can try this out at no cost.

AWS basics

For this post, I’m going to assume some familiarity with AWS, in particular that you already have an EC2 instance running, that you can use to connect to the RDS instance. If you do not have an EC2 instance running yet, set one up, choosing a region local to you.

Creating an RDS instance

Creating an RDS instance is pretty straightforward:

  • Go to the RDS console, select ‘Launch a DB Instance’ and select Postgres
    Select Postgres
  • You’ll be asked whether to go with a Free or a Production system; I went with Free
  • Configure your instance as follows (this will keep you within the Free Tier)
    Select Postgres
  • In the additional config, choose a name for your database, and then place it in the same VPC and availability zone as the EC2 you created earlier. I opted to make my instance not Publicly Accessible. By placing the RDS instance in the same VPC as your EC2 instance you will be able to access it from there.
  • Finally configure your backups (I kept the defaults) and launch the instance.

Your instance will now launch.

Connecting to your RDS instance

Go to the RDS console to view your instance (if you can’t see it double check you are in the right AWS region). If you select your instance, you’ll be told the Endpoint for the database, which will look something like mydb.0123456789abcd.eu-west-1.rds.amazonaws.com:5432, note this down.

Log in to your EC2 box and verify that you can talk to your RDS instance by invoking:

telnet mydb.0123456789abcd.eu-west-1.rds.amazonaws.com 5432

If all is well, you should see something like the following:

Trying 172.0.0.1...
Connected to mydb.0123456789abcd.eu-west-1.rds.amazonaws.com.
Escape character is '^]'.

If you can’t connect, double check that the Security Group for the RDS instance allows connections from your EC2 instance on port 5432.

Using your EC2 box as a proxy

Great, so now you can talk to the RDS instance from your EC2 box, but not from anywhere else, in particular not from your local machine. To enable access from your local machine, you can set up an SSH tunnel. Invoke the following in a terminal on the local machine:

ssh my.ec2.instance.amazonaws.com -L 5432:mydb.0123456789abcd.eu-west-1.rds.amazonaws.com:5432

Now you can just use localhost:5432 on your local machine to connect directly to the Postgres RDS instance. Whether you use this tunnel or the EC2 machine for the rest of the setup is up to you.

Installing PostGIS

To actually connect to Postgres you’ll need to install psql; on Ubuntu this is simply sudo apt-get install postgresql-client-9.1.

Then connect using the following command:

psql --host mydb.0123456789abcd.eu-west-1.rds.amazonaws.com --port 5432 --username user --dbname mydb

Or if using the SSH tunnel:

psql --host localhost --port 5432 --username user --dbname mydb

When prompted, enter your password, and you should be in. Installing PostGIS is a breeze: just type this into the psql prompt:

CREATE EXTENSION postgis;
CREATE EXTENSION postgis_topology;

Adding data to the database

For our source of data we’ll use the boundaries of world countries from Natural Earth. However you can use pretty much any shape file, so you can pick another one of the datasets from Natural Earth, like rivers or place names.

To get the data into Postgres, we’ll use the shp2pgsql tool. If you already have PostGIS installed on your EC2 box or local machine you’ll have this already; otherwise you’ll need to install it, on Ubuntu with sudo apt-get install postgis.

Then to download the data, convert it and populate the database use:

curl -O http://www.nacis.org/naturalearth/10m/cultural/ne_10m_admin_0_countries.zip
unzip ne_10m_admin_0_countries.zip
shp2pgsql -s 900913 ne_10m_admin_0_countries.shp countries mydb > countries.sql
psql --host mydb.0123456789abcd.eu-west-1.rds.amazonaws.com --port 5432 --username user --dbname mydb --file countries.sql 

Be sure to use the correct database name in the shp2pgsql command, rather than mydb.

To verify that the import worked, enter this into the psql prompt:

SELECT ST_AsGeoJson(the_geom) from countries LIMIT 1;

You should get back JSON describing the shape of a country:

{"type":"MultiPolygon","coordinates":[[[[-69.9969376289999,12.577582098],[-69.9363907539999,12.5317243510001], ...

Visualizing data

A neat way to visualize the data is using a program like QGIS. With this installed, you can connect to the RDS database directly from the program and visually see what is there.

When the dataset that we use above is imported, it looks like this:

World

The performance is quite slow compared to using a local database, so this is more for sanity checks than anything else.

Overall, I found the whole setup pretty painless, definitely simpler than setting up a local Postgres database on my Mac. So far, I haven’t taxed the system much, so I can’t talk much about performance. If anyone is running Postgres on RDS in production and can talk to this, I’d love to hear from you.

Automatically validating GLSL files

If you are doing anything THREE.js/WebGL related, sooner or later you are going to start spending a significant amount of your time working in GLSL, rather than in JavaScript.

In a previous post I detailed how I work with GLSL files, specifically loading them into the app itself in a convenient way.

One notable missing feature was automatically validating the shaders before they are used in the browser.

This post will detail how you can do this directly from your editor, to help you spot stupid mistakes while editing, rather than only being alerted when testing in the browser.

Frustration

For me, it was a huge frustration to make a change in a GLSL file, reload my app in the browser and then, after waiting for it to load, be told (by a pretty obscure message) that I’d missed a semicolon in my code.

Originally, I was using the GLSL Compiler, which alerted me when I made silly mistakes, such as missing a semicolon. However, it could not spot all types of errors. In particular I was forever doing things like this:

float r = 10;

Only to be greeted by the following message in the browser (usually obscured by a dump of my entire shader):

cannot convert from 'const mediump int' to 'float'

For a while, I thought that I would have to live with this, as Googling around didn’t seem to yield anything. The breakthrough came when I found a plugin for Sublime Text, GL-Shader-Validator. While I don’t use Sublime, the guts of this plugin were interesting: namely, it uses ANGLE to actually compile the code and detect any errors.

ANGLE

What the ANGLE executables let you do is pass in a fragment or vertex shader file, and it will try to compile it for you, reporting back any errors. E.g.

Script

Never have I been so happy to see a list of errors!

The output format wasn’t quite what I wanted, so I have wrapped this in a Python script, which gives me easier-to-parse output:

Script

THREE.js Integration

Now, the above works great for self-contained shaders; however, I’m working with THREE.js, which passes helpful variables into the shaders for me, like cameraPosition or modelViewMatrix. The trouble is, ANGLE doesn’t know anything about these variables, and hence whenever I use them, they are reported as errors. Not great.

To solve this I’ve included prefix files that the glsl-validate.py script will automatically prepend to any shader it passes to the ANGLE compiler. This basically mocks out the things that we expect to have passed in.

This approach could easily be extended to support other libraries, if you feel like doing so, please submit a pull request!

#includes

Extending the library integration idea, I’ve been working on adding a rudimentary form of #include to my shaders, so I can effectively share code between them. The implementation is pretty simple: whenever the validator encounters a statement like #include shader.glsl, it replaces the include statement with the contents of the included file.

Editor integration

I use vim for editing and set it up so that whenever a file is saved, glsl-validate.py is run over the file, reporting any errors. In the future I’ll probably wrap this up into a Syntastic plugin, so that the error messages appear directly in vim.

Performance

So far, I’ve been very pleased with the accuracy of detected errors; I’ve yet to hit an occasion where my shaders wouldn’t compile in the browser after passing through the validator.

Script

The project is up on Github, hopefully others will find it useful.

WebGL tombstone – bump mapping

This post is part of a series on how to deform a 3D mesh (in this case, a tombstone) by drawing onto a 2D Canvas. To start from the beginning, click here.

In the previous post we looked at how to calculate the lighting for our tombstone in the fragment shader. While a vast improvement over a model with no lighting, the tombstone looked “faceted”, that is, it was apparent that it was made of a finite set of faces. In this post, we’ll look at how we can inspect the depth texture directly in the fragment shader and thus make our lighting model look even better.

To see the difference this makes, you can play around with the live demo. Use the ‘Toggle light’ button to switch between: no lighting, simple lighting, and the improved lighting covered in this post.

Here is a comparison of the two lighting schemes:
Quarter

Notice in particular how the one on the left renders the details, like the letters, badly. While both models use the same shape for the mesh (and so have the same number of vertices), the image on the right inspects the depth texture when rendering the lighting, rather than basing the lighting on the position of the vertices.

Better shading

Recall that in the previous post, calculating the lighting was largely dependent on calculating the normal of the surface. To do this, we differentiated the surface, like so:

vec3 getNormal() {
  // Differentiate the position vector
  vec3 dPositiondx = dFdx(vPosition);
  vec3 dPositiondy = dFdy(vPosition);

  // The normal is the cross product of the differentials
  return normalize(cross(dPositiondx, dPositiondy));
}

The issue with this is that our surface isn’t perfectly smooth; it is composed of faces defined by our mesh of vertices, and as such the computed normal is the same across each face. This leads to the reflection of light being uniform across a face.

However, we can do better. We have the higher resolution depth map texture at our disposal, and we can use this to calculate a more accurate value for our normal. The principle is similar to the code above, except that we correct the displacement vectors dPositiondx and dPositiondy by the value of the depth texture at that point:

vec3 getNormal() {
  // Differentiate the position vector
  vec3 dPositiondx = dFdx(vPosition);
  vec3 dPositiondy = dFdy(vPosition);
  float depth = texture2D(uCarveTexture, vUv).a;
  float dDepthdx = dFdx(depth);
  float dDepthdy = dFdy(depth);
  dPositiondx -= 10.0 * dDepthdx * vNormal;
  dPositiondy -= 10.0 * dDepthdy * vNormal;

  // The normal is the cross product of the differentials
  return normalize(cross(dPositiondx, dPositiondy));
}

Once we have the depth derivatives, dDepthdx and dDepthdy, we modify the displacement vectors accordingly. You’ll notice that we use vNormal to do this, which is passed in from the vertex shader. Without the normal vector, a depth value wouldn’t be much use, as we wouldn’t know the direction in which the depth should modify the position vectors.

I don’t know about you, but I find it pretty cool that this works. We’re essentially creating a variable, depth by doing a texture lookup and then telling GLSL to compute the derivative of this variable between fragments.

Here’s another example, again with the first image using the old technique. Notice how the reflections of the light are much higher resolution on the second image.
Face

Bump mapping

What is interesting about the above approach is that we have made the light calculation independent of the position of the vertices. In fact, we can now use a much lower vertex count and still get a reasonable result. Here is the same model twice, except the version on the left has 40000 vertices on the side being carved, while that on the right has 4 (one at each corner).
Face bump mapped

Notice that the lighting remains the same; the only difference is that the right model is lacking depth (easiest to see on the eyes of the model).

Using a texture in this manner is known as bump-mapping, and as the name suggests, is best suited to reasonably smooth surfaces, with small deformations, or bumps. At this level, there isn’t much point in having lots of vertices, as the perspective isn’t affected by these minor deformations. In the demo, you can toggle between a low vertex model and a high vertex model by using the ‘Toggle vertices’ button to see the difference.

For example, a bump map consisting of noise can give the surface the appearance of having lots of tiny holes.
Bump map

With such a depth map, it’s worth trying out the different lighting models. Because of the small feature size, the dents will only show up when using this post’s lighting model.
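
If you want to try this yourself, a noise depth map of this kind can be generated directly on a 2D canvas. Here’s a rough sketch; the canvas and context handling is illustrative rather than the demo’s actual code:

// Sketch: fill a 2D canvas with small random alpha values to act as a noise
// bump map. The carve shader above reads depth from the alpha channel.
function drawNoise( ctx, width, height ) {
  var image = ctx.getImageData( 0, 0, width, height );
  for ( var i = 0; i < image.data.length; i += 4 ) {
    // Keep the alpha small so the bumps stay subtle
    image.data[ i + 3 ] = Math.floor( 32 * Math.random() );
  }
  ctx.putImageData( image, 0, 0 );
}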

WebGL Tombstone – lighting model

In the previous post we looked at how to deform a 3D mesh by using a depth map drawn onto a 2D canvas. In particular, we’d draw a pattern onto a canvas and have that pattern “carved” out of a slab of rock. There’s a live demo here, if you haven’t seen it already.

Last time, our model didn’t quite look right: while it was possible to perceive the depth of the carving, the depth didn’t look natural. This was because rather than applying any sort of lighting, we just colored the deeper parts of the stone with a darker color. Today we’ll look at how we can do better, by introducing lighting into our scene.

Here’s a demonstration of the difference a little light gives, with the “depth-darkness” method on the left, and the lighting method on the right:

Rune

Lighting model

To calculate how our object should look with a light present, we will use the Blinn-Phong lighting model. Our lighting model will have 3 components:

  • Ambient – this is just the flat colour of the object; it doesn’t take the light location into account.
  • Diffuse – this is dependent on the angle of the incident light. The diffuse component will be highest when the normal of a surface points in the direction of the light source.
  • Specular – this is dependent on the angle of the incident light and the position of the camera. This is the component for the light that is reflected, as if the material were a mirror.

Here are 3 renders of a slab, the first with ambient light, the second with ambient & diffuse, and finally the third with ambient, diffuse & specular included:

Phong

The ambient component we already had last time, so we just need to implement diffuse and specular components.

It’s worth playing with the demo to see how the different components behave. In particular, the specular light will change as the camera position is moved, while the diffuse light will stay the same.

Normal vector

For our lighting calculations, one quantity is very important, and that is the normal vector at the point on the surface we are rendering. Recall that we used it to displace the surface in the previous post. However, by deforming the surface in the vertex shader, we changed the normals, and as such we cannot use the normal that THREE.js passes in for our calculations; we will have to recalculate it. We’ll calculate it in the fragment shader, as it is relatively simple to do – although, as we’ll see, not without drawbacks.

What follows is some vector algebra, don’t worry if this isn’t 100% clear, as long as you understand what the surface normal is, the rest of this post will still make sense.

One way to obtain a normal of a surface is to take two vectors that lie in the plane of the surface, and take their cross product. Thus, if we can find two such vectors, we’ll have our normal.

Furthermore, let’s say that our surface is defined by some function, say p(x, y), where x and y are any two parameters that can be used to parametrise the surface. What these parameters are isn’t so important, just that by varying them, p will return a set of vector locations that define our surface. E.g. a flat xy plane might have p return the following:

p(0, 0) = {0, 0, 0}
p(5, 0) = {5, 0, 0}
p(5, 5) = {5, 5, 0}
p(0, 5) = {0, 5, 0}

A key insight is that we can get two vectors that lie in this plane by taking the derivative of p with respect to x and y, respectively. With this in hand we can use an extension to GLSL which does exactly this, allowing us to calculate the derivatives of any varying quantity with respect to x and y, which in this case are the screen coordinates. Note that it does not matter that these screen coordinates are not x and y in 3D space; the main thing is that as we vary them, p changes.

Ok, enough vector algebra, here’s how all this fits together into a function in GLSL:

#extension GL_OES_standard_derivatives : enable
varying vec3 vPosition;

vec3 getNormal() {
  // Differentiate the position vector
  vec3 dPositiondx = dFdx(vPosition);
  vec3 dPositiondy = dFdy(vPosition);

  // The normal is the cross product of the differentials
  return normalize(cross(dPositiondx, dPositiondy));
}

That’s it. With 3 lines of code, we can take an arbitrary mesh and obtain the normal, even if we’ve modified the geometry in the vertex shader.

Diffuse light

With our normal in hand, we are ready to calculate the level of diffuse light in our fragment shader. This is given by the dot product between the normal and the direction of the light. The dot product basically tells us how in line with each other these vectors are. Here are a couple of situations:

  • 1 means they point in the same direction
  • 0 means they are at right angles
  • -1 means they point in the opposite direction

We can say straight away that for values of 0 or less, we won’t draw any diffuse light, as here the light is shining from behind the surface. For values greater than 0, we will draw an amount of diffuse light proportional to the dot product. Thus our fragment shader becomes:

void main() {
  vec4 color = texture2D(uTexture, vUv);
  vec4 dark = vec4(0, 0, 0, 1.0);
  vec3 normal = getNormal();

  // Mix in diffuse light
  float diffuse = dot(normalize(uLight - vPosition), normal);
  diffuse = max(0.0, diffuse);
  color = mix(dark, color, 0.1 + 0.9 * diffuse);

  gl_FragColor = vec4(color);
}

So, first we get the ambient color, by looking up the relevant pixel in our stone texture. Then we calculate the diffuse amount, capping off values smaller than 0 to 0. We then mix this in with our color value. Notice that even for a diffuse value of 0, we keep a bit of the ambient color, so that our object doesn’t completely disappear.

Specular light

With diffuse light under our belt, let’s tackle specular. Here we’re interested in the similarity (dot product) between the vector of the reflected light and the vector from the surface to the camera. It can be shown that this can be formulated in terms of a halfway vector, which is a vector that is halfway between the light direction and the camera direction. Here’s how we’d calculate it:

// Mix in specular light
vec3 halfVector = normalize(normalize(cameraPosition - vPosition) + normalize(uLight - vPosition));
float specular = dot(normal, halfVector);
specular = max(0.0, specular);
specular = pow(specular, 50.0);
color = mix(color, light, 0.5 * specular); // `light` is the light colour (a vec4), defined elsewhere in the shader

One step that I didn’t mention above was the hardness of the specular light. This dictates how sharp our reflection is. For a low value of hardness, the specular light looks much like the diffuse light, but for higher values it looks more like a shiny reflection of the light. Adding in the hardness is achieved by taking our value for the specular intensity and raising it to a power representing the hardness.

End result

Flag

To best see what the end result looks like, try out the demo, where you can toggle whether lighting is used. It’s also worth running the code yourself, as you can vary the parameters and see how the lighting changes as a result. The code is up on github.

As you can probably see from the image above, there is a downside to this lighting model, namely that our surface looks “faceted”, i.e. it is obvious that it is made up of a finite number of faces. In a way, our normal calculation is “too good”, it exactly follows what the mesh shape is.

It would look much smoother if we didn’t calculate the normal at every fragment, but rather calculated it at every vertex, and then interpolated the normal between vertices, much like we did with the depth in the previous approach. We’ll look at this in a future post.

WebGL Tombstone – part 2

In the previous post we looked at how to link up a 2D HTML canvas with an object in a 3D scene, so that when we drew on the canvas, the drawing would appear on the surface of an object in the 3D world.

Flat

Specifically, we were painting onto a tombstone. Today, we’ll look at how to write our own WebGL shaders to do something a little more interesting: carving out the rock, based on what is drawn on the canvas.

Carve

You can play around with the live demo to get a better idea of the differences, use the “Toggle carving” button to switch carving on or off.

Custom material

When we were just displaying a flat slab of rock, we used one of THREE.js’s built-in materials, to which we passed our stone texture. To get our carving working, we’ll need to create our own material, with custom vertex and fragment shaders (written in GLSL).

If you haven’t worked with GLSL before, check out this post for an introduction.

Briefly, a vertex shader is a bit of code that will be invoked on the GPU for each vertex that is drawn. It allows us to change the shape of an object. The vertex shader must set the gl_Position variable to the location at which the vertex should be drawn.

The fragment shader is invoked for each pixel that is drawn. Here we can choose what color the pixel will have. The fragment shader must set the gl_FragColor variable to the color to be drawn.

To create a simple custom material we’ll do this:

simple: new THREE.ShaderMaterial( {
  uniforms: {
    uColor: { type: "c", value: new THREE.Color( "#ff0000" ) }
  },
  vertexShader: shader.vertex.simple,
  fragmentShader: shader.fragment.simple
}),

The uniforms object is used to pass values (in this case, a solid color) into the fragment and vertex shaders, while the shaders themselves are defined by vertexShader and fragmentShader. Let’s take a look at these:

void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

This is our vertex shader, which doesn’t do anything to the vertices, apart from transform them according to the rotation and location of our object and the position of the camera. This is pretty much the simplest vertex shader you’ll use with THREE.js. The variables projectionMatrix, modelViewMatrix and position are all passed in by THREE.js.

uniform vec3 uColor;

void main() {
  gl_FragColor = vec4(uColor, 1.0);
}

This is the fragment shader, which just colors in every pixel with the same color, uColor. Here’s the result, applied to our tombstone:

Simple

Bring back the texture

This may seem like a step back: we used to have a pleasant piece of rock, while now we just have a solid block of red. Let’s bring back the texture.

The first thing we need to do is pass in the texture itself, so the fragment shader can read it. To do so, we use a sampler2D object. First we modify the uniforms hash to include:

uTexture: { type: "t", value: texture.stone1 }

…and then pull it in the fragment shader, like so:

uniform sampler2D uTexture;

varying vec2 vUv;

void main() {
  vec4 color = texture2D(uTexture, vUv);
  gl_FragColor = vec4(color);
}

The texture2D function takes a sampler2D object, in this case our stone texture, and pulls out the color value at a given location, specified by the second parameter, here vUv. Recall that the fragment shader processes each pixel individually, which is why we need to retrieve the color at a specific location in the texture.

So what is vUv, and where does it come from? vUv is a 2D vector, which describes a position in texture coordinates, u and v. These range from 0 to 1, so a value of vec2(0.5, 0.5) corresponds to the center of the texture. vUv isn’t passed into our fragment shader by default; we need to pass it in from our vertex shader. This is simple enough, as our vertex shader is passed this information in the uv parameter by THREE.js. Think of this as THREE.js telling our vertex shader how to map the texture onto our set of vertices.

To pass vUv onto the fragment shader we’ll just do:

varying vec2 vUv;

void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  vUv = uv;
}

Notice that in order to pass information from the vertex shader to the fragment shader, we used the keyword varying, while when passing in the texture we used uniform. This makes sense, as the value of vUv varies across the vertices (and thus pixels) of our object, while the texture itself is constant.

One final note: in most cases we will have more pixels to draw than we have vertices. In this case, what value of vUv will the pixels that are not at the exact position of a vertex receive? The answer is that the GPU will automatically interpolate the value of vUv based on how close we are to nearby vertices. A simplified way of thinking about this is that, if we are drawing a pixel that is halfway between two vertices, vUv will be the average of the values of vUv at each of these vertices.

Phew, that was quite a lot to cover just to display a texture, something we previously did in one line of code. But it is worth it, with this basic shader under our belt, we can start doing something interesting with our vertex and fragment shaders.

Deformations

In the last section, we got our stone texture displaying again. Now we’ll look at deforming our mesh, so that the bits of our canvas that we’ve drawn on appear deeper than those that we have not. The procedure is pretty straightforward:

  • We’ll pass in a texture that represents the state of our drawing canvas
  • For each vertex, we’ll use the uv value to retrieve the correct displacement for that location
  • We’ll modify the position of the vertex, by moving it in a direction perpendicular to the surface

We’ve covered the techniques required for the first two steps already, but what about displacing a vertex perpendicular to the surface? Helpfully, THREE.js passes in a vector, normal, which gives us exactly what we need: the direction perpendicular to the surface, also known as the normal vector. So, our vertex shader becomes:

uniform sampler2D uCarveTexture;

varying vec2 vUv;

void main() {
  // Get displacement for this vertex from carve texture
  float depth = texture2D(uCarveTexture, uv).a;
  vec3 displacedPosition = position - 10.0 * depth * normal;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(displacedPosition, 1.0);
  vUv = uv;
}

Almost reads better than when written out in English, no? Notice that when calculating the displacedPosition, GLSL understands that we are operating on 3D vectors, so we can write this pretty succinctly, correctly handling scaling a vector by a scalar, as well as vector addition.

Also note how when retrieving the depth, we only look at the alpha channel. texture2D returns a 4D RGBA vector representing the color at that point, where the alpha can then be retrieved using color.a.

Here’s how it looks:

No shadows

Hmm, not so great. If you look closely, you can see that the surface is deformed according to the “L” shape drawn on the left. However, as the color of the pixels is unchanged, it is quite hard to see, especially when looking straight on.

Depth highlights

In real life, we’d easily notice the depth, as the surface would be lighter or darker depending on the angle of the surface relative to the direction of the light, as well as the shadows cast. For now, we’ll just make the material darker based on how deep we’ve dug into it, and leave the complexities of correct lighting to a future post.

To implement this, we’ll pass the depth to the fragment shader, by assigning it to a varying variable, vDepth, and using that to modify the color of the pixels. Here’s how our fragment shader will end up looking:

uniform sampler2D uTexture;

varying float vDepth;
varying vec2 vUv;

void main() {
  vec4 color = texture2D(uTexture, vUv);
  vec4 dark = vec4(0, 0, 0, 1.0);
  color = mix(color, dark, 0.5 * vDepth);
  gl_FragColor = vec4(color);
}

No surprises here, I hope. The only new thing is the mix function, which takes two values and mixes them together, based on a weighting parameter (here 0.5 * vDepth). The effect is that for a zero vDepth, the pixel color is unchanged, while for the maximum vDepth, 1.0, the pixel is darkened by mixing the original value 50:50 with solid black.

Here’s the result, much better:

Flash

Get carving

Of course, the best way to see the effect is to play with the demo. I’ve added a couple of buttons to toggle the carving on/off, and to load an image to the canvas, if you’re not feeling artistic yourself. The code is up on github if you’d like to play around with it.

WebGL – working with GLSL source files

If you are doing anything THREE.js/WebGL related, sooner or later you are going to start spending a significant amount of your time working in GLSL, rather than in JavaScript.

This post is going to cover the workflow I have adopted, to make development faster and more enjoyable. The context here is a THREE.js app that uses Require.js to structure the code. You can see a simple example of such an app here.

EDIT: I’ve since updated my workflow, so you might want to check out the new post here.

Shaders

Shaders are programs written in GLSL, which looks very much like C, except that it has some extra functionality built-in, like vector and matrix operations, or support for textures. Here’s how some GLSL code might look:

void main() {
  float h = length(position);
  vec3 transformedPosition = transformPosition(position);
  transformedPosition = transformedPosition + vec3(1.0, 2.0, 3.0);
  // ...
}

To get the graphics card to execute this when we’re running in a WebGL-enabled browser, we need to take this entire program, as a string in JavaScript, and send it to the graphics card using the WebGL APIs, for compilation. When we’re using THREE.js, this is abstracted away from us as we create a Material; however, we still need to pass the Material the shader as a string.

Approaches

A common technique I’ve seen is for people to use <script> tags to house this code, and then use DOM methods to get at the content.

<script type="x-shader/x-vertex" id="vertShader">
  void main() {
    float h = length(position);
    vec3 transformedPosition = transformPosition(position);
    transformedPosition = transformedPosition + vec3(1.0, 2.0, 3.0);
  }
</script>

// Later in JavaScript
var vertShader = document.getElementById('vertShader').textContent;

Or another way is to directly put the shader together in JavaScript:

var vertShader = [
  "void main() {",
    "float h = length(position);",
    "vec3 transformedPosition = transformPosition(position);",
    "transformedPosition = transformedPosition + vec3(1.0, 2.0, 3.0);",
  "}"
].join("\n"),

The first method works fine, but as I’m using Require.js to modularize my code, it didn’t seem to fit to pull in content from <script> tags.

The second method works, and allows me to encapsulate the shaders into a Require.js module, however it is an absolute nightmare to edit, as everything is wrapped in double quotes and it is very easy to forget to add a trailing comma at the end of each line.

Shader compilation

To address this, I put together a simple converter, which takes shader files (ending in .frag or .vert) as inputs and combines them into a Require.js module, which I can then easily import in the rest of my application.

The advantage of writing the shaders in separate files like this, is that we get syntax highlighting, and a nicer way to organise our shader code.

Despite calling this process compilation, the GLSL code isn’t actually compiled. However, in the future I hope to expand this script so that it can also perform validation of the GLSL code, by compiling it.

Example

As an example, let’s put together a simple ShaderMaterial that we can apply to a Mesh. First we’ll create the vertex and fragment shader files in /js/shaders/, the same location that our converter is in.

// file: /js/shaders/simple.vert 
void main() {
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
}

// file: /js/shaders/simple.frag 
uniform vec3 uColor;

void main() {
  gl_FragColor = vec4(uColor, 1.0);
}

These don’t do very much: the position of each vertex is left untouched and the object is drawn as a solid color uColor, a uniform that will be passed into the shader. When we run /js/shaders/compile.py, it will produce a Require.js module /js/app/shader.js with the following content:

define( [], function() {
  return {
    vertex: {
      simple: [
        "void main() {",
        "  gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);",
        "}",
      ].join("\n"),
    },
    fragment: {
      simple: [
        "uniform vec3 uColor;",

        "void main() {",
        "  gl_FragColor = vec4(uColor, 1.0);",
        "}",
      ].join("\n"),
    },
  }
} );

This will now allow us to create a material using this shader, like so:

define( ["three", "shader"], function( THREE, shader ) {
  return {
    simple: new THREE.ShaderMaterial( {
      uniforms: {
        uColor: { type: "c", value: new THREE.Color( "#ff0000" ) }
      },
      vertexShader: shader.vertex.simple,
      fragmentShader: shader.fragment.simple
    } ),
  };
} );

To see a full working example, check out the code on github. This example doesn’t use the custom shader by default, but if you modify /js/app/app.js, you can easily load in material.simple, rather than material.grass.

Here’s how a cube looks with the simple material applied:

Red cube

Automation

I have my editor (vim) setup such that when the shader source files are updated, the compile.py script automatically runs and updates /js/app/shader.js, which means that I can forget about this intermediate file altogether, and just edit the GLSL code and have the app always be up to date in the browser. The structure (e.g. indenting) of the files is preserved so it is easy to see where errors are, if there are bugs in your GLSL code.

Future

In the future, I’d actually like my compile.py script to check for errors in the GLSL code, so I don’t have to wait for the whole app to be loaded in the browser and the GLSL shader sent to the GPU, just to be notified that I’ve missed a semicolon. I’ve had some success integrating with GLSL Unit, although it still isn’t perfect, so I’d be curious to hear what others are doing.

Finally, debugging GLSL code is pretty painful, as it runs on the GPU; however, a recent development that should make things better is the inclusion of live GLSL editing in the Firefox Developer Tools.

WebGL Tombstone – part 1

Recently, I’ve been combining 2D Canvas objects with WebGL Canvas objects, and figured it would be nice to put together a demo with some of the techniques used.

An obvious application for these techniques is an in-browser tombstone designer, that’ll let us take a nice slab of rock and draw and carve all over it. Beats me why this hasn’t been done before.

Today we’re going to look at setting up a 2D Canvas with some basic drawing capabilities, and linking it to a 3D scene. The actual carving will come in a later post, for now we’ll limit ourselves to vandalism, that is: drawing on the surface of the rock, like with a spray can. After all, they say it’s easier to destroy than to create…

Here’s something I drew earlier, while I was feeling particularly inspired:

Baz
For the impatient, here’s a live demo

Getting started

For this project, I’ve used amd-three.js as a starting point, which will set us up with a 3D scene that has a single cube:

Cube

If you want more detail on how amd-three.js works, check out this post.

To make this grassy cube a piece of rock, we just need to create a block geometry in the js/app/geometry.js file:

block: new THREE.CubeGeometry( 200, 200, 20, 10, 10, 1 )

…and add a stone texture in js/app/texture.js:

stone: THREE.ImageUtils.loadTexture( texturePath + "stone.png" )

Now, we just change the mesh that we’re creating in js/app/app.js to use these parameters, and we have ourselves a blank slab of rock:

Rock

Scribbling

Now, you’re probably itching to unleash your own creative talent, but before that can happen, we first need to look at how we can draw to a Canvas.

We’ll encapsulate all our drawing code in an object called scribbler, located at js/app/scribbler.js. When initialized, the scribbler will create its own Canvas for drawing and register some methods for capturing mouse input.

var scribbler = {
  init: function() {
    container.innerHTML = "";
    scribbler.canvas = document.createElement( 'canvas' );
    scribbler.ctx = scribbler.canvas.getContext( '2d' );
    container.appendChild( scribbler.canvas );

    // Listen for mouse events
    scribbler.canvas.addEventListener( 'mousedown', scribbler.onMouseDown, false );
    scribbler.canvas.addEventListener( 'mousemove', scribbler.onMouseMove, false );
    scribbler.canvas.addEventListener( 'mouseup', scribbler.onMouseUp, false );
  },
}

Here container is the DOM element that the drawing Canvas will be appended to, which is passed into the file using Require.js, like so:

define( ["drawing-container"], function( container ) {
  var scribbler = { //...
  }
  return scribbler;
});

So, what are these onMouse functions? Nothing too interesting, they just capture where the user clicks and drags on the Canvas, and invoke the paint function, which does the actual drawing.

onMouseDown: function( e ) {
  scribbler.drawing = true;
  scribbler.paint( e.offsetX || e.layerX, e.offsetY || e.layerY );
},
onMouseMove: function( e ) {
  if ( scribbler.drawing ) {
    scribbler.paint( e.offsetX || e.layerX, e.offsetY || e.layerY );
  }
},
onMouseUp: function( e ) {
  scribbler.drawing = false;
},

When looking at the event e, we need to first try offsetX and then layerX, as different browsers like to give this property a different name, to make the world a more interesting place.

And finally, our paint function:

paint: function( x, y ) {
  scribbler.ctx.beginPath();
  scribbler.ctx.arc( x, y, 10, 0, 2 * Math.PI, false );
  scribbler.ctx.fillStyle = "rgba(1, 255, 0, 0.2)";
  scribbler.ctx.fill();
  scribbler.ctx.closePath();
  scribbler.updated = true;
},

This will draw a circle, 10 pixels in radius, in a semitransparent garish green.

Great, so now we can draw on our Canvas:

Pig

Putting it together

We now have an excellent picture of a pig and a spinning rock, so it is tempting at this point to call it quits and end on a high note. However, that would be cowardly, so let’s push on and combine the two.

Our 3D scene currently only has one object in it, the rock. To make it look like there is something drawn on top of it, we’ll create another object the same shape as the rock, and make it ever so slightly bigger, so that it appears in front. Then we’ll apply the drawing we have on our drawing Canvas to this object as a texture, and we’ll be done.

So, in js/app/app.js, we’ll modify our scene like so:

var app = {
  baseMesh: new THREE.Mesh( geometry.block, material.stone1 ),
  drawMesh: new THREE.Mesh( geometry.block, material.scribbler ),
  init: function() {
    scene.add( app.baseMesh );
    scene.add( app.drawMesh );

    // Draw mesh is slightly larger, so that it appears in front of base mesh
    app.drawMesh.scale = new THREE.Vector3( 1.01, 1.01, 1.01 );
  }
}

Pretty simple, but to make it work we have to create a new material, material.scribbler, that will automatically pick up changes to our drawing Canvas. The material is just like the stone one we created earlier, except that it uses a different texture and is transparent:

scribbler: new THREE.MeshBasicMaterial( {
  map: texture.scribbler,
  transparent: true
} ),

To complete the linkage, we’ll need to create a texture, texture.scribbler, that is backed by the drawing Canvas, rather than a static image. When we created the drawing Canvas earlier, we assigned it to scribbler.canvas, so to get at it, all we have to do is pull the scribbler object into texture.js and use it like so:

define( ["three", "scribbler"], function( THREE, scribbler ) {
  // ...
  var scribblerTexture = new THREE.Texture( scribbler.canvas );
  scribblerTexture.needsUpdate = true;
  return {
    scribbler: scribblerTexture,
  }
} );

Great, now we have a material that is backed by our drawing Canvas. But when we run our code, our rock stays perfectly clean, no matter how much we draw. What’s wrong? Well, whenever we make a change to the drawing Canvas, we need to set the needsUpdate flag to true on the texture, otherwise it’ll continue using the old data.

One place we can do this is in the animate function in js/app/app.js, which is called on every frame.

animate: function() {
  requestAnimationFrame( app.animate );

  // Update texture based on what is on drawing canvas
  if ( scribbler.updated ) {
    texture.scribbler.needsUpdate = true;
    scribbler.updated = false;
  }
} 

Notice that we’re using the scribbler.updated flag to ensure we only update the texture if we’ve drawn something new. If you go back to the code for scribbler.paint() you’ll notice we set this to true whenever we update the Canvas.

Now we’re done. As we draw on the drawing Canvas, our scribblings will appear on the tombstone – in realtime.

Go vandalize

To check out how all the code fits together, take a look at the source on github. For a live demo, go here.

Tunnel

Also be sure to check out the next post in the series, where we’ll look at carving into the stone.