Transcript: Brandon Jones’ WebGL talk at onGameStart

by Darius Kazemi on September 23, 2011

in html5,transcript

These are my notes from Brandon Jones’ onGameStart session on WebGL. Brandon’s the author of the awesome glMatrix. Any mistakes or misinterpretations are my own. My notes are in square brackets.

[I came in a little late to this session. Brandon built a Quake 3 level renderer.] The Q3 demo had no serverside stuff. All clientside. It’s a horrible idea, not something you want to do with a real game, because a lot of calculations end up on the client that don’t have to be there, and they slow things down. But I wanted it to be a “view source” type of educational resource, so I built it that way.

To make the clientside parsing nicer, we do parsing in a Web Worker — same with the movement system. It’s a simple trace through the BSP tree in Quake where you pass off your position test vector to the Web Worker. I’m sending the raw BSP format directly to the browser — bad idea, but again, it’s for educational purposes. Normalized all the BMPs and PNGs to PNGs. Normalized textures to powers of two since it’s OpenGL ES, not OpenGL. Shaders are parsed from the Q3 material format and translated to GLSL on the fly.
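For reference, the hand-off to a Web Worker might look roughly like this — a minimal sketch, not the demo’s actual code; the file name, message shape, and traceThroughBspTree are all assumed:

```js
// main thread: ask the worker to trace a movement vector through the BSP tree
var collisionWorker = new Worker('bsp-trace.js'); // hypothetical worker script
collisionWorker.onmessage = function (event) {
  // the worker replies with the clipped end position for this frame's movement
  playerPosition = event.data.endPos; // assumed player state
};

function requestTrace(startPos, endPos) {
  collisionWorker.postMessage({ type: 'trace', start: startPos, end: endPos });
}

// bsp-trace.js (inside the worker): walk the BSP tree off the main thread
self.onmessage = function (event) {
  var msg = event.data;
  if (msg.type === 'trace') {
    var endPos = traceThroughBspTree(msg.start, msg.end); // BSP walk lives here
    self.postMessage({ endPos: endPos });
  }
};
```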

There’s no geometry culling AT ALL in the demo. We render every triangle in every frame. At the time (Chrome 6) I couldn’t find a way to do culling fast enough that it didn’t kill the frame rate. There is now. But since everything in the scene is sorted by material and we do a single draw call per material type, it’s still very effective, and I didn’t need to cull.
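A render loop in that spirit might look something like this — a sketch under assumed names (materials, the shared buffers, and a 16-bit index layout are mine), with vertex attribute setup omitted:

```js
// One draw call per material: surfaces were grouped by material at load time,
// so each material owns a contiguous range of the shared index buffer.
gl.bindBuffer(gl.ARRAY_BUFFER, levelVertexBuffer);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, levelIndexBuffer);
// (vertex attribute pointers for this buffer layout are assumed to be set up here)

for (var i = 0; i < materials.length; ++i) {
  var mat = materials[i];
  gl.useProgram(mat.shaderProgram);
  gl.bindTexture(gl.TEXTURE_2D, mat.texture);
  // offset is in bytes; indices are assumed to be Uint16, hence the * 2
  gl.drawElements(gl.TRIANGLES, mat.indexCount, gl.UNSIGNED_SHORT, mat.indexOffset * 2);
}
```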

The original game uses a test-and-fallback system for its textures. The system tries to hit the hard drive multiple times to see if a “lava” texture is “lava.png” or “lava.bmp”. We don’t want to do that on the web, so we made everything PNG. Since we’re pulling down from the internet, we put all the materials into a single file by hand to reduce HTTP requests.

The shader format was designed to do everything as multi-pass effects. We could get better performance if we took those multi-pass effects and rendered them in one go on modern hardware.

So what about the next demo? I was drawn to mobile games. They share a lot of the same limitations that we see on the web. The demo I ended up doing was the iOS version of Id Software’s RAGE. [Shows a WebGL demo of the iPhone RAGE running, shows that it uses the texture atlas technology. Video of the demo here.] RAGE is a really good example of the type of things that our early WebGL games need to be trying for. It knows its limitations and plays to its strengths. All the rendering and all the gameplay are built with mobile in mind. The controls are built to embrace the platform. The gameplay sessions are quick and gratifying.

I will make an admission: the RAGE demo is running on localhost. Pulling in all the texture atlases will not perform well on the web!

When talking about file formats for WebGL, you have two semi-competing goals: you want to download them fast, but you also want to be able to parse them fast. As far as fast downloads go, the Google Body team has done a great job with their webgl-loader project. They compress the vertex stream into UTF-8, which creates very small files that stream down to the browser very quickly. There’s a great million-triangle demo out there.

But if you want better parsing performance: we recently gained the ability to get a typed array back from an XHR request. [Holy crap!] Any model format can be broken down into a big vertex buffer, an index buffer, and some bone/material info. The big buffers usually take the most time to parse. You’re looking through a JSON file or whatever, and you have to loop and parse things individually. With the binary buffer that now comes back from the XHR request, we can get all of our vertex data and index data into the GPU using just a few subarray calls. Grab your data, bind it directly with gl.bufferData, etc. But that requires raw numbers. For non-binary data (elements that aren’t simple arrays of numbers), use JSON; it’s just better for handling that.
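Roughly, that path looks like this — a sketch, assuming a made-up binary layout (two Uint32 lengths in a header, then the vertex bytes, then the index bytes):

```js
// Fetch a binary model as an ArrayBuffer and hand the big buffers straight to the GPU.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'model.bin', true);
xhr.responseType = 'arraybuffer';

xhr.onload = function () {
  var buffer = xhr.response; // ArrayBuffer
  var header = new Uint32Array(buffer, 0, 2);
  var vertexByteLength = header[0];
  var indexByteLength = header[1];

  // Views into the same buffer -- no per-element parsing loop needed
  var vertexData = new Uint8Array(buffer, 8, vertexByteLength);
  var indexData = new Uint8Array(buffer, 8 + vertexByteLength, indexByteLength);

  var vertexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, vertexData, gl.STATIC_DRAW);

  var indexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indexData, gl.STATIC_DRAW);
};
xhr.send();
```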

It should theoretically be possible to download everything in a compressed format, unpack it to the FileSystem API in uncompressed form, and load it from there.

Please use requestAnimationFrame! Pretty please! Don’t use setTimeout. (For one thing, requestAnimationFrame pauses rendering when you’re not on that tab!)
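A minimal loop for reference (a sketch; drawScene is a stand-in for your per-frame rendering, and the vendor-prefixed names cover browsers of that era):

```js
// Prefer requestAnimationFrame: the browser stops calling you when the tab is hidden.
function requestFrame(callback) {
  var raf = window.requestAnimationFrame ||
            window.webkitRequestAnimationFrame ||
            window.mozRequestAnimationFrame;
  if (raf) {
    raf.call(window, callback);
  } else {
    window.setTimeout(callback, 1000 / 60); // last-resort fallback only
  }
}

function renderLoop() {
  requestFrame(renderLoop);
  drawScene(); // your per-frame rendering
}
requestFrame(renderLoop);
```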

For optimization, everything is pretty much the same as you have on the PC. The main difference is that the performance gap between the GPU and JavaScript is a little bigger than between, say, C++ and the GPU, so it can help a little more to push work to the GPU. However, browsers get better and better (for example, I can now cull in JavaScript just fine). Also, you want to change state as little as possible. Sort draws by material as a pre-process if possible, reduce your total number of passes, pack multiple meshes into a single buffer (see the sketch below), and draw everything that uses the same material/texture in a single call.
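As a rough illustration of the buffer-packing idea — a sketch, assuming mesh objects that expose vertices (Float32Array), indices (Uint16Array), a material, and a fixed 8-float vertex layout:

```js
// Pre-process: concatenate several meshes into one shared vertex buffer and one
// shared index buffer, remembering each mesh's index range so draws can later
// be grouped by material.
function packMeshes(meshes) {
  var totalFloats = 0, totalIndices = 0, i, j;
  for (i = 0; i < meshes.length; ++i) {
    totalFloats += meshes[i].vertices.length;
    totalIndices += meshes[i].indices.length;
  }

  var vertices = new Float32Array(totalFloats);
  var indices = new Uint16Array(totalIndices);
  var draws = [];
  var floatsPerVertex = 8; // e.g. position + normal + uv (assumed layout)
  var vOffset = 0, iOffset = 0;

  for (i = 0; i < meshes.length; ++i) {
    var mesh = meshes[i];
    vertices.set(mesh.vertices, vOffset);

    // Re-base the indices so they point into the shared vertex buffer
    var baseVertex = vOffset / floatsPerVertex;
    for (j = 0; j < mesh.indices.length; ++j) {
      indices[iOffset + j] = mesh.indices[j] + baseVertex;
    }

    draws.push({ material: mesh.material, indexOffset: iOffset, indexCount: mesh.indices.length });
    vOffset += mesh.vertices.length;
    iOffset += mesh.indices.length;
  }

  return { vertices: vertices, indices: indices, draws: draws };
}
```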

Don’t break up your draw calls based on visibility! In a lot of cases with WebGL, it’s cheaper to accept overdraw, and instead just reduce the number of draw calls you have to make.

Generally, you’re probably not going to be GPU limited. You can take advantage of spare GPU cycles instead of having a JS bottleneck. For example, if antialiasing is available on your platform, just go ahead and use it. You probably won’t lose performance. Don’t bother turning off texture filtering. Once anisotropic filtering becomes available in browsers, leave it on and crank it up. (Of course, benchmark any of this stuff, as your mileage may vary.)
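For what it’s worth, both of those are near one-liners once the platform supports them — a sketch; anisotropic filtering is exposed through the EXT_texture_filter_anisotropic extension (not widely available at the time of the talk), and texture is assumed to be an existing WebGLTexture:

```js
// Antialiasing is requested when the context is created.
var gl = canvas.getContext('webgl', { antialias: true }) ||
         canvas.getContext('experimental-webgl', { antialias: true });

// Anisotropic filtering, if the extension is exposed: crank it to the max.
var ext = gl.getExtension('EXT_texture_filter_anisotropic') ||
          gl.getExtension('WEBKIT_EXT_texture_filter_anisotropic') ||
          gl.getExtension('MOZ_EXT_texture_filter_anisotropic');
if (ext) {
  var maxAniso = gl.getParameter(ext.MAX_TEXTURE_MAX_ANISOTROPY_EXT);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameterf(gl.TEXTURE_2D, ext.TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);
}
```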

Any offline precomputation you can do, do it! Make sure your file format already presorts objects by state, for example.

When you’re talking about cross-platform support, we’re in a unique scenario: we’re trying to get things that run on desktop, iPad, iPhone, and there are competing limitations. It’s also a unique opportunity. Most of the time a standard desktop game sends down tons of resources and the machine picks at runtime how much detail to render. On the web we know what the client looks like (“you’re a mobile phone”), so we can send only the data appropriate for that platform. Don’t send anything to the client it doesn’t strictly need.

A good artist is always going to be more effective than any of the shaders that you’re ever going to have. Good art direction will make more difference in your game than all the performance in the world.

There are a couple of things that make WebGL different from standard desktop GL. Uploading large sets of data (textures, vertex buffers, etc.) to the GPU is expensive. For example, pushing a big texture to the GPU would block for a few frames because the texture was bigger than the command buffer. You need to be careful — if you’re going to push large data, do it as a preprocess or in small chunks. Bandwidth is a valuable resource! A lot of internet connections are capped; you don’t want to abuse your users’ bandwidth.
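One way to spread a big upload over several frames is to allocate the texture once and then stream it in bands of rows — a sketch, assuming an RGBA Uint8Array of pixel data and an arbitrarily chosen chunk size:

```js
// Allocate the full-size texture up front with no data...
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);

// ...then upload a band of rows per frame instead of the whole image at once.
var rowsPerChunk = 128;
var nextRow = 0;

function uploadNextChunk(pixels) { // pixels: Uint8Array of RGBA data for the whole image
  if (nextRow >= height) { return; } // done
  var rows = Math.min(rowsPerChunk, height - nextRow);
  var chunk = pixels.subarray(nextRow * width * 4, (nextRow + rows) * width * 4);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, nextRow, width, rows,
                   gl.RGBA, gl.UNSIGNED_BYTE, chunk);
  nextRow += rows;
}
```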

JS does make it easy to build inefficient data structures. Your data locality can be all over the place with even a simple tree. It can be worthwhile to store all your data in a TypedArray, which will have good data locality and you can iterate quickly. Don’t go crazy with abstraction layers (long prototype chains, callbacks, recursion). My demos are fast because the render loop is a simple for loop over every material in the game.
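A small illustration of the contrast (the names are mine):

```js
// Scattered: an array of objects, where every vertex is its own heap allocation.
var objectVerts = [{ x: 0, y: 1, z: 2 } /* , ... */];

// Contiguous: one Float32Array holding x,y,z triples back to back,
// walked with a flat, stride-based loop.
var positions = new Float32Array(vertexCount * 3);

function translateAll(dx, dy, dz) {
  for (var i = 0; i < positions.length; i += 3) {
    positions[i]     += dx;
    positions[i + 1] += dy;
    positions[i + 2] += dz;
  }
}
```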

[Ended it with a crazy demo. Showed 2Fort from Team Fortress 2, rendered in a browser at 60 FPS! It apparently can run as fast as 120 FPS.] What you’re seeing on screen is an accumulation of 1.2 million triangles for this level, approximately 200 MB of resources that get thrown to the browser. Realistically that’s not feasible for a web game, for now, but it is possible! We were talking yesterday about ‘when do we get to do hardcore games in the browser?’ I can say that rendering is not the limitation for the web. The real bottleneck is going to be getting the content to the player.

{ 7 comments }

Dariusz Siedlecki September 23, 2011 at 5:40 am

I have to agree, the whole presentation was very interesting, and when we saw the demo of TF2, my mind was literally blown. As long as a good JS developer gets to develop something in WebGL, we can do literally everything that’s possible now on normal PCs.

Btw, nice name! ;)

Lucas Rizoli September 23, 2011 at 9:47 am

Thanks for putting this up.

“The real bottleneck is going to be getting the content to the player.” I wonder what major architectural changes this will mean when moving from, say, streaming and decompressing stuff off a disc and doing so over the web. I would suspect that when developing for systems without much memory (eg. no-HDD Xbox 360s) this is already a big challenge, neh?

Won September 23, 2011 at 11:32 am

The million-triangle demo he mentioned is here:

http://webgl-loader.googlecode.com/svn/trunk/samples/happy/happy.html

Part of:

http://code.google.com/p/webgl-loader/

Love the shoutout!

Darius Kazemi September 23, 2011 at 6:02 pm

Awesome, thanks!

Michael "Z" Goddard September 23, 2011 at 12:30 pm

These demos are so awesome. Hearing about and seeing this stuff makes me so excited to be working with these new technologies. I have seen code for using xhr’s arraybuffer response type for working with stuff like Chrome’s FileSystem api but I did not think about using it as a direct method to load content into WebGL.

I also stumbled upon a video of Brandon’s Rage demo:
http://www.youtube.com/watch?v=d0S2dsuSxHw

Darius Kazemi September 23, 2011 at 6:06 pm

Glad you liked it, Z. Brandon was super impressed with your Fieldrunners work :)

Won September 23, 2011 at 6:07 pm

Agreed, Brandon’s demos are really sweet! Makes me wish WebGL had anisotropic filtering :)

Note that the WebGL loader stuff doesn’t use the ArrayBuffer stuff at all; it is similar to the thing we used to make Google Body (http://goo.gl/body), which is like 1.5M triangles/model.

The ArrayBuffer stuff was implemented after Body. In an interesting coincidence, the folks who worked on GWTQuake really wanted it.
