Months of Not Much (Resurrecting Voi)

Hello again. It’s been on the order of three months since my last post, somewhat longer than I’d hoped, but here’s a new one.

Since my last post, I’ve worked on a few small(ish) things here and there, not really finishing much, but learning quite a bit regardless. My biggest problem was that I had conflicting ideas about what I should be working on, especially once I started looking for work.

On the one hand, I wanted to work on low-level algorithmic stuff so I’d have at least one sample of good-quality C++ code to show off: something I could give to people that would build and just work.

On the other hand, I wanted to make things that looked pretty or were fun.
I felt that learning how to make things look unique, or feel like a game, was important given the kinds of games I’d like to make and the kinds of people I’d like to work with. Sure, getting good at code is important, but I want to get noticed, do things like Train Jam, and sit with the cool kids at GDC, and developing my sense of style and my ability to make stuff seems like the best way to get there.

This conflict kind of paralysed me in terms of making progress. One day I’d be into writing some parsing code, the next I’d be writing shaders and making things look nice, the day after that I’d be writing cover letters and emails to potential employers (none of whom would respond), and then I’d become discouraged and lose motivation, having not made much progress on any front.

I tried on a couple of occasions to break this cycle. Both times the basic idea was to get a team involved so I’d have an external reason to stay motivated on one particular project. I like to think I can work pretty well on my own, but it’s now been something like two or three months since university and I kind of miss the whole collaboration thing, y’know, the whole “talking to people and working towards a common goal” thing.

My first attempt was a (in hindsight, way overscoped) multiplayer team-vs-team game that combined elements of Space Engineers and Garry’s Mod. The idea was to create a game that I could whip out at a LAN party, but that had a fairly large technical aspect to it. I was going to have a system like Wiremod where you could wire components together to create complex spaceships and bases. I had a pretty solid idea of what I wanted and what needed to be done, so I made a start on my own and got pretty far. This was also an opportunity for me to play with a bunch of different libraries I’d been meaning to try, like RakNet and stb_voxel_render.

I got to the point where multiple people could join a server and place blocks, and decided to try to get more people involved. I explained my idea and showed what I had to a bunch of mates, some of whom expressed interest. “Great,” I said. I kept going for a while in the hope that at some point someone would join in, but nothing happened. Eventually I hit a brick wall and lost motivation. I did some work on a compiler for a language I thought could be used in the game, and got pretty far, but by that point the project didn’t look like it was going to happen, so I stopped in favour of something (much) smaller. I may return to the idea in future, but I need to do other stuff first.

I tried a few things in Unity (which has a semi-solid Linux client now, holy crap), made some models and wrote some shaders. This kept me going for a bit, and I learnt quite a lot, but it’s been three months and I really want to work with people now. A job would probably help here but, as I said earlier, I’d been getting zero replies from the companies and people I’d been applying to and trying to get in contact with, so I had to try something else in the meantime.

Fast forward to a week or two ago. I remembered that I’d made a game at uni that was pretty cool, that people seemed to like, and that I’d made with other people. I also remembered that we’d said we would return to it at some point. Well shit, let’s do that then.
We discussed it for a bit and decided that yes, we should totally do that. However, because the original game suffered a bit from rushing to meet deadlines, compromises, straight-up poor management and over-engineering, we decided it would probably be best to rebuild it from scratch.

When we began development we had planned to do most of the work ourselves, at least in terms of tools and graphics. Looking back, that most likely wouldn’t have worked well for us. When we pitched our plans, we were fairly quickly convinced to settle on existing software. We picked OGRE as it seemed to be the largest, most well-developed, cross-platform framework available to us. Plus we knew that Ogitor existed, so it seemed like a sweet deal. We did not know, however, that it was going to have little to no documentation past the simple stuff, or that Ogitor was going to be garbage that didn’t work. We ended up replacing Ogitor with Blender (the best decision made during the project) and just banging our heads against a wall until we figured out how the deepest, darkest features of Ogre worked or fit together. Plus, getting everything to build and work reliably across the team’s machines took WAY TOO LONG. Let me stress how bad the code is: getting portals to render the way we wanted required doing dozens of (small) string operations per frame because of how RenderQueueInvocationSequences work. We knew this was bad at the time, but we couldn’t see any other way to make it work. Just figuring out RenderQueueInvocationSequences took long enough.

Shout out to the Ogre folk: I’m not the best at documentation myself, but goddamn, Ogre is too large to not have solid documentation. It’s pretty hard to get a foothold if you’re new to it.

Now that we’ve written the terrible version, learned from it, and no longer have deadlines, we’re removing the cruft and writing the (hopefully) good version. It’s only been a week or so, but we already have a barebones scene exporter for Blender, scene loading and rendering, and portal rendering that is more advanced than the original, in far less code and written in a straightforward way. If we continue at this rate we’ll reach feature parity in no time at all.
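
For anyone curious what the stencil approach boils down to, here’s a minimal sketch (not our actual code; the Draw* helpers are stand-ins):

// Minimal stencil-portal pass. The GL calls are real;
// DrawPortalQuad/DrawScene are hypothetical helpers.
void RenderThroughPortal(const Portal& portal, const Scene& scene) {
	glEnable(GL_STENCIL_TEST);

	// 1. Mark the portal's screen area in the stencil buffer, colour/depth untouched.
	glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
	glDepthMask(GL_FALSE);
	glStencilFunc(GL_ALWAYS, 1, 0xFF);
	glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
	DrawPortalQuad(portal);

	// 2. Draw the destination scene only where the stencil was marked.
	glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
	glDepthMask(GL_TRUE);
	glClear(GL_DEPTH_BUFFER_BIT);               // fresh depth for the destination layer
	glStencilFunc(GL_EQUAL, 1, 0xFF);
	glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
	DrawScene(scene, portal.destinationCamera);

	// 3. Clear the stencil for the next portal (see the PS below for what happens when you don't).
	glClear(GL_STENCIL_BUFFER_BIT);
	glDisable(GL_STENCIL_TEST);
}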

Needless to say, I’m pretty excited to be working on Voi, and I’m hoping my excitement carries through. I’d like to try to promote it from its current status of “kinda cool thing” to actual game.

Anyway, </long post>
This is me signing off for hopefully less than three months.

PS: Here’s a cool-looking bug I had while writing the new portal rendering algorithm. I thought I was clearing the stencil buffer more often than I actually was.



It’s Been A While

It’s been a while. I figured I should probably start updating this more with the stuff I’ve been working on and what’s been going on.

To start with, I finished university. I’m yet to graduate, but I have no more classes and I appear to have passed everything, so it’s looking good. With the end of university came the initial public release of my final project, Voi. It officially started in my second-last trimester but, because of the workload from other subjects, solid work didn’t truly begin until the final trimester. The project had its ups and downs, but ultimately we (vec1) produced something interesting and different, which is what we were aiming for. We were quite surprised by its reception: we put nearly no time into advertising its existence, but it was fairly quickly picked up by article writers and video makers, one of the bigger articles being from Kill Screen. This was quite humbling, and especially surprising after some of our less-than-easy-to-watch playtest sessions.


We plan to continue working on Voi, as it is far from complete and people clearly like it. However, we (or at least I) are taking a small break to relax and do some of the things we didn’t have time for during uni. I’m starting to make my way through my massive todo list of things to learn, and I’m starting to catch up on a lot of gaming.


One of the first big things I managed to tick off that list is shadow mapping. I’ve been meaning to do this for ages, and only now have I had the time to sit down and figure it out. A basic implementation turned out to be relatively simple to get running, but I had to spend some time figuring out how to get rid of artefacts and aliasing. I’m pretty happy with how it turned out. I’m thinking decal rendering might be next, since it seems fairly similar in implementation. In any case, expect more graphics things.
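
For reference, the depth-only framebuffer at the heart of it looks roughly like this (a sketch of the standard OpenGL setup, not my exact code; the compare-mode parameters are one of the anti-aliasing tricks, since sampling through a sampler2DShadow gets you hardware-filtered comparisons):

// Depth-only FBO for the shadow pass.
GLuint shadowTex, shadowFBO;
glGenTextures(1, &shadowTex);
glBindTexture(GL_TEXTURE_2D, shadowTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 2048, 2048, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Hardware depth comparison: sampling via sampler2DShadow gives filtered results.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);

glGenFramebuffers(1, &shadowFBO);
glBindFramebuffer(GL_FRAMEBUFFER, shadowFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, shadowTex, 0);
glDrawBuffer(GL_NONE);   // no colour attachment needed
glReadBuffer(GL_NONE);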

Anyway, I’m hungry so I’m going to go and eat something. I’ve probably missed a few small things but that’s okay. That’s it. See ya.


Holiday Fun

I’ve been on holidays for about two and a half weeks now, and in a few days I’ll be back at uni, doing what I do. For the most part my holidays have been rather unexciting though I’ve been working on a few things to pass the time.

Just before the holidays started, I decided to take a look at Haskell. To most this would probably seem like an odd move: Haskell isn’t exactly the most mainstream language and, from what I’ve read, it’s never quite been able to escape the realm of fringe languages. But that hasn’t stopped Haskell aficionados from trying. I’ve heard a lot of good things about Haskell and functional programming in general, and I’ve noticed functional concepts sneaking into languages I already know (D is a prime example), so being able to understand and use those concepts effectively seems like it would be very useful.


Needless to say, Haskell is very different to anything I’ve programmed in before. Learning Haskell is, so far, a lot like learning programming from scratch; there are so many new concepts to get my head around, and the mode of thinking it requires is leagues away from what any imperative, or even stack-based, language requires. It’s quite a strange feeling, to be honest. So it was (and still is) baby steps. I set up my environment just how I like it so I didn’t feel completely lost: Sublime set up to trigger make, a makefile to compile and run Haskell programs, and a terminal on the side for messing with a Haskell REPL.

I started with basic maths, made some very simple functions, then learnt about lists and tuples and some standard functions for manipulating them. So far not so bad. I’m only just starting to understand how to use the fold* functions, but still not so bad. Then I moved on to pattern matching, guards and currying. These are a bit different to what I’m used to, but I got the hang of them pretty quickly. Then it came to writing actual programs. Everything that has anything to do with input or output is wrapped in something called the IO monad, which took some getting used to. And when I went to write my fibonacci sequence generator, I hit a bit of a wall where my regular, efficient for-loop method didn’t quite work. I first tried to write it as a recursive function, but quickly found that the way I was doing it had exponential complexity and took longer and longer the larger the element I wanted. So I went back to try to understand the standard functions a bit more, and ended up with an implementation that uses foldl and a tuple to store the accumulators.

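-- foldl walks [1..x], threading the pair (fib n, fib (n+1)) through as the accumulator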
fib x = fst $ foldl (\(a,b) _ -> (b, a+b)) (0, 1) [1..x]

I’m sure there’s a much better way to iterate over a sequence than generating a list, but this is what I came up with given what I had, and it doesn’t freeze up when I give it a large number. It’s not exactly complex, but I’m happy that I was able to understand some functional concepts enough to implement something. And despite my troubles, I can definitely see how one could use Haskell to do much more complex things.

As well as trying Haskell, I also started a small project, initially to see how far I could get with an OpenGL application without copy-pasting code from all over the place and googling tutorials. It went pretty well for the most part. I had to consult documentation a couple of times just to get the argument types and orders for some functions right (I always forget the arguments to glTexImage2D and glVertexAttribPointer), but I was able to get things rendering pretty quickly. I started with the popular red triangle and quickly moved on to cubes. I soon had a grid of cubes rendering, each with its own colour and transform matrix. To jazz it up, I rendered all the cubes twice, at different scales and with transparency enabled.
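
Since I keep forgetting them, here are the two offenders with their arguments spelled out (the Vertex struct is just a stand-in):

// glTexImage2D: target, mip level, internal format, width, height,
//               border (always 0), source format, source type, pixel data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// glVertexAttribPointer: attribute index, components, type, normalised?,
//                        stride in bytes, byte offset into the bound buffer.
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void*)offsetof(Vertex, position));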


I was pretty happy with my progress, so I decided to try to smarten it up a bit. Initially, every cube was a separate draw call with two calls to glUniform* for colour and transform, so I went down the instancing route. I created two array buffers, one for colours and one for transform matrices, and modified my `Draw` function to fill those instead of uploading uniforms. I modified my shader to accept colour and transform as attributes instead of uniforms, and finally replaced my numerous draw calls with two instanced draw calls (one for the opaque cubes and one for the transparent ones). This gave me a HUGE speed boost, as expected.
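
The interesting part of the setup is telling GL that the new attributes advance per instance rather than per vertex; roughly this (the attribute locations and buffer names are made up):

// Per-instance colour: one vec4 per instance.
glBindBuffer(GL_ARRAY_BUFFER, colourBuffer);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 4, nullptr);
glVertexAttribDivisor(1, 1);    // advance once per instance, not per vertex

// Per-instance transform: a mat4 attribute occupies four consecutive vec4 slots.
glBindBuffer(GL_ARRAY_BUFFER, transformBuffer);
for(int i = 0; i < 4; i++) {
	glEnableVertexAttribArray(2 + i);
	glVertexAttribPointer(2 + i, 4, GL_FLOAT, GL_FALSE, sizeof(float) * 16,
	                      (void*)(sizeof(float) * 4 * i));
	glVertexAttribDivisor(2 + i, 1);
}

// One call instead of one draw call per cube.
glDrawArraysInstanced(GL_TRIANGLES, 0, vertsPerCube, numCubes);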


After that I started messing with framebuffers and postprocessing. During my experimenting I finally figured out how to recover world coordinates from the depth buffer. This let me do some pretty cool things.
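
The trick, roughly, assuming glm and the same view-projection matrix that rendered the frame (a sketch, not my exact code):

#include <glm/glm.hpp>

// Reconstruct a world-space position from a depth sample. uv and depth are in [0,1].
glm::vec3 WorldFromDepth(glm::vec2 uv, float depth, const glm::mat4& viewProjection) {
	// Back to normalised device coordinates, [-1,1] on every axis.
	glm::vec4 ndc(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);

	// Undo projection and view, then the perspective divide.
	glm::vec4 world = glm::inverse(viewProjection) * ndc;
	return glm::vec3(world) / world.w;
}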

So that’s just messing with colours based on depth and time, plus a very basic outline using fwidth on depth. With a bit more messing around and some googling (I’d given up on not googling by this point), I managed to implement gaussian blur. It took some playing to get it performing all right, but here’s an example of what I managed.


I set it up so that I could perform multiple passes of the blur kernel. After some optimisation, I managed to get about 20 passes of a 5×5 gaussian kernel at a decent framerate. In the above screenshots, I also added some tinting to make it look a bit nicer. I also tried modifying the postprocessing shader to just displace fragments instead of blurring them. These are the results.
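
The multi-pass part is just two framebuffers ping-ponging, something like this (the handles and DrawFullscreenQuad are stand-ins):

// Each pass reads the previous result and writes to the other target;
// the fragment shader applies the 5x5 kernel.
glUseProgram(blurShader);
for(int pass = 0; pass < numPasses; pass++) {
	int src = pass % 2, dst = 1 - src;
	glBindFramebuffer(GL_FRAMEBUFFER, pingpongFBO[dst]);
	glBindTexture(GL_TEXTURE_2D, pingpongTex[src]);
	DrawFullscreenQuad();
}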


To bring it all together, I’d like to point out that I used Haskell to calculate the 5×5 kernel.


One Trimester Left

It’s the final week of my second-last and busiest trimester yet. Over the trimester I’ve learned and done a lot of interesting things, such as program optimisation and networking. I worked with ‘big’ data, and I wrote a couple of languages. I blogged about most of the things I did at the beginning of the trimester, but I’ve fallen behind a little on the last few projects, so this will be a bit of a catch-up.

One of the projects I’ve been working on this trimester is an eye detection library for use with Unity. The brief was essentially to write something that interfaced with an ‘exotic’ input device (I’m paraphrasing; the constraint was pretty much “no keyboards/mice/gamepads”). Because the designers had to make an ‘affective’ game, and we had a learning outcome regarding interdisciplinary work, we all decided to write APIs for their games. The project I joined only required information about whether or not the player was looking at the screen, which I initially thought didn’t sound too hard. Since webcams are pretty much everywhere (and since I didn’t think it’d be that hard), we (the other programmer on the team and I) decided to use OpenCV with webcam input to detect what we wanted. This took us quite a while to get working, but eventually we managed to get someone else’s project running and modify it to suit our needs.
Once we got it to a usable point, we set up a small server program that managed OpenCV and dispatched the appropriate data on request. We then set up a Unity project with a client script that started the server as a child process and received data, and a sample script to show that the API worked. This went pretty smoothly, the only hiccup being how strings are represented differently in C++ and C#, but we didn’t really need them so that wasn’t much of a problem. Here is a test project using the API (in the video is Luke Savage, the programmer I worked with).
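
The heart of the detection ends up surprisingly small once OpenCV’s stock Haar cascades are involved. This is a sketch of the general approach rather than our exact code:

#include <opencv2/opencv.hpp>
#include <vector>

// Rough "is anyone looking at the screen" check: find a face, then require
// two detectable eyes within it (a face in profile tends to lose an eye).
bool IsPlayerLooking(cv::VideoCapture& cam, cv::CascadeClassifier& faces,
                     cv::CascadeClassifier& eyes) {
	cv::Mat frame, grey;
	if(!cam.read(frame)) return false;

	cv::cvtColor(frame, grey, cv::COLOR_BGR2GRAY);
	cv::equalizeHist(grey, grey);

	std::vector<cv::Rect> faceRects;
	faces.detectMultiScale(grey, faceRects, 1.1, 3);

	for(auto& f : faceRects) {
		std::vector<cv::Rect> eyeRects;
		eyes.detectMultiScale(grey(f), eyeRects, 1.1, 3);
		if(eyeRects.size() >= 2) return true;
	}
	return false;
}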

The next project I worked/am working on was/is a multi-user networked synthesizer, aptly named NetSynth. The goal was to create a tool that would let possibly non-musically-minded people jam. What we ended up with instead was an awesome multi-user synthesizer that, while capable of sounding cool, also had a fantastic capacity for sounding terrible. The NetSynth client has a simple two-oscillator, two-envelope setup which allows for a wide variety of sounds. To make it easier to play ‘nice’ sounding things, we decided early on that each key on the keyboard should be mapped to a degree in a scale, rather than to a specific frequency or note. This meant the scale could change at any point without the user having to change what they were doing, the way they would if they were playing a piano. Since there are three rows of ‘letter’ keys, we set each row to a different octave of the same set of degrees. This, for the most part, worked well.

Each oscillator has a waveform setting (sine, square, triangle, saw or none), an octave-shift setting, a detune setting, and a pulsewidth setting which is only active for square waves. This is where most of the problem was. The octave shift had a range of [-2, 2] octaves. Combine that with the fact that the top row already has an octave shift of +1, and that in a pentatonic scale you pretty much get an extra octave per row, and the maximum possible octave shift was +5. That’s annoyingly high. Add saw waves with random detune settings and you get some truly terrible noises. So work has to be done to make this harder to do by accident.
I think it’s fair to say that it’s still not quite easy enough for non-musically minded people to use, but we still made something cool that can sound good with the right settings.
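
The degree mapping itself is tiny. Something like this, with the caveat that our scales were configurable and this hardcodes minor pentatonic (rootNote is a MIDI note number):

#include <cmath>

// Map a keyboard row/column to a frequency via scale degrees rather than fixed notes.
float DegreeToFrequency(int row, int column, int rootNote) {
	static const int pentatonic[] = {0, 3, 5, 7, 10};   // semitone offsets
	const int degreesPerOctave = 5;

	int degree = column % degreesPerOctave;
	int octave = row + column / degreesPerOctave;   // rows shift octaves, and walking
	                                                // off the end of a row wraps up one
	int midiNote = rootNote + octave * 12 + pentatonic[degree];
	return 440.0f * std::pow(2.0f, (midiNote - 69) / 12.0f);
}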


A Language About Toilets

To tick off a learning outcome at uni, we were required to get a data set from somewhere and do some form of processing on it. In the Australian government’s massive database of open databases, I discovered a set of all the public toilets in Australia. I couldn’t resist using that. So I wrote up a quick program in D that would spit out KML placemarks for each toilet in the set. Given the ~17000 public toilets in the set, each with its own label, Google Earth didn’t fare very well. I limited it to a couple of thousand and that did the trick. Next I went through and set the description of each placemark to a table of the features that toilet had. That worked out pretty well.

At some point we were told that being able to query the data somehow would be a big plus. Things like using Query in Google Sheets were suggested but, being me, I decided against that. I’d already started an assembly interpreter that weekend, so I was in the mood for some language parsing. Lo and behold, I created the Toilet Query Language.

It’s not a very complex language, of course, but it allows a fair amount of flexibility in what the program spits out. The main features of the language are sorting and filtering based on postcode, distance from a coordinate, and toilet features and facilities. It also has style, which essentially sets the colour of a placemark, and push and pop, which are mainly there for styling specific parts of the set without actually modifying it. The query in the image above sorts by distance to a coordinate (it was my house, but I removed a few digits for reasons), then limits the set to the first 1000 toilets. Then I style the set black and restyle sections of it based on their features/facilities. I also colour all toilets within 3km of the coordinate green. Finally I limit the set to 200. This is the output (with different coordinates).

That is the result of one night’s work and a set of public toilet data. I’m pretty happy with that.

I suppose while I’m at it, I’ll talk about the assembly interpreter I started. In about two days I managed to get a basic virtual architecture going. The assembly is kind of Intel-based, but very watered down. It is complete with four general-purpose registers, a modifiable instruction pointer, and a stack pointer. The instruction set is essentially a mov, a jmp, simple arithmetic (add, sub, div, mul, neg), binary operators (and, or, xor, not, shl, shr), plus debug, which dumps memory and the state of the registers, and print, which just prints whatever you pass to it. The word size of my imaginary processor is 64 bits (purely because I didn’t want to think as much about overflows). Under the hood, everything is just a signed long long.
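
The interpreter core is about what you’d expect: a fetch-decode-execute loop over a big array of words. A sketch (the encoding here is made up; only the instruction set above is real):

#include <cstdint>
#include <vector>

enum Op : int64_t { MOV, JMP, ADD, SUB, MUL, DIV, NEG, HALT /* ... */ };

struct VM {
	int64_t reg[6] = {};                       // A, B, C, D, SP, IP
	std::vector<int64_t> memory;

	void Run(const std::vector<int64_t>& code) {
		int64_t& ip = reg[5];                  // IP is just another register,
		while(ip >= 0 && ip < (int64_t)code.size()) {   // so maths on it is a jump
			switch(code[ip]) {
				case MOV: reg[code[ip+1]] = reg[code[ip+2]]; ip += 3; break;
				case ADD: reg[code[ip+1]] += reg[code[ip+2]]; ip += 3; break;
				case JMP: ip = reg[code[ip+1]]; break;
				// ... the rest follow the same pattern
				case HALT: return;
			}
		}
	}
};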

With my assembly and virtual machine, I managed to write a fibonacci sequence generator that stored each step of the sequence in sequential cells in memory. At the top of the screencap is the program that generates the sequence, plus my instruction-pointer/integer-maths hack for doing conditional jumps without conditional jmps. At the bottom is the state of the registers at the end of the simulation (A, B, C, D, SP, IP) and a dump of memory, with the first 10 steps of the fibonacci sequence in the first 10 cells. I plan on doing something like this in a game one day. It’s a lot of fun to play with. Here’s the repo.


Flocking and ASCII Renderers

Over the past few weeks we’ve been working on quite a few tasks, one of which was about flocking. The flocking task was split into three parts: 1) optimise a flocking simulation, 2) add a way to configure the simulation, and 3) add a method of getting statistics out of the simulation so that it could be tuned.

Initially, the simulation was set up so that each flocking agent checked against every other agent, calculated whether they were in range of any of its “sensors”, and then acted on that input. So for the first part, I decided to set up a kd-tree and limit the checks to nodes that overlapped each agent’s sensors (using an AABB). Nodes of the kd-tree were allocated once (sort of; I was using a std::vector for this) and reused, to increase cache coherency and make them easier to iterate over. For simplicity’s sake, the tree was reconstructed each frame, but this didn’t seem to pose any tangible hit to performance; the gains were much greater.
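
The query itself is the usual kd-tree walk, skipping any branch the sensor’s AABB can’t touch. A sketch (the real node layout differs, but the idea is this):

#include <vector>

struct KDNode {
	int axis;                 // 0, 1 or 2; -1 marks a leaf
	float split;
	int children[2];          // indices into the node vector
	std::vector<int> agents;  // populated for leaves only
};

void Query(const std::vector<KDNode>& nodes, int node,
           const float boxMin[3], const float boxMax[3], std::vector<int>& out) {
	const KDNode& n = nodes[node];
	if(n.axis < 0) {          // leaf: everything in it is a candidate
		out.insert(out.end(), n.agents.begin(), n.agents.end());
		return;
	}
	// Only descend into halves the query box actually overlaps.
	if(boxMin[n.axis] <= n.split) Query(nodes, n.children[0], boxMin, boxMax, out);
	if(boxMax[n.axis] >= n.split) Query(nodes, n.children[1], boxMin, boxMax, out);
}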


Next I multithreaded it by putting OpenMP on the agents’ update loops. This was incredibly simple to do, as the agents were designed so that they wouldn’t need synchronisation while updating, though it posed some problems later when it came to stat collection.

I also, after some profiling with callgrind, removed some of the slower maths that was being hit the most. One function was particularly bad for oft-hit slow maths: the test agents used to check whether another agent was in range of one of their sensors. It involved square roots and an acos. I managed to delay a square root by modifying an early-out to compare squares of values instead of the values themselves. Also, the code dealing with differences in angles originally compared the acos of a dot product to the angle of a sensor. I reversed this so it compared the dot product to the cos of the sensor angle, and then cached that cos in the sensor. This removed all trigonometry from the function entirely.
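
The angle trick in isolation, with made-up names and glm for the vector maths (cos is monotonically decreasing on [0, pi], so the comparison just flips):

#include <cmath>
#include <glm/glm.hpp>

// Before: an acos on every candidate pair, in one of the hottest functions.
bool InSensorCone_Slow(glm::vec3 toOther, glm::vec3 facing, float sensorAngle) {
	return std::acos(glm::dot(glm::normalize(toOther), facing)) < sensorAngle;
}

// After: compare against the cosine, computed once and cached in the sensor.
bool InSensorCone_Fast(glm::vec3 toOther, glm::vec3 facing, float cosSensorAngle) {
	return glm::dot(glm::normalize(toOther), facing) > cosSensorAngle;
}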


Next was the configuration stage. I decided on Lua with a simple binding, which was pretty straightforward. I pretty much exposed agent, obstacle and sensor parameters as userdata with a small library of setters each (I didn’t bother with getters, because they didn’t seem useful in this instance), then added a few extra functions for creating said things and setting up the overall simulation, like setting the seed and whatnot. I also decided to make all the setter functions chainable, to remove some repetitiveness. For fun, I also made it so that when predator agents collided with regular agents, the regular agents would also become predators. A sort of zombie mode. Idea stolen from Corey.
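
The chaining is just each setter returning its own receiver, so something like agent:speed(2):radius(5) works. On the C++ side of the binding that looks roughly like this (field names are stand-ins):

#include <lua.hpp>

struct Agent { float speed; /* ... */ };   // the simulation-side struct

static int AgentSetSpeed(lua_State* l) {
	Agent* agent = *(Agent**)luaL_checkudata(l, 1, "Agent");
	agent->speed = (float)luaL_checknumber(l, 2);
	lua_pushvalue(l, 1);   // push the receiver back as the return value
	return 1;
}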

Finally, for statistics and whatnot, I decided to spit out data about when agents became zombies, where things were during the simulation, and a few other tidbits. Because I wanted to keep this self-contained, I decided to write an ASCII renderer for a graph and a heatmap (on the train to uni… twice). Here is the result.

The graph represents infections over time, and the heatmap is simply a normalised log of the positions of all agents during the simulation. The symbols used in the heatmap and its output size are also fully configurable from Lua.
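
The heatmap render is just that normalised log of visit counts mapped onto a symbol ramp; a trimmed-down sketch:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

void PrintHeatmap(const std::vector<float>& counts, int width, int height) {
	const char ramp[] = " .:+*#@";              // configurable in the real thing
	float maxLog = 0.f;
	for(float c : counts) maxLog = std::max(maxLog, std::log(1.f + c));
	if(maxLog == 0.f) maxLog = 1.f;             // avoid dividing by zero on an empty map

	for(int y = 0; y < height; y++) {
		for(int x = 0; x < width; x++) {
			float v = std::log(1.f + counts[y * width + x]) / maxLog;
			putchar(ramp[(int)(v * (sizeof(ramp) - 2))]);
		}
		putchar('\n');
	}
}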

Anyway, here is a large gif (sorry).



Audio Synthesis for Final Project

We’ve begun planning for our final project at uni. My team and I were sick of using Unity for everything and wanted to do something awesome, so we decided we were going to write it all in C++. Our game is a puzzler akin to Antichamber, with its non-Euclidean spaces and such, but with a few different things thrown in. One thing we decided on pretty early is that we wanted the smallest team possible, so everything is optimised for programmers (there are three programmers and one designer on the team). As such, most, if not all, of the audio is going to be procedural.

I’ve messed with audio synthesis a few times before, but I’ve never really tried it in the context of a game, let alone a 3D game, so this is going to be a very interesting experience. At the moment it looks like we’re going to be using OpenAL (the software implementation) for everything audio. I’ve whipped up a quick demo to show off my portal rendering in Ogre, and what I’ve managed so far with 3D audio in OpenAL and its built-in effects (mainly reverb and lowpass).
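
As a taste of where the audio is heading, this is roughly the smallest useful OpenAL thing: synthesise a tone into a buffer and give it a position. A sketch only; the real thing will stream buffers rather than pre-fill one:

#include <AL/al.h>
#include <cmath>
#include <vector>

ALuint MakeTone(float freq, float x, float y, float z) {
	const int sampleRate = 44100;
	std::vector<short> samples(sampleRate);     // one second of mono sine
	for(int i = 0; i < sampleRate; i++)
		samples[i] = (short)(32767.f * std::sin(2.f * 3.14159265f * freq * i / sampleRate));

	ALuint buffer, source;
	alGenBuffers(1, &buffer);
	alBufferData(buffer, AL_FORMAT_MONO16, samples.data(),
	             (ALsizei)(samples.size() * sizeof(short)), sampleRate);

	alGenSources(1, &source);
	alSourcei(source, AL_BUFFER, buffer);
	alSource3f(source, AL_POSITION, x, y, z);   // OpenAL attenuates and pans from this
	alSourcei(source, AL_LOOPING, AL_TRUE);
	alSourcePlay(source);
	return source;
}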


Webservers and KML

A couple of weeks ago we were told that we were to be making a tool for data visualisation. One possibility for visualisation that was brought up was Google Earth. So I decided to try to figure out KML. I started off by looking through examples and the KML reference to get a grip on how it was structured. The format was simple enough that it didn’t take long to get things drawing in Google Earth. It wasn’t long before I stumbled upon the NetworkLink tag, which allowed periodic updates of the ‘document’.

From that point on, I researched HTTP and worked on writing a simple webserver to serve KML to Google Earth. The first thing I tried was getting something working that I could navigate to in a browser. The first iteration involved just starting up a TCP socket, listening on a port (I used 1337 so I didn’t have to run my server with admin privileges), and, for each incoming connection, sending an HTTP/1.1 header with status ‘OK’ (code 200), a content-type of ‘text/plain’ and content-length set to 0. This meant that the browser could connect to the server but would receive nothing.

The next step was to send something. What I did was write out a simple block of HTML with just an h1 tag in it, then finish writing the header with content-length set to the size of that buffer. Then I’d concatenate the header and body buffers and send the result. Writing directly to the socket would obviously have been better, but performance wasn’t a big concern for me at that point. That seemed to work, so the next step was KML.

I replaced my HTML block with a simple block of KML which was meant to draw a rotating rectangle around my house, and wrote a KML file that was supposed to connect to my server and update every 100ms or so. I started the server and opened the KML file in Google Earth. The rectangle spawned, and rotated once or twice then just stopped. My server also stopped getting requests.

This was obviously not what was supposed to be happening, so I looked at my code again to figure out what the problem was. After some good hard pondering, I came to the conclusion that Google Earth wasn’t creating new connections to the server every time it updated, like I’d assumed. Instead it created one TCP socket and reused it each update. This wouldn’t have been a problem if I’d remembered to close my sockets after I’d used them. What was happening was: Google Earth requested data, I received the request and sent the data through the new socket, then, instead of closing the socket, I waited on an accept. This meant that all of Google Earth’s subsequent requests were met with silence, as my server was no longer expecting requests on that socket.
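
In POSIX-ish terms, the fixed serve loop ended up shaped like this (error handling and the actual KML elided; a sketch, not the real code):

#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <unistd.h>

void Serve(int listenSocket, const char* body) {
	char header[256], request[2048];
	while(true) {
		int client = accept(listenSocket, nullptr, nullptr);
		recv(client, request, sizeof(request), 0);   // read (and ignore) the GET

		int len = snprintf(header, sizeof(header),
			"HTTP/1.1 200 OK\r\n"
			"Content-Type: application/vnd.google-earth.kml+xml\r\n"
			"Content-Length: %zu\r\n\r\n", strlen(body));
		send(client, header, len, 0);
		send(client, body, strlen(body), 0);

		close(client);   // the missing line that caused all the silence
	}
}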

So I closed the socket and tried again. It worked. Floating above my house was a rectangle rotating at 10fps. At that point I decided that I’d made enough progress for the night (4am) and left it there.


Networking Fun

Our last studio project at uni was a sort of collaborative art piece involving networking. The idea was that we would each create a client that took input from the user and sent it, via packets describing various shapes, to a draw server. The server would also send each client the positions of every other client’s cursor within their screen, which we also had to do something with. The networking was relatively simple: there was no serialisation (packets were pretty much memcpys of C structs), and we were on a local network so we didn’t care about packet loss. I used D for my client, as I’d been working with DerelictSDL a fair amount prior, and it has a nice socket module in its standard library. I also used libpng to export heatmaps for a while, but switched to SDL’s BMP functions on the last day because I couldn’t get libpng to build for Windows.
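
For the curious, “packets were pretty much memcpys of C structs” means exactly what it sounds like: a fixed-layout struct blasted straight over the socket. Something like this (the field names are illustrative, not the real protocol):

#include <cstdint>

#pragma pack(push, 1)
struct ShapePacket {
	uint8_t type;        // which shape to draw
	float   x, y;        // position on the shared canvas
	float   radius;
	uint8_t r, g, b;     // colour
};
#pragma pack(pop)

// send(sock, reinterpret_cast<const char*>(&packet), sizeof(packet), 0);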

My client was pretty much a rainbow worm simulator. Each new cursor from the server spawned an egg that hatched a worm, which would then follow that cursor around the screen. The hue of each worm came from the direction from its head to the cursor it was following. Every few frames, the positions of the heads of all the worms in the client, plus their colours, were sent to the server. Upon exiting the client, a heatmap showing the position of the user’s cursor throughout its use was saved. Each saved position was drawn into the buffer using a simple 5×5 kernel to make it look nicer, but each point was still very pronounced; using a larger kernel would have been better.



Optimisations and Such

The first two weeks of uni have passed, and they’ve been full-on to say the least. After the first session of studio, our task was to optimise a raytracer, and we were given a week to do so. My first thought was to go down the GPU route, but I decided that optimising on the CPU side would probably be more fun, because there would be more varied ways to go about it. The first thing I did was convert the renderable list to SOA layout, which was fairly easy to do as all the renderables the raytracer was capable of drawing were spheres. Next, I converted the sphere-ray intersection test to use SSE or AVX. Because the important data was already SOA, loading it into SIMD registers was pretty straightforward. It also meant that I could test intersections of up to eight spheres at a time. These optimisations roughly halved the render time.
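
Roughly what the eight-wide AVX path looks like, trimmed down (a sketch; cx/cy/cz/rsq stand in for the 32-byte-aligned SOA arrays of sphere centres and squared radii):

#include <immintrin.h>

// Test a ray (origin o, unit direction d) against eight spheres at once.
// Returns a bitmask of which lanes hit.
int IntersectSpheres8(const float* cx, const float* cy, const float* cz,
                      const float* rsq, int base,
                      float ox, float oy, float oz,
                      float dx, float dy, float dz) {
	// l = centre - origin, per lane.
	__m256 lx = _mm256_sub_ps(_mm256_load_ps(cx + base), _mm256_set1_ps(ox));
	__m256 ly = _mm256_sub_ps(_mm256_load_ps(cy + base), _mm256_set1_ps(oy));
	__m256 lz = _mm256_sub_ps(_mm256_load_ps(cz + base), _mm256_set1_ps(oz));

	// b = dot(l, d); c = dot(l, l) - r^2; hit when b >= 0 and b^2 - c >= 0.
	__m256 b = _mm256_add_ps(_mm256_mul_ps(lx, _mm256_set1_ps(dx)),
	           _mm256_add_ps(_mm256_mul_ps(ly, _mm256_set1_ps(dy)),
	                         _mm256_mul_ps(lz, _mm256_set1_ps(dz))));
	__m256 c = _mm256_sub_ps(
		_mm256_add_ps(_mm256_mul_ps(lx, lx),
		_mm256_add_ps(_mm256_mul_ps(ly, ly), _mm256_mul_ps(lz, lz))),
		_mm256_load_ps(rsq + base));

	__m256 hit = _mm256_and_ps(
		_mm256_cmp_ps(_mm256_sub_ps(_mm256_mul_ps(b, b), c),
		              _mm256_setzero_ps(), _CMP_GE_OQ),
		_mm256_cmp_ps(b, _mm256_setzero_ps(), _CMP_GE_OQ));

	// One bit per lane; the padding lanes get masked out here too.
	return _mm256_movemask_ps(hit);
}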

The next thing I did was spatial partitioning using a kd-tree. I’d never done spatial partitioning before, but I managed to get it up and running pretty quickly. The hardest part was rearranging my SOA structures so that I could continue to check eight intersections at a time. My solution was to let each leaf of the kd-tree manage up to eight spheres and have it populate my SOA structures in chunks of eight. Each leaf then held an index into the SOA structures and a count, as a leaf was never guaranteed to have exactly eight children. From memory, every second node had seven instead of eight. As a consequence, my SOA structures had to contain padding in order to keep using aligned intrinsics (out of about 61kB worth of SOA structures, 4kB was padding). Then, in my sphere-ray intersection function, I added a mask vector that masked out the padding values so that I wouldn’t get sphere intersections where there shouldn’t be any.

Finally, I multithreaded it with an OpenMP parallel for in the main loop. After all that, I managed to get the render time down to about 1% of the original, which I’m pretty happy with. There are still plenty of things I just didn’t get around to, though. For example, most, if not all, of the code relating to reflections and shadows remained untouched. Shadow calculations used the same ‘trace’ function as the reflection code, so for each pixel in shadow, several rays were cast and essentially thrown away. Also, each sphere had its own ‘Material’, but the provided scene only contained two different kinds of material, meaning a large chunk of memory was devoted to duplicates. If spheres shared materials, the entirety of the material data could be kept in cache.

