Update: The follow-up post can be found here.
About 3 weeks ago, I wrote about a program I created to transform pictures into all-RGB images (4096x4096 images using each possible RGB colour exactly once). It worked by ordering pixels based on a Hilbert ordering of the 3d colour cube and then re-colouring them in order, and while it produced interesting images, it pretty much failed at the stated goal of “keep[ing] the overall look untouched”. The problem was that the hue of the pixels often shifted significantly from the input, so the overall features were preserved but the colour balance was not. So for the past week or so I’ve been working on a new program, one that will (hopefully!) do a better job of keeping the base look intact. As with last time, I’m using an image I took in Barcelona for testing – let me know if you have a different one you’d like to see.
Choose the closest colour…
My idea this time was that instead of choosing an ordering of the pixels, it would be better to try to minimize the overall distance between the source and destination colours. The easiest way I could think of was to choose pixels at random and assign each the “closest” colour remaining. Hopefully deviations would occur in all directions equally, so the average colour of a region would stay as close as possible to the source. By popular demand, I will try to make this algorithm a little more explicit this time:
- Go through the pixels in the source image in a random order.
- For each, select the closest remaining unused colour, by Euclidean distance between the coordinates in the colour space.
- Assign the found colour as the output colour of the pixel, and mark it used.
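The steps above can be sketched in a few lines. This is a small-scale illustration in Python (my actual implementation is in C#), using a brute-force search over the remaining palette where the real program uses a KD-tree; the function name and structure are my own for this example:

```python
import random

def assign_colours(pixels, palette, seed=0):
    """Greedy nearest-colour assignment, as described above.
    `pixels` and `palette` are lists of (r, g, b) tuples; each
    palette entry is used exactly once. Brute-force search stands
    in for the KD-tree used in the real implementation."""
    remaining = list(palette)
    order = list(range(len(pixels)))
    random.Random(seed).shuffle(order)      # visit pixels in random order
    out = [None] * len(pixels)
    for i in order:
        px = pixels[i]
        # Index of the closest remaining colour by squared Euclidean distance.
        j = min(range(len(remaining)),
                key=lambda k: sum((a - b) ** 2
                                  for a, b in zip(px, remaining[k])))
        out[i] = remaining.pop(j)           # assign it and mark it used
    return out
```

Because each chosen colour is removed from the pool, the assignment for a given pixel depends on the random visiting order – two runs with different seeds can produce different (but similar-looking) outputs.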
But in which colour space?
A key question I had was which colour space would best preserve hues. There are a number of different “colour solids” that I could use coordinates from, with RGB being only one of many. I had a strong suspicion that something like HSL would do better than using RGB directly, but the easiest way to find out was to do a direct comparison. I tried the RGB cube as well as the HSL and HSV cylinders for the comparison. My test images are presented below.
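To measure distances in a cylindrical space like HSV, the coordinates need an embedding where Euclidean distance respects hue wrap-around (hue 0.99 is close to hue 0.01). One common embedding – my assumption here, since the post doesn't spell out the exact coordinates – places each colour at (s·cos(2πh), s·sin(2πh), v):

```python
import colorsys
import math

def hsv_point(r, g, b):
    """Map an RGB colour (0-255 channels) to a point in the HSV
    cylinder. The (s*cos, s*sin, v) embedding is one reasonable
    choice, assumed for illustration."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    angle = 2 * math.pi * h
    return (s * math.cos(angle), s * math.sin(angle), v)

def hsv_distance(c1, c2):
    """Euclidean distance between two RGB colours in the HSV cylinder."""
    return math.dist(hsv_point(*c1), hsv_point(*c2))
```

With this embedding, two reds on opposite sides of the hue wrap point come out close together, whereas treating hue as a plain linear axis would push them far apart.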
As you can see, HSL and HSV give essentially the same results, which are both much better than RGB (look closely at the wheel wells, or the buildings in the trees on the right to see the differences). I like to think that HSV is slightly better, but I might be imagining differences that really aren’t there. Either way, I chose to use HSV for the final copy.
Looks good! Certainly a lot closer to the source image – I’m satisfied with this one for now.
As with last time, I used a conceptually simple algorithm, but this time the implementation was considerably more difficult. The problem is that choosing the closest remaining colour to a source pixel is hard to do efficiently, especially since the set of candidate colours changes at every step. I wrote the code in C# for performance this time, but I still had to spend quite a few hours optimizing to get the program to finish at all. The final version can take 30+ hours to generate an image, and peaks at over 4 GB of RAM. I based my code around a KD-tree I found online, then rewrote it to optimize for the 3D, single-nearest-neighbour case as well as to support branch pruning on delete. The rewritten tree – as well as the rest of my code – is available in a repository on GitHub: http://github.com/EricBurnett/AllRGBv2. Feel free to try it out for yourself – if you do, I’d love to hear about it!
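For the curious, here is a rough Python sketch of the KD-tree idea described above – nearest-neighbour search with deletion, where each node tracks how many live points remain beneath it so fully-deleted subtrees can be pruned. This is my own simplification for illustration, not the code from the GitHub repository:

```python
import math

class KDNode:
    """A 3D KD-tree over (r, g, b) points supporting nearest-neighbour
    queries and deletion. Deleted points are only marked dead; the
    per-subtree live count lets searches skip exhausted branches."""
    def __init__(self, points, depth=0, parent=None):
        axis = depth % 3
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        self.point = points[mid]
        self.axis = axis
        self.alive = True
        self.parent = parent
        self.count = len(points)            # live points in this subtree
        self.left = KDNode(points[:mid], depth + 1, self) if mid else None
        self.right = (KDNode(points[mid + 1:], depth + 1, self)
                      if mid + 1 < len(points) else None)

    def nearest(self, target, best=None):
        """Return (distance, node) of the closest live point to target."""
        if self.count == 0:
            return best                     # prune: subtree fully deleted
        if self.alive:
            d = math.dist(self.point, target)
            if best is None or d < best[0]:
                best = (d, self)
        diff = target[self.axis] - self.point[self.axis]
        near, far = (self.left, self.right) if diff < 0 else (self.right, self.left)
        if near:
            best = near.nearest(target, best)
        # Only cross the splitting plane if a closer point could lie there.
        if far and (best is None or abs(diff) < best[0]):
            best = far.nearest(target, best)
        return best

def pop_nearest(root, target):
    """Find the closest live point to target, mark it used, return it."""
    _, node = root.nearest(target)
    node.alive = False
    walk = node
    while walk:                             # fix live counts up to the root
        walk.count -= 1
        walk = walk.parent
    return node.point
```

The count-based pruning is what makes repeated delete-and-query workloads bearable: as a region of the colour cube empties out, searches stop descending into it at all instead of re-examining dead nodes.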