Wednesday, March 17, 2010

Visualizing RGB, take two - Update


When I posted my 'Visualizing RGB, take two' article last month, an anonymous commenter going by the name Full-size had a good suggestion - I should use a form of error diffusion to better hide the pixel errors when selecting the colours to use. Within a couple of days I had it implemented, and wow, does it make an improvement! Unfortunately, between school and needing to reinstall Windows, I never got around to posting the results. So, three weeks late, here they are.

Results for Barcelona image

As you may recall, the goal of the algorithm is to transform a source image into a 4096x4096 version that uses every RGB colour exactly once - all 2^24 = 16,777,216 of them, one per pixel. Here is the original image, and what my algorithm previously did with it:
Original
Result from previous version


Now, take a look at the result using a form of error diffusion (with each error term capped at 60):

(Full 50 meg png available here). Much nicer to look at! Take a look at the sky in particular - where the old version went psychedelic trying to deal with all that near-white, large portions now simply settle into a uniform gray. The new version is worse in a couple of spots (e.g. the wheel wells of the car), but overall I think it is hugely improved. Now, I wonder how it would do on a harder image?

Results for the Mona Lisa

As promised, I am also posting the results of running this algorithm on an image of the Mona Lisa. This is an especially difficult image, because the colour palette is very limited - notice the lack of blue in particular. First, let's take a look at the original, and the result from the previous version of the code:
Original
Result from previous version


Ouch. Poor Lisa. Still, let's forge on and see how the new version does, shall we?

(Full 50 meg png available here). Considerably better overall, although the colour burning on the forehead and neck is pretty ugly to look at. Still, considering we are trying to use every RGB colour once, I think the results are quite decent.

Postscript

The code to achieve this is in the same place as before, updated on GitHub. Right now you have to edit the source code to change the error diffusion cap, sorry - but feel free to fix that!
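
For concreteness, the cap is just a clamp applied to each per-channel error term before it gets carried over to a neighbouring pixel, which keeps errors from snowballing across large smooth regions. A minimal sketch (illustrative names, not the actual repo code):

    using System;

    class ErrorCapDemo
    {
        // Clamp a per-channel error term to [-cap, +cap] before it is
        // carried over to a neighbouring pixel.
        static int CapError(int error, int cap)
        {
            return Math.Max(-cap, Math.Min(cap, error));
        }

        static void Main()
        {
            // A near-white pixel forced to some distant colour can leave a
            // huge error; capping stops it from propagating unchecked.
            Console.WriteLine(CapError(173, 60));   // prints 60
            Console.WriteLine(CapError(-12, 60));   // prints -12
        }
    }

The bound means a region like the near-white sky can only push a limited amount of error into its surroundings, rather than an ever-growing one.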

8 comments

  1. Great work. I published your Metamorphosis of Narcissus on allRGB. I also think it's quite interesting to see how both of your Mona Lisas compare to the four Mona Lisas we have on our website.

  2. Thanks! Yes, I must admit I did Mona Lisa specifically to see how it compares to the ones already posted on allRGB. I chose not to upload it since I thought there were enough up already, but I'd be happy to add this version as well if you'd like.

  3. impressive work. I find the allrgb project fascinating :)

  4. I'm impressed by the smoothness of this version with error diffusion. So far I cannot reproduce it in my computations - the images I get are more pixelated, so to speak. For each pixel I obtain the difference between the original and mapped RGB values and diffuse it over the upper, left, lower and right neighbours, 25% to each one. Could you tell me, more or less, how you perform error diffusion? I'm intrigued... Regards from Germany.
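
    In code, what I do per pixel is roughly this (single channel, simplified; the "carry" array here is a stand-in for wherever my diffused error accumulates):

      // Roughly my current approach (single channel, ignoring image borders):
      // take the difference between the original and mapped values, and hand
      // 25% of it to each of the upper, left, lower and right neighbours.
      static void DiffuseError(int[,] original, int[,] mapped, int[,] carry,
                               int x, int y)
      {
          int error = original[x, y] - mapped[x, y];
          carry[x, y - 1] += error / 4;  // upper
          carry[x - 1, y] += error / 4;  // left
          carry[x, y + 1] += error / 4;  // lower
          carry[x + 1, y] += error / 4;  // right
      }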

    Replies
    1. The code is on GitHub, so you're welcome to look at exactly how I did it. But in essence, I first accounted for the average error of the image to avoid propagating systemic errors [1], then processed pixels in a random order and spread each pixel's error among whichever of the 8 surrounding pixels had not yet been processed [2].

      [1] https://github.com/EricBurnett/AllRGBv2/blob/master/AllRGBv2/Main.cs#L289
      [2] https://github.com/EricBurnett/AllRGBv2/blob/master/AllRGBv2/Main.cs#L216
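
      If a sketch helps, the core loop is shaped roughly like this - simplified, single-channel C# with illustrative names, omitting the average-error pre-correction from [1]; see the linked lines for the real thing. Here 'nearestColour' stands in for picking (and removing) the closest colour still in the pool:

        using System;
        using System.Collections.Generic;

        class DiffusionSketch
        {
            // Illustrative sketch only: visit pixels in a random order, map
            // each to its nearest remaining colour, and split the (capped)
            // error among the not-yet-processed 8-neighbours.
            static void Diffuse(int[,] target, Func<int, int> nearestColour)
            {
                int width = target.GetLength(0), height = target.GetLength(1);

                // Build a randomly shuffled visiting order (Fisher-Yates).
                var order = new List<(int x, int y)>();
                for (int x = 0; x < width; x++)
                    for (int y = 0; y < height; y++)
                        order.Add((x, y));
                var rng = new Random();
                for (int i = order.Count - 1; i > 0; i--)
                {
                    int j = rng.Next(i + 1);
                    (order[i], order[j]) = (order[j], order[i]);
                }

                var done = new bool[width, height];
                var carry = new int[width, height];

                foreach (var (x, y) in order)
                {
                    int wanted = target[x, y] + carry[x, y];
                    target[x, y] = nearestColour(wanted);
                    done[x, y] = true;

                    // Cap the error so it can't snowball (60 in my runs).
                    int error = Math.Max(-60, Math.Min(60, wanted - target[x, y]));

                    // Find the 8-neighbours that haven't been processed yet.
                    var open = new List<(int x, int y)>();
                    for (int dx = -1; dx <= 1; dx++)
                        for (int dy = -1; dy <= 1; dy++)
                        {
                            int nx = x + dx, ny = y + dy;
                            if ((dx != 0 || dy != 0) && nx >= 0 && nx < width
                                && ny >= 0 && ny < height && !done[nx, ny])
                                open.Add((nx, ny));
                        }

                    // Spread the error evenly over those neighbours.
                    foreach (var (nx, ny) in open)
                        carry[nx, ny] += error / Math.Max(1, open.Count);
                }
            }
        }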

    2. Thanks! I understood the code, I think :) But although my calculations are the same as yours, I guess I am missing something, since the colors in my images are not as smooth as in yours. I will keep trying. Best regards!

    3. It sounds from your description like you're using 4 neighbours rather than 8, which may contribute. My only other thought is that your random and/or nearest-colour algorithms might be off, leading to systemic bias somewhere.

      Otherwise, good luck!

  5. Now I am doing it with 8 neighbours, and I think I know why the results are a little disappointing. The images I use for input are 512x512 pixels at 24bpp, so they can draw from a total of 16.8M colors. The color mapping, however, is to 18bpp - a total of 262K colors, exactly one per pixel, since 512x512 = 262,144 = 2^18 - with an output image size of 512x512. It may be that such a large difference between the palettes causes the pixelation in the final image. If I mapped to 24bpp, as the allRGB contributions do (with an image size of 4096x4096), such inhomogeneities would not appear. What do you think? I am already running a calculation with the full 24bpp spectrum, so I guess I will see whether this is the case as soon as it finishes (it is taking a really long time!).
