We just released a new iPhone app that quantizes photos (StencilPic --
buy it :)). There is a postscript on this post about terminology, but
the subject is preconditioning. Suppose we do a two-color stencil of
the following image:
Source image
When we run a two-color stencil on this using the luminance channel, we get good results for some images, but this image is a bit problematic.
If we sharpen the image, which enhances edges in the luminance channel, we get this:
And the resulting stencil:
I think that such "preconditioning" for stencils is a promising topic for a nice little research paper. I will not pursue it myself, so please pursue it, publish it, and tell me what you learn!
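To be concrete about the pipeline, here is a minimal sketch assuming Pillow; the file names, threshold, and unsharp-mask settings are made up for illustration and are not the app's actual values.

from PIL import Image, ImageFilter

def two_color_stencil(path, threshold=128, precondition=True):
    gray = Image.open(path).convert("L")  # luminance channel
    if precondition:
        # "Preconditioning": sharpen to enhance edges before thresholding.
        gray = gray.filter(ImageFilter.UnsharpMask(radius=4, percent=200, threshold=0))
    # Two-color stencil: pixels at or above the threshold go white, the rest black.
    return gray.point(lambda p: 255 if p >= threshold else 0, mode="1")

two_color_stencil("source.jpg").save("stencil.png")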
PS -- a note on terminology.
We originally called our app "PosterPic" and said it did "posterization". However, many people thought that meant what graphics people would probably call a "quantized color table", which we think is now the dominant meaning because that is a named Photoshop effect:
Source image (courtesy Parker Night at flickr)
Photoshop "posterization"
Output from StencilPic
However, what our app does is something more like the famous Fairey Obama poster, which a graphics person would call a "quantized pseudo-color display" (note Fairey also did the Andre the Giant poster that went viral in graphics thanks to Brown students). That doesn't have a ring to it, so we went with the term "stencil". This is often black and white, but it's whatever spray paint you want. A Google image search for "stencil portrait" is what convinced us this was good terminology:
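For the record, here is a rough sketch of the difference, assuming Pillow; the file name, threshold, and colors are illustrative, not the app's actual code.

from PIL import Image, ImageOps

img = Image.open("source.jpg").convert("RGB")

# "Posterization" in the Photoshop / quantized-color-table sense:
# reduce each channel to a few levels (here 2 bits, i.e. 4 levels per channel).
poster = ImageOps.posterize(img, 2)

# Closer to what our app does: threshold the luminance channel,
# then map the two regions to arbitrary "spray paint" colors.
mask = img.convert("L").point(lambda p: 255 if p >= 128 else 0, mode="1")
stencil = Image.new("RGB", img.size, (25, 30, 60))                   # dark paint
stencil.paste(Image.new("RGB", img.size, (230, 70, 50)), mask=mask)  # light paint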
Another postscript, based on the interesting comment below. Here is Adelson's checker shadow with and without sharpening of the source image. I just used a screenshot of the last screen in the app so you can see different thresholds.
2 comments:
Wow, that works amazingly well: very interesting.
I think the sharpening is "flattening" gradients in the original image, since gradients are low-frequency. If you think of a pure gradient and your stencil operation, there's no good "threshold" to separate the dark side and the bright side of the gradient.
At the same time, we know that our eyes do luminance adaptation spatially, a la checker shadow illusion. But the sharpening filter is itself doing the luminance adaptation, so now the thresholding operation gets to place edges only where second-order edges existed on the original image.
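To make that concrete, here is a toy check, assuming NumPy and SciPy; the ramp, stripe pattern, and blur radius are made up, and a plain high-pass filter stands in for aggressive sharpening.

import numpy as np
from scipy.ndimage import gaussian_filter

h, w = 128, 256
ramp = np.linspace(0.2, 0.8, w)[None, :].repeat(h, axis=0)  # low-frequency illumination gradient
stripes = 0.1 * np.sign(np.sin(np.arange(w) * 2 * np.pi / 32))[None, :]  # fine detail
img = ramp + stripes

raw_stencil = img > img.mean()                   # global threshold on the raw image
highpass = img - gaussian_filter(img, sigma=16)  # flatten the low-frequency ramp
flat_stencil = highpass > 0                      # same kind of threshold after "adaptation"

# Sample one pixel per half-period of the stripes:
print(raw_stencil[0, 8::16].astype(int))   # mostly 0s then 1s: the threshold just cuts the ramp
print(flat_stencil[0, 8::16].astype(int))  # alternating 0s and 1s: the stripes survive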
(In fact, I suspect that Adelson's checker shadow illusion would be a good image to try this technique on.)
I had no idea what that would look like. Added to the post!