Why there's no particular reason to be skeptical about PhotoSketch
A new technology called PhotoSketch is making the rounds on the interwebs. It’s quite awesome: You sketch a scene, label the parts, and then the app automatically delivers a scene composited from photographs found on the Internet. There has been some skepticism about this, however. As that site does not allow comments, I thought I’d post my thoughts here instead.
The first argument for skepticism is that it leaps ahead in too many areas at once: finding images based on descriptions of the objects, finding things within them that fit particular shapes, separating those from the background, and placing them into a new scene. That does seem unlikely at first.
Reading their paper, however, makes it seem much less odd. Finding the pictures is no more difficult than a Google search (although they don’t mention Google specifically), followed by a lot of steps to filter out images that just don’t fit. The software specifically chooses images with a largely uniform background, which makes the extraction and shape processing far easier. The main interesting part is the compositing, which is where most of their work appears to have gone. It’s still amazing, but it’s not as unlikely as it seems, especially considering that research in this area is far ahead of what we see in Photoshop today.
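To give a feel for why the filtering step is plausible, here is a minimal sketch of one such check: rejecting candidate images whose background is not largely uniform. This is my own illustration, not the paper’s actual method — their criteria are certainly more sophisticated. The idea here is simply that an object photographed against a clean background tends to leave the image border nearly constant, so the variance of border pixels is a cheap first filter.

```python
# Hypothetical sketch of a uniform-background filter, assuming grayscale
# images represented as 2D lists of 0-255 values. Not the paper's method.

def border_variance(image):
    """Variance of the pixels along the image border."""
    border = (image[0] + image[-1]
              + [row[0] for row in image[1:-1]]
              + [row[-1] for row in image[1:-1]])
    mean = sum(border) / len(border)
    return sum((p - mean) ** 2 for p in border) / len(border)

def has_uniform_background(image, threshold=100.0):
    """Accept an image only if its border pixels vary little.
    The threshold is an arbitrary illustrative value."""
    return border_variance(image) < threshold

# A flat background with an object in the center passes; a busy scene fails.
flat = [[200] * 5 for _ in range(5)]
flat[2][2] = 10  # the object does not touch the border
busy = [[(i * 37 + j * 91) % 256 for j in range(5)] for i in range(5)]
print(has_uniform_background(flat))  # True
print(has_uniform_background(busy))  # False
```

A real system would of course work on color images and follow up with segmentation, but the point is that pre-selecting clean candidates turns a hard vision problem into a much more tractable one.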
The second point is that this comes out of left field. Frankly, it doesn’t. Companies like Google certainly spend a lot of time and money on these issues, but that does not mean they always get better results than universities, whose work can be quite amazing. One of my favorite examples is SLAP (introductory PDF), which is essentially not very different from Microsoft’s Surface, but with the ability to freely place knobs, sliders, keyboards and so on on the table surface and have them interact with it.
Skepticism may not be a bad idea, but in this case, it’s not really founded.
Written on October 7th, 2009 at 09:47 pm