Adobe Releases More Information On Photoshop Deblurring Tech

With the internet ablaze with interest in Adobe's recent deblur presentation, the company has released more information, some of it controversial.

Adobe deblur

Last week, a video of a new piece of Adobe technology started doing the rounds: an incredible presentation showing off technology that can detect and correct for motion blur within a photo. Adobe has followed up by releasing a high-definition version of the video, so you can actually see what's going on, and by publishing a blog post explaining the technology in more detail.

The blog post is a very interesting read, because it lays out both the strengths and weaknesses of the technology — most notably that without hard edges for the algorithm to detect, it really struggles to make much of a difference.

However, some of the information in the blog post and presentation has caused an uproar among observers. One of the photos used in the demonstration was of Adobe's Kevin Lynch, and was actually perfectly in focus to begin with. Adobe artificially added a blur to it, and then removed it to demonstrate the technology's capabilities.

For those who are curious – some additional background on the images used during the recent MAX demo of our “deblur” technology. The first two images we showed – the crowd scene and the image of the poster, were examples of motion blur from camera shake. The image of Kevin Lynch was synthetically blurred from a sharp image taken from the web. What do we mean by synthetic blur? A synthetic blur was created by extracting the camera shake information from another real blurry image and applying it to the Kevin Lynch image to create a realistic simulation. This kind of blur is created with our research tool. Because the camera shake data is real, it is much more complicated than anything we can simulate using Photoshop’s blur capabilities. When this new image was loaded as a JPEG into the deblur plug-in, the software has no idea it was synthetically generated. This is common practice in research and we used the Kevin example because we wanted it to be entertaining and relevant to the audience – Kevin being the star of the Adobe MAX conference!
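The "synthetic blur" Adobe describes boils down to convolving a sharp image with a point spread function (PSF) recorded from real camera shake. As a rough illustration of the idea (not Adobe's actual research tool — the PSF here is a made-up diagonal streak, and the function name is hypothetical), a minimal sketch in plain NumPy might look like this:

```python
import numpy as np

def synthetic_blur(sharp, kernel):
    """Simulate motion blur by convolving a sharp grayscale image
    with a point spread function (PSF). In Adobe's demo the PSF came
    from a real shaky photo; here we use a toy stand-in kernel."""
    psf = kernel / kernel.sum()        # normalize so brightness is preserved
    psf = psf[::-1, ::-1]              # flip for true convolution (vs. correlation)
    kh, kw = psf.shape
    ph, pw = kh // 2, kw // 2
    # Pad with edge values so the output stays the same size as the input
    padded = np.pad(sharp, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(sharp)
    h, w = sharp.shape
    for i in range(kh):
        for j in range(kw):
            out += psf[i, j] * padded[i:i + h, j:j + w]
    return out

# Toy diagonal streak standing in for a real extracted shake path
psf = np.eye(5)
sharp = np.random.rand(64, 64)
blurred = synthetic_blur(sharp, psf)
```

Because the blurred image is just a weighted average of the sharp one, a deblurring algorithm handed this JPEG genuinely cannot tell it was synthetic — which is Adobe's point — though a kernel lifted from real shake data is far messier than the clean streak used above.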

This has led many people to be understandably upset, as it's much easier to remove blur that you've added yourself than it is to remove it in a real-world example. While synthetic blur may be standard practice in research, in a presentation that has become largely public, it seems a bit misleading.