Last week, a video of a new piece of Adobe technology started doing the rounds: an incredible presentation showing off a technology that can detect and correct for motion blur within a photo. Adobe has since followed up, first by releasing a high-definition version of the video so you can actually see what's going on, and then with a blog post explaining the technology in more detail.
The blog post is a very interesting read, because it lays out both the strengths and weaknesses of the technology, most notably that without hard edges for the algorithm to detect, it struggles to make much of a difference.
However, some of the information in the blog post and presentation has caused an uproar among observers. One of the photos used in the demonstration was of Adobe’s Kevin Lynch, and was actually perfectly in focus to begin with. Adobe artificially added a blur to it, and then removed it to demonstrate the technology’s capabilities.
This has led many people to be understandably upset, as it's much easier to account for blur that you've added yourself than blur captured in a real-world photo. While synthetic blur may be standard in research, using it in a presentation that has become largely public seems a bit misleading.
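To see why self-added blur is so much easier to undo, here is a minimal sketch (not Adobe's actual method, just a standard illustration using NumPy). When you blur an image yourself, you know the exact blur kernel, so the blur can be inverted almost perfectly in the frequency domain. Real camera shake is far harder because the kernel is unknown and must be estimated from edges in the image, which is exactly where the algorithm is said to struggle.

```python
import numpy as np

# Stand-in for a sharp photo (random pixels are fine for the demonstration).
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# A known synthetic blur: a 9-pixel horizontal motion-blur kernel.
kernel = np.zeros((64, 64))
kernel[0, :9] = 1.0 / 9.0

# Blur the image by multiplying in the frequency domain
# (equivalent to circular convolution with the kernel).
K = np.fft.fft2(kernel)
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * K))

# Because the kernel is known exactly, inverse filtering recovers the
# original image almost perfectly (non-blind deconvolution).
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / K))

print(np.max(np.abs(restored - image)))  # essentially zero
```

With a real photo there is no `kernel` to divide by: the hard part of deblurring is estimating it blindly, which is why removing a blur you added yourself proves very little.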