Jumping spiders provide a completely new way of gauging distance
A new research paper has shown that jumping spiders use a previously unseen method of gauging distance to their prey, one which could eventually find its way into cameras.
In a paper published in this week’s issue of Science, researchers report that jumping spiders use an entirely novel way of focusing and gauging distance for their attacks. Humans (and many other animals) focus by adjusting the focal length of the eye’s lens, and judge the distance to objects through stereoscopic vision. Some animals that aren’t as good at adjusting focal length instead shift their heads from side to side to judge depth, a technique called motion parallax. But jumping spiders? They do something completely different.
The researchers discovered that the spider’s principal eyes contain four layers of retina. Two of those layers detect green light, but one of them cannot bring it into focus — which means the spider always has a permanently out-of-focus version of the scene in addition to an in-focus one. The researchers theorize that by comparing the two, the spiders can judge distance and make an accurate jump to attack their prey.
Humans already do this to a certain extent — when you see an out-of-focus area in a photograph, you automatically understand that it’s at a different distance than the area in focus. The jumping spider’s version is just this turned up to 11. By comparing the defocused image with the focused one, the spider can tell just how far away its target is. The scientists tested their theory by observing the spiders under artificial light: in green light, where the defocus comparison works, the arachnids jumped accurately; bathed in pure red light, which the green-sensitive layers can’t use, they misjudged their jumps.
This “image defocus” could provide an interesting alternative to the way modern cameras focus and analyze distance. Take, for instance, the current crop of 3D cameras. Some use two lenses, and some let you take a panorama of shots and then pick two to stitch together into a 3D final image — these methods are analogous to the stereoscopic and motion parallax approaches found in biology mentioned above. If cameras were able to adopt the defocus method, it could potentially provide 3D information from a single lens. Alternatively, it might offer a different way of calculating focus, separate from the phase- and contrast-detection systems we see right now.
This method of calculating distance has actually been theorized for a while, studied under the name “depth from defocus,” but this analysis of the spider’s eyes shows another way to approach the problem — one that might even work with the color-specific pixels of modern-day camera sensors.
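To make the idea concrete, here is a toy one-dimensional sketch of depth from defocus in Python. Everything in it is an illustrative assumption rather than anything from the paper: a step edge stands in for the scene, a box filter stands in for optical blur, and the blur radius is assumed to grow linearly with distance from the focal plane (with a made-up constant k). Comparing the sharp edge with its defocused copy — measuring how wide the blurred transition is — recovers the distance under that model.

```python
# Toy "depth from defocus" sketch. All models and constants here are
# illustrative assumptions, not taken from the Science paper.

def box_blur(signal, radius):
    """Blur a 1-D signal with a box filter of the given radius
    (a crude stand-in for optical defocus blur)."""
    if radius == 0:
        return list(signal)
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def blur_extent(signal):
    """Number of samples over which a step edge ramps from 0 to 1 —
    our measure of how defocused the edge is."""
    start = next(i for i, v in enumerate(signal) if v > 0.0)
    end = next(i for i, v in enumerate(signal) if v >= 1.0)
    return end - start

# A sharp step edge: what the in-focus retinal layer "sees".
edge = [0.0] * 50 + [1.0] * 50

# Assumed model: blur radius = k * distance from the focal plane,
# with k = 2 samples per unit distance (made up for this sketch).
k = 2
for true_distance in (1, 3, 5):
    defocused = box_blur(edge, k * true_distance)  # defocused layer
    # A box blur of radius r widens the edge by 2*r samples, so
    # inverting the model gives distance = extent / (2 * k).
    estimated = blur_extent(defocused) / (2 * k)
    print(true_distance, estimated)  # prints 1 1.0, then 3 3.0, then 5 5.0
```

The key point the sketch shares with the spider is that distance falls out of comparing two views of the same scene that differ only in focus — no second lens and no head movement required.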