
Have you ever been at an event and seen multiple people pull out their smartphones to record whatever's going on at the same time? What if there were a way to take all those video feeds and automatically combine them in a way that not only makes sense, but follows the action and makes edits that respect the generally accepted rules of cinematography?

Researchers at Disney are working on just such a tool, which they're presenting at this year's SIGGRAPH. You can see it in action below, and it's an ingenious way to combine multiple technologies into a single package.

A big part of how it works is using the multiple feeds to map the scene in 3D space, figure out what the majority of people are looking at, and keep that as the main focus of the final video. Since most of the people recording will likely be pointing their phones at the most exciting part of the scene, it makes sense to focus there.
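To give a rough idea of the kind of geometry involved, here's a minimal sketch (not Disney's actual algorithm, and the function and example values are made up for illustration) of estimating a shared point of interest as the 3D point closest to every camera's view ray, assuming each phone's position and facing direction have already been recovered:

```python
# Illustrative sketch only: find the 3D point nearest to all camera view rays,
# as a stand-in for "what the majority of people are looking at".
import numpy as np

def estimate_focus_point(origins, directions):
    """Least-squares point closest to every camera's view ray.

    origins:    (N, 3) array of camera positions.
    directions: (N, 3) array of view directions (need not be unit length).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the plane perpendicular to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Example: three phones roughly pointed at the same performer.
origins = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [2.0, 3.0, 0.0]])
directions = np.array([[1.0, 1.0, 0.0], [-1.0, 1.0, 0.0], [0.0, -1.0, 0.0]])
print(estimate_focus_point(origins, directions))  # ~[2, 2, 0]
```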

The tool would also follow some of the basic rules of cinematography to make a more coherent final cut. By not crossing the line of action, and by not cutting between cameras at a similar angle and distance from the subject, the final video comes together in a more cohesive way.
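A simple way to picture those rules is as penalties on candidate cuts. The sketch below is a hypothetical scoring function (the weights, the axis used for the line of action, and the 30-degree threshold are all assumptions, not details from the paper) that marks cuts as worse when they cross the line of action or jump to a nearly identical viewpoint:

```python
# Illustrative sketch only: score a candidate cut between two cameras,
# penalizing edits that break common cinematography rules.
import numpy as np

def cut_penalty(cur_cam, next_cam, subject, min_angle_deg=30.0):
    """Higher values mean a worse cut. Cameras and subject are 3D positions."""
    to_cur = subject - cur_cam
    to_next = subject - next_cam

    # 180-degree rule: both cameras should stay on the same side of the line
    # of action (approximated here by the x-axis through the subject).
    crossed_line = np.sign(to_cur[1]) != np.sign(to_next[1])

    # 30-degree rule: cutting between viewpoints with a very similar angle
    # on the subject reads as a jarring jump cut.
    cos_angle = np.dot(to_cur, to_next) / (np.linalg.norm(to_cur) * np.linalg.norm(to_next))
    angle_deg = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    jump_cut = angle_deg < min_angle_deg

    return 10.0 * crossed_line + 5.0 * jump_cut

# Pick the better camera to cut to from camera 0.
subject = np.array([0.0, 0.0, 0.0])
cams = [np.array([3.0, -2.0, 1.0]), np.array([3.2, -2.1, 1.0]), np.array([-2.0, -3.0, 1.0])]
print([cut_penalty(cams[0], c, subject) for c in cams[1:]])
# camera 1 is a jump cut (penalty 5.0); camera 2 is a clean cut on the same side (0.0)
```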

Disney isn't the only company working on this sort of tech. In the video below, its version is compared to one already on the market called Vyclone.

For people who don't fancy manually editing together a half dozen video feeds of the same event, this technology could provide a much, much better way of showing the best parts of everyone's individual viewpoints, neatly edited together into a single package.

https://www.youtube.com/watch?v=tVRjTphHvmE

[via Engadget]