CGI now dominates the big screen. Big budgets buy big explosions, and enormous time and effort goes into the tiniest of details. Augmented reality is built almost entirely from these intricate 3D renderings, and as shiny as it is, it comes at a steep cost in man-hours. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have taken a different angle on augmented reality (AR) and have broken some very important ground on revolutionizing the process.

“Interactive Dynamic Video” (IDV) works entirely off of vibration. A camera records a video of an object while vibrations of different frequencies pass through it, exciting what are called “vibration modes.” That footage is then run through a clever algorithm that searches for even the tiniest wiggle. The object and its potential for movement are analyzed and given interactive properties and, voila, an IDV is born. The resulting model can be pushed, pulled, stacked, poked and punched, just like the real thing.
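To get a feel for the idea, here is a minimal, purely illustrative sketch of how one might hunt for those tiny wiggles: it takes a short clip of a gently tapped object, runs a temporal FFT at every pixel, and reports the strongest vibration frequencies. This is not MIT's actual IDV pipeline (which uses far more sophisticated motion extraction); the clip filename is hypothetical, and the only libraries assumed are OpenCV and NumPy.

```python
# Illustrative sketch only: crude per-pixel frequency analysis, not the real IDV algorithm.
import cv2
import numpy as np

cap = cv2.VideoCapture("vibrating_object.mp4")   # hypothetical input clip
fps = cap.get(cv2.CAP_PROP_FPS)

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Work in grayscale; small brightness changes stand in for tiny motions.
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
cap.release()

video = np.stack(frames)                 # shape: (num_frames, height, width)
video -= video.mean(axis=0)              # remove the static scene, keep the wiggle

# Temporal FFT at every pixel: peaks reveal the frequencies the object vibrates at.
spectrum = np.abs(np.fft.rfft(video, axis=0))
freqs = np.fft.rfftfreq(video.shape[0], d=1.0 / fps)

# Dominant vibration frequency per pixel (skip the DC bin at index 0).
dominant = freqs[1 + spectrum[1:].argmax(axis=0)]
print("Strongest vibration frequencies found:", np.unique(dominant)[:10])
```

In the real system, these recovered vibration modes are what let the software predict how the object would respond to a virtual push or poke.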


While the tech to track and animate movement already exists in the film and video game industries – green screens and the like – most of that work takes careful preparation, a controlled environment and specific lighting. What makes MIT's innovation so unique is not only its time- and money-saving implications, but its ability to create these realistically interactive images in an uncontrolled environment. You only need a simple camera setup and editing software to create an IDV, and plenty of applications stand to improve by using this tech.

Of course, there’s VR, but engineering and film could benefit significantly too. Lowering the production cost of a feature without compromising its quality is one option. Checking a structure’s integrity is another.

What do you think of this new tech? Let us know in the comments below!