The scientists drew on the "Fourier transform," a signal-processing technique that decomposes a signal into its constituent frequencies. For their project, the research team used the Xbox One's Kinect motion sensor and camera, taking advantage of its depth sensor.
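To illustrate the idea of decomposing a signal into its frequencies, here is a minimal sketch (not the researchers' code) that mixes two sinusoids and recovers their frequencies with a discrete Fourier transform:

```python
import numpy as np

# Illustrative signal: 5 Hz and 12 Hz sinusoids, sampled at 100 Hz for 1 s.
fs = 100
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The discrete Fourier transform separates the component frequencies.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two largest spectral peaks sit at the frequencies we mixed in.
peaks = np.sort(freqs[np.argsort(np.abs(spectrum))[-2:]])
print(peaks.tolist())  # [5.0, 12.0]
```

Each sinusoid contributes a sharp peak in the spectrum, which is why the transform can "break down separate frequencies" even when they overlap in time.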
“You physically cannot make a camera that picks out multiple reflections,” says Ayush Bhandari, a PhD student in the MIT Media Lab and first author on the new paper. “That would mean that you take time slices so fast that [the camera] actually starts to operate at the speed of light, which is technically impossible. So what’s the trick? We use the Fourier transform.”

The MIT researchers had already developed a system that fires light into a scene and gauges the differences between the arrival times of light reflected by nearby objects, such as panes of glass, and more distant objects. In modifying the Kinect, the team joined forces with Microsoft Research, adapting the camera to emit specific frequencies of light and to recognize reflections coming from different depths.

As MIT News explains: “The researchers developed a special camera that emits light only of specific frequencies and gauges the intensity of the reflections. That information, coupled with knowledge of the number of different reflectors positioned between the camera and the scene of interest, enables the researchers’ algorithms to deduce the phase of the returning light and separate out signals from different depths.”

Laurent Daudet, a professor of physics at Paris Diderot University, says, “What is remarkable about this work is the mixture of advanced mathematical concepts, such as sampling theory and phase retrieval, with real engineering achievements.”

Researchers describe the technology as a promising development that could benefit the photography industry. Academics have praised the Media Lab for solving the reflection problem with a video-game accessory rather than relying on large, costly, lab-quality equipment.

Source: MIT News
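The approach described above can be sketched numerically. The model below is an assumption based on the article's description, not the researchers' actual algorithm: each modulation frequency yields a complex measurement that sums a phasor from every reflector, and with the number of reflectors known (two here, e.g. a glass pane and a wall), a brute-force search over candidate depth pairs recovers both depths from the phases:

```python
import numpy as np
from itertools import combinations

C = 3e8  # speed of light, m/s

# Two hypothetical reflectors: a glass pane at 0.5 m and a wall at 2.0 m.
true_depths = np.array([0.5, 2.0])
amplitudes = np.array([0.3, 1.0])
delays = 2 * true_depths / C  # round-trip times of flight

# Complex measurements at several modulation frequencies (10-100 MHz),
# mimicking a time-of-flight camera that records phase and intensity.
freqs = np.linspace(10e6, 100e6, 16)
measured = (amplitudes * np.exp(-2j * np.pi * np.outer(freqs, delays))).sum(axis=1)

# Knowing there are exactly two reflectors, search candidate depth pairs
# for the one whose predicted measurements best match the data.
candidates = np.linspace(0.1, 3.0, 59)  # 5 cm grid
best, best_err = None, np.inf
for d1, d2 in combinations(candidates, 2):
    tau = 2 * np.array([d1, d2]) / C
    basis = np.exp(-2j * np.pi * np.outer(freqs, tau))
    # Least-squares amplitudes for this candidate depth pair.
    a, *_ = np.linalg.lstsq(basis, measured, rcond=None)
    err = np.linalg.norm(basis @ a - measured)
    if err < best_err:
        best, best_err = (d1, d2), err

print(best)  # close to (0.5, 2.0)
```

A single modulation frequency cannot distinguish the two overlapping reflections, but the way their combined phase and intensity vary across frequencies pins down both depths, which matches the article's point that measuring at specific frequencies (rather than impossibly fast time slices) is the trick.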