In this project we got to implement the arctangent function in FPGA hardware — among other things! It turns out to be a key building block if you want to extract depth information from non-stereoscopic images. (Technology that gives the same result is already available in devices like the Microsoft Kinect.) We worked with a company that uses a combination of picoprojector and camera to obtain depth information. The projector illuminates the scene with a structured light pattern, which is captured by the camera and then processed by an FPGA to extract the depth of each pixel in the scene at full frame rate. In effect you get X, Y, and Z for each pixel in real time. Because we had to stream live data to a PC, our FPGA board also includes a USB 3.0 interface that accommodates the high data rates involved. The same FPGA also interfaces to an Aptina sensor as the image source.
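The post doesn't say which algorithm we used for the arctangent, but the classic FPGA-friendly choice is CORDIC in vectoring mode: it needs only shifts, adds, and a small angle table per stage, so it pipelines nicely at one stage per clock. Here is a minimal Python sketch of the idea (the hardware version would use fixed-point registers rather than floats; the function name and iteration count are our own choices for illustration):

```python
import math

def cordic_atan2(y, x, iterations=16):
    """Compute atan2(y, x) with vectoring-mode CORDIC.

    Each iteration rotates the vector (x, y) toward the positive
    x-axis by +/- atan(2^-i), accumulating the applied angle in z.
    When y has been driven to zero, z holds the original angle.
    Multiplications by 2^-i become bit shifts in hardware.
    """
    # Precomputed angle table: one ROM entry per pipeline stage.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]

    # CORDIC only converges for x > 0, so pre-rotate points in the
    # left half-plane by half a turn and record that in z.
    z = 0.0
    if x < 0:
        z = math.pi if y >= 0 else -math.pi
        x, y = -x, -y

    for i, a in enumerate(angles):
        if y > 0:
            # Rotate clockwise: y shrinks toward zero, z grows.
            x, y = x + y * 2.0 ** -i, y - x * 2.0 ** -i
            z += a
        else:
            # Rotate counterclockwise.
            x, y = x - y * 2.0 ** -i, y + x * 2.0 ** -i
            z -= a
    return z
```

With 16 iterations the angle error is bounded by atan(2^-16), i.e. well under a hundredth of a degree, and each iteration maps to one small pipeline stage, which is what makes per-pixel arctangents at full frame rate practical in an FPGA.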
We also wrote the FPGA code controlling the projector in the development system, migrated it to ASIC code, and assisted in design reviews with the ASIC vendor.
If you can get the same result from 3D cameras or the Microsoft Kinect, why reinvent the wheel? Because … size. Imagine having a 3D scanner in your iPhone. We’ll wait while you ponder the implications.