Project Tango at this point is probably not new to anyone reading this, as we’ve discussed it before, but over the past few years Google has been hard at work turning positional tracking and localization into a consumer-ready application. While there was an early tablet available with an Nvidia Tegra SoC inside, it had a number of issues on both the hardware and software sides. As the Tegra SoC was not really designed for the workloads Project Tango puts on a mobile device, much of the work was done on the GPU and CPU, with offloading to dedicated coprocessors such as ST-M’s Cortex-M3 MCUs for sensor hub and timestamp functionality, computer vision accelerators like a VPU from Movidius, and other chips that ultimately increased BOM and board area requirements.
At SIGGRAPH today, Google recapped some of the progress we saw at Google I/O on the algorithm side, namely the work put into polishing the sensor fusion, feature tracking, modeling, texturing, and motion tracking aspects of Tango. Anyone who has tried to research how well smartphones can act as inertial navigation devices will probably know that it’s basically impossible to avoid massive integration error, which leaves the device requiring constant location updates from an outside source to avoid drifting.
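To make the scale of the problem concrete, here is a minimal sketch in Python (with an invented bias figure, not a measured one) of why pure inertial dead reckoning falls apart: integrating acceleration twice turns even a tiny constant sensor bias into position error that grows with the square of time.

```python
# Illustrative only: dead reckoning from a slightly biased accelerometer.
# The 0.05 m/s^2 bias is a made-up figure for demonstration.
dt = 0.01            # 100 Hz sample rate
bias = 0.05          # constant accelerometer bias in m/s^2
velocity = 0.0
position = 0.0
for step in range(6000):          # 60 seconds of samples
    accel = 0.0 + bias            # true acceleration is zero; sensor reads bias
    velocity += accel * dt        # first integration: velocity error grows linearly
    position += velocity * dt     # second integration: position error grows quadratically
print(f"position error after 60 s: {position:.1f} m")   # roughly 90 m
```

Even with the device sitting perfectly still, this small bias alone yields on the order of 90 meters of apparent motion after a minute, which is why some outside reference is essential.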
With Tango, the strategy taken to avoid this problem works at multiple levels. At a high level, sensor fusion combines camera data and inertial data to cancel out noise from both systems. On the camera side, feature tracking combined with depth sensing from the depth camera helps with visualizing the environment for both mapping and augmented reality applications. The combination of a traditional camera and a fisheye camera also allows for a sort of distortion correction and additional sanity checks on depth using parallax, although if you’ve ever tried a dual-lens solution on a phone you can probably guess that this distance figure isn’t accurate enough to rely on completely. These are hard engineering problems, so it hasn’t been until recently that we’ve actually seen programs that can do all of these things reliably. Google disclosed that without using local anchor points held in memory, the system drifts at a rate of about 1 meter for every 100 meters traversed, so if you never return to previously mapped areas the device will eventually accumulate a non-trivial amount of error. However, if you do return to previously mapped areas, Tango’s algorithms can re-localize against the stored map, resetting the location estimate and eliminating the accumulated error.
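As a rough illustration of the fusion idea (my own toy example, not Google’s actual algorithm), the sketch below blends fast-but-drifting inertial deltas with slower, noisy visual position fixes; the gain and noise values are invented for demonstration.

```python
import random

# Toy complementary filter: fuse fast-but-drifting inertial deltas with
# slower, noisy visual fixes. All numbers are invented for illustration.
ALPHA = 0.8            # weight kept on the inertial estimate at each fix

random.seed(0)
true_pos = 0.0
inertial = 0.0         # pure dead reckoning, for comparison
fused = 0.0
for step in range(1, 501):
    true_pos += 0.02                    # device moves 2 cm per step
    imu_delta = 0.02 + 0.0005           # measured delta carries a small bias
    inertial += imu_delta               # bias accumulates without bound
    fused += imu_delta
    if step % 10 == 0:                  # a visual fix arrives every 10 steps
        vision = true_pos + random.gauss(0, 0.01)     # noisy but unbiased
        fused = ALPHA * fused + (1 - ALPHA) * vision  # pull estimate back
print(f"dead-reckoning error: {abs(inertial - true_pos):.3f} m")
print(f"fused error:          {abs(fused - true_pos):.3f} m")
```

The inertial path alone drifts without bound, while the periodic visual corrections keep the fused estimate bounded; that, in essence, is the role the camera pipeline plays for the IMU.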
With the Lenovo Phab 2 Pro, Tango is finally coming to fruition in a consumer-facing way, and Google has integrated the Tango APIs into Android for the Nougat release this fall. Of course, software is only one part of the equation; it’s going to be very difficult to justify supporting Tango capabilities if doing so requires all of the previously mentioned coprocessors in addition to the depth-sensing camera and fisheye camera sensors.
In order to enable Tango in a way that doesn’t require cutting into battery size or general power efficiency, Qualcomm has been working with Google to make the Tango API run in its entirety on the Snapdragon SoC rather than on dedicated coprocessors. While Snapdragon SoCs generally have a global synchronous clock, Tango pushes this to its full extent, using the shared clock to timestamp multiple sensors and enable the previously mentioned sensor fusion. In addition, processing is done on the Snapdragon 652’s or 820’s ISP and Hexagon DSP, as well as on the integrated sensor hub with its low-power island. The end result is that enabling the Tango APIs requires no processing on the GPU and relatively little on the CPU, so Tango-enabled applications can run without hitting thermal limits, leaving headroom for more advanced applications built on the Tango APIs. Qualcomm claimed that less than 10% of CPU cycles on the Snapdragon 652 and 820 are needed, along with less than 35% of DSP cycles. Qualcomm noted in further discussion that the use of Hexagon Vector Extensions would cut CPU usage further, as much of the current CPU load sits on the NEON vector units.
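To give a sense of why a single synchronous clock matters for fusion, here is a small sketch (my own illustration, not Qualcomm’s implementation, with invented sample values) of aligning IMU samples to a camera frame’s capture time once every sensor is stamped against the same timebase:

```python
from bisect import bisect_left

# With all sensors on one timebase, an IMU reading can be interpolated
# to the exact capture time of a camera frame. Values are invented.
imu_ts   = [0.000, 0.005, 0.010, 0.015, 0.020]   # 200 Hz gyro, seconds
imu_gyro = [0.10, 0.12, 0.11, 0.14, 0.13]        # rad/s about one axis

def gyro_at(t):
    """Linearly interpolate the gyro rate at camera-frame time t."""
    i = bisect_left(imu_ts, t)
    if i == 0 or i == len(imu_ts):
        raise ValueError("frame time outside IMU window")
    t0, t1 = imu_ts[i - 1], imu_ts[i]
    w = (t - t0) / (t1 - t0)
    return (1 - w) * imu_gyro[i - 1] + w * imu_gyro[i]

frame_time = 0.0123    # camera frame stamped on the same clock
print(f"gyro rate at frame: {gyro_at(frame_time):.4f} rad/s")
```

Without a shared clock, the two streams would first have to be cross-correlated to estimate their offset, adding latency and error before fusion can even begin.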
To show how all of this translates into practice, Qualcomm demonstrated the Lenovo Phab 2 Pro with some preloaded demo apps, including a home improvement application from Lowe's that supports size measurements and live previews of appliances in the home at a fairly high level of detail. The quality of the augmented reality visualization is actually shockingly good: the device can differentiate between walls and the floor, so you can’t just stick random objects in random places, and object placement is stable enough that there’s none of the strange floatiness that often accompanies augmented reality. Objects are redrawn fast enough that camera motion results in seamless, fluid movement of virtual objects, and in general I found it difficult to spot any real issues in execution.
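One plausible way to make the wall/floor distinction the demo showed, sketched below with thresholds and a function name of my own choosing (this is not the Tango API), is to compare a detected plane’s surface normal against gravity:

```python
import math

# Hypothetical illustration: classify a detected plane by the angle
# between its normal and the gravity-aligned "up" vector.
def classify_plane(normal, up=(0.0, 1.0, 0.0), tol_deg=15.0):
    dot = sum(n * u for n, u in zip(normal, up))
    norm = math.sqrt(sum(n * n for n in normal))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if angle < tol_deg:
        return "floor"          # normal points up: horizontal surface
    if abs(angle - 90.0) < tol_deg:
        return "wall"           # normal perpendicular to gravity
    return "other"

print(classify_plane((0.02, 0.99, 0.05)))   # -> floor
print(classify_plane((0.98, 0.03, 0.17)))   # -> wall
```

An app can then refuse to anchor a refrigerator to a wall or a picture frame to the floor, which matches the constrained placement behavior seen in the Lowe's demo.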
While Project Tango still seemed to have some bugs to iron out and some features and polish to add, the ecosystem appears to have progressed to the point where Tango API features are basically ready for consumers. The environment tracking needed for true six-degree-of-freedom movement surely has implications for mobile VR headsets as well, and given that only two extra cameras are needed to enable Tango API features, it shouldn’t be that difficult for high-end devices to integrate them, although given the size of these sensors the feature may be targeted more at phablets than at regular smartphones.
from AnandTech http://ift.tt/2a9ayrQ
via IFTTT