While Audience has traditionally focused on voice products, it is now making its first move into combined voice recognition and sensor hub products that leverage sensor fusion and neural networks. The NUE N100 is the first product in this line; it performs keyword recognition and can keep the main CPU asleep until a command is received and registered. Audience emphasized how its solution eliminates the need for additional waiting once the initial wakeup occurs, as it caches the spoken command and feeds it into a given system like Google voice actions. The solution is also said to have a reduced false-wakeup rate, which means far less power is wasted on unintended activations. Audience's solution can cache up to five keywords, can accurately distinguish between different speakers thanks to its neural network-based approach, and can be programmed either by the end user or the OEM.
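To make that flow concrete, here is a minimal sketch of how an always-on front end with command caching might behave. To be clear, every name, parameter, and frame format below is an invented stand-in: Audience has not published the N100's API, and the real chip runs this logic on a low-power DSP rather than in Python.

```python
# Hypothetical sketch of wake-word detection with command caching.
# All names, thresholds, and frame formats are invented for
# illustration; nothing here comes from Audience's firmware or SDK.
from collections import deque

CACHE_FRAMES = 300  # ~3 s of 10 ms audio frames cached after wakeup


class LowPowerVoiceFrontEnd:
    """Models the always-on path: the host CPU stays asleep
    until a complete command has been captured."""

    def __init__(self, keywords):
        # Audience's solution is said to cache up to five keywords.
        self.keywords = keywords[:5]
        self.cache = deque(maxlen=CACHE_FRAMES)
        self.awake = False

    def detect_keyword(self, frame):
        # Stand-in for the on-chip neural-network keyword detector.
        return frame.get("label") in self.keywords

    def process_frame(self, frame):
        """Returns None while the host sleeps; returns the cached
        command frames once the user has finished speaking."""
        if not self.awake:
            if self.detect_keyword(frame):
                self.awake = True  # wake word heard; start caching
            return None
        self.cache.append(frame)  # buffer the command, host still asleep
        if frame.get("end_of_speech"):
            return self._flush_to_host()
        return None

    def _flush_to_host(self):
        # Only now would the host CPU wake: the cached audio is handed
        # off in one burst (e.g. to Google voice actions), so the user
        # never has to pause and repeat after the wake word.
        command = list(self.cache)
        self.cache.clear()
        self.awake = False
        return command


if __name__ == "__main__":
    fe = LowPowerVoiceFrontEnd(["ok_nue"])  # "ok_nue" is a made-up keyword
    frames = [
        {"label": "ok_nue"},                         # wake word
        {"label": "speech"},                         # command audio...
        {"label": "speech", "end_of_speech": True},  # ...until silence
    ]
    for f in frames:
        cmd = fe.process_frame(f)
        if cmd is not None:
            print(f"host woken with {len(cmd)} cached frames")
```

The point of the caching step is that the wake-word hit and the command capture happen in the same low-power domain, so nothing the user says is lost while the application processor comes out of sleep.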
Beyond the VoiceQ system, Audience is also introducing MotionQ, a contextual motion system. In its current state, using the various sensors present on a smartphone or tablet, the motion processing can determine whether the device is in a pocket, on a desk, or in a person's hand; whether its user is sitting, standing, walking, or running; and whether the device is in a car, on a train, on a bike, or in many other contextual scenarios, relying on the neural network algorithms mentioned earlier. The N100 also has OSP support, which means OEMs can take the N100 and implement custom algorithms on top of the work Audience has already done. The N100 will be available for sampling in mid-2015, which means devices shipping with this chip should appear around 2016.
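For a sense of what this kind of contextual classification involves, the toy sketch below maps a few sensor-derived features to the placement and activity categories mentioned above. Audience's actual implementation is neural-network based; the hand-written thresholds and feature names here are purely illustrative assumptions.

```python
# Toy stand-in for the kind of context classification MotionQ is
# described as doing. Audience's real system uses neural networks;
# these features, names, and thresholds are invented for illustration.
from dataclasses import dataclass


@dataclass
class SensorSnapshot:
    accel_var: float   # variance of accelerometer magnitude
    light_lux: float   # ambient light level
    proximity: bool    # proximity sensor covered?


def classify_placement(s: SensorSnapshot) -> str:
    """Pocket vs. desk vs. in-hand, from light/proximity/motion cues."""
    if s.proximity and s.light_lux < 5:
        return "pocket"
    if s.accel_var < 0.01:
        return "desk"
    return "hand"


def classify_activity(s: SensorSnapshot) -> str:
    """Rough sitting-or-standing / walking / running split."""
    if s.accel_var < 0.05:
        return "sitting or standing"
    if s.accel_var < 1.0:
        return "walking"
    return "running"


if __name__ == "__main__":
    snap = SensorSnapshot(accel_var=0.4, light_lux=2.0, proximity=True)
    print(classify_placement(snap), "/", classify_activity(snap))
    # -> pocket / walking
```

In practice a neural network replaces these if/else rules precisely because the real feature space (including the car/train/bike cases) is far messier than a handful of thresholds can capture, and the N100's OSP support would let an OEM layer its own classifiers on top.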
from AnandTech http://ift.tt/1BshfOi
via IFTTT