CEVA Integrates DSP and Voice Neural Networks with TensorFlow Lite for Microcontrollers

CEVA, a licensor of wireless connectivity and smart sensing technologies, today announced that its CEVA-BX DSP cores and WhisPro speech recognition software, targeting conversational AI and contextual awareness applications, now also support TensorFlow Lite for Microcontrollers, a framework for deploying tiny machine learning models on edge-device processors.

Tiny machine learning brings artificial intelligence to extremely low-power, always-on, battery-operated IoT devices for on-device sensor data analytics in areas such as audio, voice, image, and motion. Users of TensorFlow Lite for Microcontrollers now have a unified processor architecture to run both the framework and the associated neural network workloads required to build intelligent connected products.
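To give a sense of what running the framework on such a device involves, a typical TensorFlow Lite for Microcontrollers inference loop looks roughly like the sketch below. This is a generic illustration, not CEVA-specific code: `g_model` (a flatbuffer exported offline by the TFLite converter), the operator list, and the arena size are placeholders, and the exact interpreter constructor varies across library versions.

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Model flatbuffer, assumed to be generated offline and linked into firmware.
extern const unsigned char g_model[];

// Static working memory for tensors; sized per model (placeholder value here).
constexpr int kTensorArenaSize = 10 * 1024;
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

int main() {
  const tflite::Model* model = tflite::GetModel(g_model);

  // Register only the operators the model uses, keeping the binary small.
  tflite::MicroMutableOpResolver<3> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddSoftmax();

  tflite::MicroInterpreter interpreter(model, resolver,
                                       tensor_arena, kTensorArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) return 1;

  // Fill the input tensor (e.g. with audio features), then run inference.
  TfLiteTensor* input = interpreter.input(0);
  // ... write quantized features into input->data.int8 ...
  if (interpreter.Invoke() != kTfLiteOk) return 1;

  TfLiteTensor* output = interpreter.output(0);
  // ... read class scores from output->data.int8 ...
  return 0;
}
```

On a CEVA-BX-class core, the framework's kernels would additionally be mapped to the DSP's vector instructions, which is the optimization work the announcement describes.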

CEVA's WhisPro speech recognition software and custom command models are integrated with the TensorFlow Lite framework, further accelerating the development of small-footprint voice assistants and other voice-controlled IoT devices.

"CEVA has been at the forefront of machine learning and neural network inferencing for embedded systems and understands that the future of ML is tiny, going into extremely power- and cost-constrained devices," said Pete Warden, technical lead of TensorFlow at Google, in a statement. "Their continued investment in powerful architectures, tools, and software that support TensorFlow models provides a compelling offering for a new generation of intelligent embedded devices to harness the power of AI."

"The increasing demand for on-device AI to augment contextual awareness and conversational AI workloads poses new challenges to the cost, performance, and power efficiency of intelligent devices. TensorFlow Lite for Microcontrollers dramatically simplifies the development of these devices by providing a lean framework to deploy machine learning models on resource-constrained processors," said Erez Bar-Niv, chief technology officer at CEVA, in a statement. "With full optimization of this framework for our CEVA-BX DSPs and our WhisPro speech recognition models, we are lowering the entry barrier for SoC companies and OEMs to add intelligent sensing to their devices."

The CEVA-BX DSP family handles front-end voice, sensor fusion, audio, and general DSP workloads simultaneously, in addition to AI inferencing at runtime. It also allows customers and algorithm developers to take advantage of CEVA's extensive audio, voice, and speech machine learning software and libraries to accelerate their product designs.

Related Articles

CEVA Partners with Fluent.ai on Multilingual Speech Understanding for Edge Devices

Fluent.ai's technologies for neural network-based speech-to-intent applications have been optimized for CEVA's low-power audio and sensor hub processors targeting wearables, consumer devices, and IoT.

Dolby MS12 Multistream Decoder Now Supported on CEVA's Audio DSP

The latest Dolby MS12 is implemented on CEVA-BX2 DSPs, providing audio support for smart TVs, set-top boxes, and more.

Cadence Tensilica HiFi IP Now Supports TensorFlow Lite for Microcontrollers

Cadence's optimized software enables low-power neural network inferencing for advanced audio, voice, and sensing applications.

Novatek Adopts CEVA Audio/Voice Technology for Its Smart TV SoCs

Novatek adds CEVA's DSP and far-field voice front-end software for voice wakeup and control capabilities to TVs.