Google LiteRT: Unlocking your hardware’s full speed

Paul Krill

Google's on-device AI framework gains a new GPU engine, bringing broad GPU and NPU support across Android, iOS, macOS, Windows, Linux, and the web.

LiteRT, Google's on-device inference framework that evolved from TensorFlow Lite (TFLite), has gained enhanced acceleration capabilities powered by a "next-generation GPU engine" known as ML Drift.

Google said on January 28 that the update positions LiteRT as a truly universal on-device framework and marks a considerable improvement over its predecessor, TFLite. According to the company, LiteRT now delivers 1.4x faster GPU performance than TFLite, offers a streamlined path to both GPU and NPU acceleration across edge platforms, simplifies cross-platform deployment of generative AI models, and integrates with PyTorch and JAX through straightforward model conversion. Google initially previewed these acceleration features for LiteRT last May.
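
For developers curious about the conversion workflow, a minimal sketch of exporting a PyTorch model to LiteRT's .tflite format might look like the following. It assumes Google's ai-edge-torch Python package and its convert and export helpers; the names follow the library's published examples and should be checked against the current documentation.

import torch
import torchvision
import ai_edge_torch  # Google's PyTorch-to-LiteRT conversion package (assumed installed)

# Load a pretrained PyTorch model and put it in inference mode.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()

# Sample inputs define the shapes the converter traces through the model.
sample_inputs = (torch.randn(1, 3, 224, 224),)

# Convert to an on-device LiteRT/TFLite flatbuffer and write it to disk.
edge_model = ai_edge_torch.convert(model, sample_inputs)
edge_model.export("mobilenet_v2.tflite")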

Available on GitHub, LiteRT powers countless everyday applications, delivering low latency and strong privacy on billions of devices, Google stated. Thanks to the ML Drift GPU engine, LiteRT supports OpenCL, OpenGL, Metal, and WebGPU, letting developers deploy models across mobile, desktop, and web environments. On Android, LiteRT prioritizes the best-performing backend available on the device and falls back to OpenGL for broader compatibility. LiteRT also introduces a unified, simplified NPU deployment flow that abstracts away vendor-specific SDKs and manages fragmentation across the many SoC (system on chip) variants, Google explained.
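
As a rough illustration of the deployment side, the exported model can be loaded and run with the LiteRT interpreter. The snippet below shows plain CPU inference in Python and assumes the ai-edge-litert package's Interpreter class, a drop-in for the older TensorFlow Lite interpreter; GPU and NPU accelerator selection happens through the platform runtimes described above rather than in this basic example.

import numpy as np
from ai_edge_litert.interpreter import Interpreter  # assumed package name for the LiteRT runtime

# Load the converted model and allocate its tensors.
interpreter = Interpreter(model_path="mobilenet_v2.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Run inference on a dummy batch matching the model's expected input shape.
dummy_input = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print("Output shape:", output.shape)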

Further documentation for LiteRT can be found at ai.google.dev.
