Global Device Benchmark
Optimized AI model performance through real-world device benchmarking
ZETIC.MLange provides the best user experience by benchmarking the performance of AI models on a pool of real-world devices. It benchmarks across processors from different manufacturers, including CPU, GPU, and NPU. Using the benchmark results, MLange ensures the best performance on each end-user's device, regardless of device type.
Objective
Guarantee optimized target library installation
ZETIC.MLange generates various types of target libraries from your AI model. Not all target libraries will work perfectly on the end-user's device, so we select the best target library for each device based on the analyzed benchmark results. To accomplish this, we automatically benchmark each target library on all supported devices used by real users and collect its execution time.
Check model availability on end-user device
Certain hardware may not support the target libraries converted by ZETIC.MLange. To handle this, we automatically run inference with every target library on all supported devices and check its availability, ensuring the model performs safely.
The above process ensures that the optimized target library is installed and running on the end-user's device.
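As a rough illustration of the availability check described above, the logic reduces to attempting one inference with each backend and recording whether it succeeds. The Kotlin sketch below is hypothetical and does not use the MLange SDK; `Backend` and `runInference` are assumed names.

```kotlin
// Hypothetical sketch of the availability check; not the actual MLange SDK.
enum class Backend { CPU, GPU, NPU }

// `runInference` stands in for whatever executes the target library on the given backend.
fun checkAvailability(
    backends: List<Backend>,
    runInference: (Backend) -> Unit
): Map<Backend, Boolean> =
    backends.associateWith { backend ->
        try {
            runInference(backend)   // run a single inference with this target library
            true                    // inference completed -> backend is available
        } catch (e: Exception) {
            false                   // unsupported hardware or runtime failure
        }
    }
```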
How it works
Make test environment
We build an on-device test environment for each operating system on which the on-device AI model will run.
Perform benchmark on all possible devices
We collect the target libraries, AI model metadata, application binary, and other artifacts to build the tests, then run benchmarks on a pool of real devices.
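For illustration only, the artifacts gathered for a single benchmark run could be grouped as below; the type and field names are assumptions, not the actual MLange schema.

```kotlin
// Hypothetical grouping of the artifacts collected for a single benchmark run.
data class BenchmarkJob(
    val modelName: String,                      // e.g. "YOLOv11"
    val modelMetadata: Map<String, String>,     // input shapes, quantization, and similar metadata
    val targetLibraryIds: List<String>,         // identifiers of the generated target libraries
    val applicationBinary: String,              // path to the test app binary bundled for the run
    val devicePool: List<String>                // real devices the benchmark will run on
)
```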
Analyze the results
Once the benchmarks are done, we collect the results and group them by SoC manufacturer and target library for each device. The results look like this:
YOLOv11 Benchmark Results
| Device | SoC Manufacturer | CPU | GPU | NPU | Speedup (best vs CPU) |
|---|---|---|---|---|---|
| Samsung Galaxy A34 | MediaTek | 172.08 ms | 96.38 ms | 249.41 ms | x1.79 |
| Samsung Galaxy S22 5G | Qualcomm | 79.76 ms | 36.99 ms | 8 ms | x9.97 |
| Samsung Galaxy S23 | Qualcomm | 89.56 ms | 27.5 ms | 5.24 ms | x17.09 |
| Samsung Galaxy S24+ | Qualcomm | 60.43 ms | 21.46 ms | 3.92 ms | x15.42 |
| Samsung Galaxy S25 | Qualcomm | 53.69 ms | 17.22 ms | 3.72 ms | x14.43 |
| Apple iPhone 12 | Apple | 123.12 ms | 22.73 ms | 3.51 ms | x35.08 |
| Apple iPhone 14 | Apple | 111.29 ms | 15.75 ms | 3.75 ms | x29.68 |
| Apple iPhone 15 Pro Max | Apple | 96.36 ms | 7.72 ms | 2.05 ms | x47.00 |
| Apple iPhone 16 | Apple | 102.09 ms | 7.9 ms | 1.9 ms | x53.73 |
Source: Original Benchmark Report
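For reference, the last column is the CPU latency divided by the latency of the fastest backend on that device (e.g., 79.76 ms / 8 ms ≈ x9.97 for the Galaxy S22 5G). A minimal Kotlin sketch of that calculation, assuming latencies in milliseconds:

```kotlin
// Sketch: derive "speedup vs CPU" from per-backend latencies (milliseconds).
data class DeviceResult(val device: String, val cpuMs: Double, val gpuMs: Double, val npuMs: Double)

fun speedupVsCpu(r: DeviceResult): Double {
    val best = minOf(r.cpuMs, r.gpuMs, r.npuMs)   // fastest backend on this device
    return r.cpuMs / best                          // e.g. 79.76 / 8.0 ≈ 9.97 for the Galaxy S22 5G
}

fun main() {
    val s22 = DeviceResult("Samsung Galaxy S22 5G", cpuMs = 79.76, gpuMs = 36.99, npuMs = 8.0)
    println("x%.2f".format(speedupVsCpu(s22)))     // prints x9.97
}
```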
On the end-user device
Using the analyzed benchmark results, target libraries are prioritized by SoC manufacturer and processor. The end-user device uses its hardware identifier to fetch and install the optimal target library.
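Conceptually, the device-side step looks up the hardware identifier (for example, an SoC name) in the ranking produced by the analysis and installs the first available library. The sketch below is hypothetical Kotlin, not the MLange API; the ranking map, identifier strings, and library names are made up for illustration.

```kotlin
// Hypothetical device-side selection: map a hardware identifier to the
// highest-priority target library that passed the availability check.
fun selectTargetLibrary(
    hardwareId: String,                              // e.g. SoC identifier reported by the device
    priorityByHardware: Map<String, List<String>>,   // best-first ranking from the benchmark analysis
    isAvailable: (String) -> Boolean                 // availability result for this device
): String? =
    priorityByHardware[hardwareId]?.firstOrNull { isAvailable(it) }

fun main() {
    // Example: prefer the NPU library on this SoC, fall back to GPU/CPU if it is unavailable.
    val ranking = mapOf("qualcomm-sm8550" to listOf("yolov11-npu", "yolov11-gpu", "yolov11-cpu"))
    val chosen = selectTargetLibrary("qualcomm-sm8550", ranking) { lib -> lib != "yolov11-npu" }
    println(chosen)   // prints "yolov11-gpu" because the NPU library was marked unavailable
}
```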
Detailed profiling results are a premium feature
We run profiling for all users and guarantee the best performance of the on-device AI app. However, detailed profiling results are currently available to Starter users only.
Please contact us for more information.