Object Detection (YOLOv8 / YOLOv11)
Build on-device AI object detection applications with ZETIC.MLange
This is an on-device AI object detection app built with ZETIC.MLange that supports both YOLOv8 and YOLOv11 models.
We provide YOLOv11 demo application source code for both Android and iOS. If you change the model key to a YOLOv8 model, you can try YOLOv8 as well.
What is YOLOv11?
YOLOv11 is the latest version of the acclaimed real-time object detection and image segmentation model.
- Official documentation by Ultralytics: YOLOv11 Docs
- Currently, we support only detector mode. Additional features will be supported later.
Step-by-step Implementation
Prerequisites
We prepared a model key for you: Steve/YOLOv11_comparison. You can skip to Step 3 if you want to use our pre-configured model.
Export YOLOv11 model
You will get a yolo11n.onnx model after running this script:
from ultralytics import YOLO
# Load a YOLOv11 model
model = YOLO("yolo11n.pt")
# Export the model
model.export(format="onnx", opset=12, simplify=True, dynamic=False, imgsz=640)
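Optionally, you can sanity-check the exported model before uploading it. This is a minimal sketch assuming onnxruntime is installed (pip install onnxruntime); it only inspects the model's input and output signature.
import onnxruntime as ort

# Load the exported model on CPU and print its I/O signature
session = ort.InferenceSession("yolo11n.onnx", providers=["CPUExecutionProvider"])
for i in session.get_inputs():
    print("input:", i.name, i.shape, i.type)    # expect [1, 3, 640, 640], float32
for o in session.get_outputs():
    print("output:", o.name, o.shape, o.type)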
Prepare input sample
You can use our default sample input: yolo8_detector_input.npy
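If you download the sample, you can inspect it like this; the expected shape is our assumption based on the preprocessing code below.
import numpy as np

sample = np.load("yolo8_detector_input.npy")
print(sample.shape, sample.dtype)  # expected: (1, 3, 640, 640) float32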
Or prepare your own input from an image file:
import cv2
import numpy as np
def preprocess_image(image_path, target_size=(640, 640)):
    img = cv2.imread(image_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR -> RGB
    img = cv2.resize(img, target_size)
    img = img.astype(np.float32) / 255.0        # normalize to [0, 1]
    img = np.transpose(img, (2, 0, 1))          # HWC -> CHW
    img = np.expand_dims(img, axis=0)           # add batch dim -> (1, 3, 640, 640)
    return img
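To produce the input file for the CLI step below, run the function on a sample image and save the result as images.npy; the image path here is a placeholder.
# Preprocess one image and save it as the sample input for `zetic gen`
input_tensor = preprocess_image("your_image.jpg")  # placeholder path
np.save("images.npy", input_tensor)
print(input_tensor.shape)  # (1, 3, 640, 640)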
Generate ZETIC.MLange Model
If you want to generate your own model, you can upload the model and input with the MLange Dashboard, or use the CLI:
zetic gen -p $PROJECT_NAME -i images.npy yolo11n.onnx
Implement ZeticMLangeModel
For detailed application setup, please follow the Deploy to Android Studio guide.
// (1) Load Zetic MLange model
val model = ZeticMLangeModel(CONTEXT, PERSONAL_KEY, PROJECT_NAME)
// (2) Prepare model inputs: a float32 tensor of shape (1, 3, 640, 640), RGB, normalized to [0, 1]
val inputs: Array<Tensor> = TODO("Prepare your inputs")
// (3) Run and get output tensors of the model
val outputs = model.run(inputs)
For detailed application setup, please follow the Deploy to Xcode guide.
// (1) Load Zetic MLange model
let model = try ZeticMLangeModel(tokenKey: PERSONAL_KEY, name: PROJECT_NAME, version: VERSION)
// (2) Prepare model inputs: a float32 tensor of shape (1, 3, 640, 640), RGB, normalized to [0, 1]
let inputs: [Tensor] = [] // Prepare your inputs
// (3) Run and get output tensors of the model
let outputs = try model.run(inputs)
Prepare YOLOv8 image feature extractor
We provide a YOLOv8 feature extractor as an Android and iOS module. This feature extractor works with both YOLOv8 and YOLOv11 models.
We use the ZETIC.MLange extension module here; check out the documentation here.
val model = ZeticMLangeModelWrapper(this, PERSONAL_KEY, PROJECT_NAME)
val pipeline = ZeticMLangePipeline(
    feature = YOLOv8(this, model = model),
    inputSource = CameraSource(this, preview.holder, preferredSize),
)
pipeline.loop { result ->
    // visualize YOLO result here
}
import ZeticMLange
import ext
let model = try ZeticMLangeModelWrapper(PERSONAL_KEY, PROJECT_NAME)
let pipeline = ZeticMLangePipeline(feature: model, inputSource: CameraSource())
pipeline.startLoop()
while true {
    let frame = pipeline.latestResult
    // visualize YOLO result here
}
pipeline.stopLoop()
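The feature extractor performs the output decoding for you on device. For reference only, the sketch below shows roughly how the raw model output can be decoded in Python; it assumes the standard Ultralytics ONNX export layout of (1, 84, 8400) for an 80-class model (4 box coordinates plus 80 class scores per candidate), so verify the shape against your own export.
import cv2
import numpy as np

def decode_predictions(output, conf_threshold=0.25, nms_threshold=0.45):
    # output: (1, 84, 8400) -> (8400, 84): cx, cy, w, h + 80 class scores per row
    predictions = output[0].transpose(1, 0)
    boxes, scores, class_ids = [], [], []
    for row in predictions:
        class_scores = row[4:]
        class_id = int(np.argmax(class_scores))
        score = float(class_scores[class_id])
        if score < conf_threshold:
            continue
        cx, cy, w, h = row[:4]
        boxes.append([int(cx - w / 2), int(cy - h / 2), int(w), int(h)])
        scores.append(score)
        class_ids.append(class_id)
    # Non-maximum suppression to drop overlapping candidates
    indices = np.array(cv2.dnn.NMSBoxes(boxes, scores, conf_threshold, nms_threshold)).flatten()
    return [(boxes[i], scores[i], class_ids[i]) for i in indices]
Note that the boxes are in the 640x640 input space and must be scaled back to the original image size before drawing.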
Conclusion
With ZETIC.MLange, you can easily build your own on-device AI application with NPU utilization. We continuously upload models to our examples and HuggingFace page.
Please stay tuned and contact us for collaborations!