Face Detection
Build on-device AI face detection applications with ZETIC.MLange
On-device AI Face Detection App with ZETIC.MLange
We provide Face Detection demo application source code for both Android and iOS.
What is Face Detection?
The Face Detection model in Google's MediaPipe is a high-performance machine learning model designed for real-time face detection in images and video streams.
- Official documentation: Face Detector - Google AI
Step-by-step Implementation
Prerequisites
Prepare the Face Detection model (face_detection_short_range.tflite) from GitHub and convert it to ONNX format:
pip install tf2onnx
python -m tf2onnx.convert --tflite face_detection_short_range.tflite --output face_detection_short_range.onnx --opset 13
Generate ZETIC.MLange Model
If you want to generate your own model, you can upload the model and a sample input using the MLange Dashboard, or use the CLI:
zetic gen -p $PROJECT_NAME -i faces.npy face_detection_short_range.onnx
Implement ZeticMLangeModel
We have prepared a model key for the demo app: google/MediaPipe-Face-Detection. You can use this model key to try the ZETIC.MLange application.
For detailed application setup, please follow the Deploy to Android Studio guide.
val model = ZeticMLangeModel(this, PERSONAL_KEY, "google/MediaPipe-Face-Detection")
val outputs = model.run(inputs)
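Loading and running the model can take noticeable time, especially on first launch, so in a real app you would typically keep both off the main thread. The following is a minimal sketch, assuming the call site is an Activity (named MainActivity here) that uses the AndroidX lifecycleScope and kotlinx.coroutines; PERSONAL_KEY and inputs are as above, and updateUi is a hypothetical UI callback.
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Sketch: initialize and run the model off the main thread (hypothetical MainActivity)
lifecycleScope.launch(Dispatchers.Default) {
    val model = ZeticMLangeModel(this@MainActivity, PERSONAL_KEY, "google/MediaPipe-Face-Detection")
    val outputs = model.run(inputs)
    withContext(Dispatchers.Main) {
        updateUi(outputs) // hypothetical callback that consumes the raw outputs
    }
}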
For detailed application setup, please follow the Deploy to Xcode guide.
let model = try ZeticMLangeModel(PERSONAL_KEY, "google/MediaPipe-Face-Detection")
let outputs = try model.run(inputs)
Prepare Face Detection feature extractor
We provide a Face Detection feature extractor as an Android and iOS module. The extension will be released as an open-source repository soon.
// (0) Initialize Face Detection wrapper
val feature = FaceDetectionWrapper()
// (1) Preprocess bitmap and get processed float array
val inputs = feature.preprocess(bitmap)
// ... run model ...
// (2) Postprocess to bitmap
val resultBitmap = feature.postprocess(outputs)
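The wrapper snippet above assumes a Bitmap is already available. For a quick end-to-end test on Android, you can decode a bundled image and display the postprocessed result. This is only a sketch: R.drawable.sample_faces and imageView are hypothetical placeholders, and model is the ZeticMLangeModel created earlier.
import android.graphics.BitmapFactory

// Sketch: drive the wrapper with a still image (hypothetical resource and ImageView)
val bitmap = BitmapFactory.decodeResource(resources, R.drawable.sample_faces)
val inputs = feature.preprocess(bitmap)          // (1) preprocess the Bitmap
val outputs = model.run(inputs)                  // (2) run the model on-device
val resultBitmap = feature.postprocess(outputs)  // (3) detections drawn onto a Bitmap
imageView.setImageBitmap(resultBitmap)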
import ZeticMLange
import ext
// (0) Initialize Face Detection wrapper
let feature = FaceDetectionWrapper()
// (1) Preprocess UIImage and get processed float array
let inputs = feature.preprocess(image)
// ... run model ...
// (2) Postprocess to UIImage
let resultBitmap = feature.postprocess(&outputs)
Complete Face Detection Implementation
// (0) Initialize model and feature
val model = ZeticMLangeModel(this, PERSONAL_KEY, "google/MediaPipe-Face-Detection")
val faceDetection = FaceDetectionWrapper()
// (1) Preprocess image
val faceDetectionInputs = faceDetection.preprocess(bitmap)
// (2) Process model
val faceDetectionOutputs = model.run(faceDetectionInputs)
// (3) Postprocess model run result
val faceDetectionPostprocessed = faceDetection.postprocess(faceDetectionOutputs)
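For real-time detection from a live camera feed, the same three steps can run inside a camera frame analyzer. Below is a non-authoritative sketch assuming a CameraX ImageAnalysis pipeline (ImageProxy.toBitmap() requires camera-core 1.3+); the ZeticMLangeModel and FaceDetectionWrapper calls are used exactly as shown above, and onResult is a hypothetical callback for rendering.
import android.graphics.Bitmap
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy

// Sketch: CameraX analyzer that runs face detection on every frame (assumptions noted above)
class FaceAnalyzer(
    private val model: ZeticMLangeModel,
    private val faceDetection: FaceDetectionWrapper,
    private val onResult: (Bitmap) -> Unit
) : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val bitmap = imageProxy.toBitmap()             // camera-core 1.3+ helper
        val inputs = faceDetection.preprocess(bitmap)  // (1) preprocess
        val outputs = model.run(inputs)                // (2) run on-device
        onResult(faceDetection.postprocess(outputs))   // (3) postprocess to Bitmap
        imageProxy.close()                             // release the frame for the next one
    }
}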
// (0) Initialize model and feature
let model = try ZeticMLangeModel(PERSONAL_KEY, "google/MediaPipe-Face-Detection")
let faceDetection = FaceDetectionWrapper()
// (1) Preprocess image
let faceDetectionInputs = faceDetection.preprocess(uiImage)
// (2) Process model
let faceDetectionOutputs = try model.run(faceDetectionInputs)
// (3) Postprocess model run result
let faceDetectionPostprocessed = faceDetection.postprocess(&faceDetectionOutputs)
Conclusion
With ZETIC.MLange, building your own on-device AI applications with NPU utilization is incredibly easy. We've developed a custom OpenCV module and an ML application pipeline, making the implementation of models like face detection remarkably simple and efficient.
This streamlined approach allows you to integrate advanced features with minimal effort. We're continually uploading new models to our examples and HuggingFace page.
Stay tuned, and contact us for collaborations!