Face Detection
Run MediaPipe Face Detection on-device with ZETIC Melange.
Build an on-device face detection application using Google's MediaPipe Face Detection model with ZETIC Melange. This tutorial covers converting the model, deploying it, and running inference on Android and iOS.
We provide Face Detection demo application source code for both Android and iOS.
What You Will Build
A real-time face detection application that identifies face locations in camera frames using the MediaPipe Face Detection model, accelerated on-device with NPU hardware.
Prerequisites
- A ZETIC Melange account with a Personal Key (sign up at melange.zetic.ai)
- Python 3.8+ with `tf2onnx` installed
- The Face Detection TFLite model (`face_detection_short_range.tflite`)
- Android Studio or Xcode for mobile deployment
What is Face Detection?
The Face Detection model in Google's MediaPipe is a high-performance machine learning model designed for real-time face detection in images and video streams.
- Official documentation: Face Detector - Google AI
Step 1: Convert the Model to ONNX
Prepare the Face Detection model and convert it from TFLite to ONNX format:
```bash
pip install tf2onnx
python -m tf2onnx.convert --tflite face_detection_short_range.tflite --output face_detection_short_range.onnx --opset 13
```
Step 2: Generate Melange Model
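Generating a Melange model requires a sample input file. A minimal sketch for creating a placeholder `faces.npy` with NumPy — note that the `1×128×128×3` float32 shape and `[-1, 1]` value range are assumptions for the short-range model; match your converted model's actual input signature:

```python
import numpy as np

# Placeholder sample input for the Melange CLI.
# Shape/normalization assumption: the short-range MediaPipe Face Detection
# model takes a 1x128x128x3 float32 tensor with values in [-1, 1].
sample = np.random.uniform(-1.0, 1.0, size=(1, 128, 128, 3)).astype(np.float32)
np.save("faces.npy", sample)
```

A real preprocessed camera frame containing a face makes a more representative sample than random noise; use one if you have it.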
Upload the model and inputs via the Melange Dashboard or use the CLI:
```bash
zetic gen -p $PROJECT_NAME -i faces.npy face_detection_short_range.onnx
```
Step 3: Implement ZeticMLangeModel
We prepared a model key for the demo app: google/MediaPipe-Face-Detection. You can use this model key to try the Melange Application.
Android

For detailed application setup, please follow the Android Integration Guide.

```kotlin
val model = ZeticMLangeModel(this, PERSONAL_KEY, "google/MediaPipe-Face-Detection")
val outputs = model.run(inputs)
```

iOS

For detailed application setup, please follow the iOS Integration Guide.

```swift
let model = try ZeticMLangeModel(personalKey: PERSONAL_KEY, name: "google/MediaPipe-Face-Detection")
let outputs = try model.run(inputs)
```

Step 4: Use the Face Detection Wrapper
We provide a Face Detection feature extractor as an Android and iOS module.
The Face Detection feature extractor extension will be released as an open-source repository soon.
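Under the hood, a wrapper like this typically resizes each frame to the model's input resolution and normalizes pixel values before inference. A rough NumPy sketch of that idea — not the wrapper's actual implementation; the 128×128 input size and `[-1, 1]` range are assumptions for the short-range model:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 128) -> np.ndarray:
    """Nearest-neighbor resize an HxWx3 uint8 frame and normalize it
    to a 1 x size x size x 3 float32 tensor in [-1, 1]."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size  # source row for each output row
    cols = np.arange(size) * w // size  # source column for each output column
    resized = image[rows][:, cols]
    normalized = resized.astype(np.float32) / 127.5 - 1.0
    return normalized[np.newaxis, ...]
```

The actual Android and iOS wrappers operate on `Bitmap` and `UIImage` objects directly; this only illustrates the conceptual transform.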
Android

```kotlin
// (0) Initialize Face Detection wrapper
val feature = FaceDetectionWrapper()

// (1) Preprocess bitmap and get processed float array
val inputs = feature.preprocess(bitmap)

// ... run model ...

// (2) Postprocess to bitmap
val resultBitmap = feature.postprocess(outputs)
```

iOS

```swift
import ZeticMLange
import ext

// (0) Initialize Face Detection wrapper
let feature = FaceDetectionWrapper()

// (1) Preprocess UIImage and get processed float array
let inputs = feature.preprocess(image)

// ... run model ...

// (2) Postprocess to UIImage
let resultImage = feature.postprocess(&outputs)
```

Complete Face Detection Implementation
Android

```kotlin
// (0) Initialize model and feature
val model = ZeticMLangeModel(this, PERSONAL_KEY, "google/MediaPipe-Face-Detection")
val faceDetection = FaceDetectionWrapper()

// (1) Preprocess image
val faceDetectionInputs = faceDetection.preprocess(bitmap)

// (2) Process model
val faceDetectionOutputs = model.run(faceDetectionInputs)

// (3) Postprocess model run result
val faceDetectionPostprocessed = faceDetection.postprocess(faceDetectionOutputs)
```

iOS

```swift
// (0) Initialize model and feature
let model = try ZeticMLangeModel(personalKey: PERSONAL_KEY, name: "google/MediaPipe-Face-Detection")
let faceDetection = FaceDetectionWrapper()

// (1) Preprocess image
let faceDetectionInputs = faceDetection.preprocess(uiImage)

// (2) Process model
let faceDetectionOutputs = try model.run(faceDetectionInputs)

// (3) Postprocess model run result
let faceDetectionPostprocessed = faceDetection.postprocess(&faceDetectionOutputs)
```

Conclusion
With ZETIC Melange, building on-device face detection applications with NPU acceleration is straightforward. Our custom OpenCV module and ML application pipeline make the implementation simple and efficient.
We are continually uploading new models to our examples and HuggingFace page.
Stay tuned, and contact us for collaborations!