ZETIC.MLange

Face Landmark

Build on-device AI face landmark detection applications with ZETIC.MLange

On-device AI Face Landmark App with ZETIC.MLange

We provide Face Landmark demo application source code for both Android and iOS.

What is Face Landmark?

The Face Landmark model in Google's MediaPipe is a highly efficient machine learning model used for real-time face detection and landmark extraction.

Model Pipelining

For accurate results, the Face Landmark model must be given an image cropped to the correct facial area. To accomplish this, we construct a pipeline with the Face Detection model:

  1. Face Detection: Use the Face Detection model to detect face regions in the image, then extract that region from the original image using the detected bounding box (a minimal cropping sketch follows this list).
  2. Face Landmark: Input the extracted face image into the Face Landmark model to analyze facial landmarks.
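
The demo's FaceLandmarkWrapper performs this cropping internally, but conceptually the extraction step amounts to something like the Kotlin sketch below (cropFaceRegion and faceRect are hypothetical; the bounding box is assumed to already be in pixel coordinates of the original image):

    import android.graphics.Bitmap
    import android.graphics.Rect

    // Hypothetical helper: crop the detected face region out of the original frame.
    fun cropFaceRegion(original: Bitmap, faceRect: Rect): Bitmap {
        // Clamp the rectangle to the image bounds before cropping.
        val left = faceRect.left.coerceIn(0, original.width - 1)
        val top = faceRect.top.coerceIn(0, original.height - 1)
        val width = faceRect.width().coerceAtMost(original.width - left)
        val height = faceRect.height().coerceAtMost(original.height - top)
        return Bitmap.createBitmap(original, left, top, width, height)
    }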

Step-by-step Implementation

Prerequisites

Download the Face Detection and Face Landmark TFLite models from the MediaPipe repository on GitHub and convert them to ONNX format.

Face Detection model:

pip install tf2onnx
python -m tf2onnx.convert --tflite face_detection_short_range.tflite --output face_detection_short_range.onnx --opset 13

Face Landmark model:

pip install tf2onnx
python -m tf2onnx.convert --tflite face_landmark.tflite --output face_landmark.onnx --opset 13

Generate ZETIC.MLange Model

To generate your own model, upload the model and a sample input to the MLange Dashboard, or use the CLI:

zetic gen -p $PROJECT_NAME -i input.npy face_detection_short_range.onnx
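
The same command form should presumably work for the Face Landmark model as well, with a sample input matching that model's input shape:

    zetic gen -p $PROJECT_NAME -i input.npy face_landmark.onnx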

Implement ZeticMLangeModel

We prepared model keys for the demo app: face_detection and face_landmark. You can use these model keys to try out the ZETIC.MLange demo application.

For detailed Android application setup, please follow the Deploy to Android Studio guide.

    // Initialize the model with its model key
    val faceLandmarkModel = ZeticMLangeModel(this, "face_landmark")

    // Run inference on the preprocessed inputs
    faceLandmarkModel.run(inputs)

    // Retrieve the output buffers
    val outputs = faceLandmarkModel.outputBuffers

For detailed iOS application setup, please follow the Deploy to Xcode guide.

    // Initialize the model with its model key
    let faceLandmarkModel = ZeticMLangeModel("face_landmark")

    // Run inference on the preprocessed inputs
    faceLandmarkModel.run(inputs)

    // Retrieve the output data
    let outputs = faceLandmarkModel.getOutputDataArray()

Prepare Face Landmark feature extractor

We provide a Face Landmark feature extractor as an Android and iOS module.

The Face Landmark feature extractor extension will be released as an open-source repository soon.

On Android:

    // (0) Initialize Face Landmark wrapper
    val feature = FaceLandmarkWrapper()

    // (1) Preprocess bitmap and get processed float array
    val inputs = feature.preprocess(bitmap)

    // ... run model ...

    // (2) Postprocess to bitmap
    val resultBitmap = feature.postprocess(outputs)

On iOS:

    import ZeticMLange

    // (0) Initialize Face Landmark wrapper
    let feature = FaceLandmarkWrapper()
    
    // (1) Preprocess UIImage and get processed float array
    let inputs = feature.preprocess(image)

    // ... run model ...

    // (2) Postprocess to UIImage
    let resultImage = feature.postprocess(&outputs)

Complete Face Landmark Pipeline Implementation

The complete implementation pipelines two models: Face Detection followed by Face Landmark. The Android and iOS steps are shown below, and a combined Kotlin sketch follows the Android steps.

Step 1: Face Detection (Android)

    // (0) Initialize face detection model
    val faceDetectionModel = ZeticMLangeModel(this, "face_detection")
    val faceDetection = FaceDetectionWrapper()
    
    // (1) Preprocess image
    val faceDetectionInputs = faceDetection.preprocess(bitmap)

    // (2) Run face detection model
    faceDetectionModel.run(faceDetectionInputs)
    val faceDetectionOutputs = faceDetectionModel.outputBuffers

    // (3) Postprocess to get face regions
    val faceDetectionPostprocessed = faceDetection.postprocess(faceDetectionOutputs)

Step 2: Face Landmark (Android)

    // (0) Initialize face landmark model
    val faceLandmarkModel = ZeticMLangeModel(this, "face_landmark")
    val faceLandmark = FaceLandmarkWrapper()
    
    // (1) Preprocess with detected face regions
    val faceLandmarkInputs = faceLandmark.preprocess(bitmap, faceDetectionPostprocessed)

    // (2) Run face landmark model
    faceLandmarkModel.run(faceLandmarkInputs)
    val faceLandmarkOutputs = faceLandmarkModel.outputBuffers

    // (3) Postprocess to get landmarks
    val faceLandmarkPostprocessed = faceLandmark.postprocess(faceLandmarkOutputs)
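
Putting the two Android steps together, a minimal end-to-end sketch could look like the following (the Context parameter, the helper name detectFaceLandmarks, and the Bitmap return type are assumptions based on the snippets above):

    // Hypothetical helper combining face detection and landmark extraction.
    fun detectFaceLandmarks(context: Context, bitmap: Bitmap): Bitmap {
        // Stage 1: detect face regions in the frame
        val faceDetectionModel = ZeticMLangeModel(context, "face_detection")
        val faceDetection = FaceDetectionWrapper()
        faceDetectionModel.run(faceDetection.preprocess(bitmap))
        val faceRegions = faceDetection.postprocess(faceDetectionModel.outputBuffers)

        // Stage 2: extract landmarks from the detected regions
        val faceLandmarkModel = ZeticMLangeModel(context, "face_landmark")
        val faceLandmark = FaceLandmarkWrapper()
        faceLandmarkModel.run(faceLandmark.preprocess(bitmap, faceRegions))
        return faceLandmark.postprocess(faceLandmarkModel.outputBuffers)
    }

In a real application you would likely create the models and wrappers once and reuse them across frames rather than re-initializing them on every call.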

Step 1: Face Detection (iOS)

    // (0) Initialize face detection model
    let faceDetectionModel = ZeticMLangeModel("face_detection")
    let faceDetection = FaceDetectionWrapper()
    
    // (1) Preprocess image
    let faceDetectionInputs = faceDetection.preprocess(image)

    // (2) Run face detection model
    faceDetectionModel.run(faceDetectionInputs)
    let faceDetectionOutputs = faceDetectionModel.getOutputDataArray()

    // (3) Postprocess to get face regions
    let faceDetectionPostprocessed = faceDetection.postprocess(&faceDetectionOutputs)

Step 2: Face Landmark (iOS)

    // (0) Initialize face landmark model
    let faceLandmarkModel = ZeticMLangeModel("face_landmark")
    let faceLandmark = FaceLandmarkWrapper()
    
    // (1) Preprocess with detected face regions
    let faceLandmarkInputs = faceLandmark.preprocess(image, faceDetectionPostprocessed)

    // (2) Run face landmark model
    faceLandmarkModel.run(faceLandmarkInputs)
    let faceLandmarkOutputs = faceLandmarkModel.getOutputDataArray()

    // (3) Postprocess to get landmarks
    let faceLandmarkPostprocessed = faceLandmark.postprocess(&faceLandmarkOutputs)

Conclusion

Discover just how easy and lightning-fast building your own on-device AI applications can be with ZETIC.MLange! Harness the full power of mobile NPUs for unparalleled performance and speed.

We're continually adding new models to our examples and HuggingFace page.

Stay tuned and contact us to collaborate on exciting projects!