Face Emotion Recognition (EMO-AffectNet)
Build on-device AI face emotion recognition applications with ZETIC.MLange
We provide Face Emotion Recognition demo application source code for both Android and iOS.
What is EMO-AffectNet?
EMO-AffectNet is a facial emotion recognition model built on ResNet-50, a deep convolutional neural network architecture widely used across computer vision tasks such as image classification.
- Model on Hugging Face: face_emotion_recognition
Model Pipelining
For accurate emotion recognition, the model must be given an image cropped to the facial area. To accomplish this, we construct a pipeline with a Face Detection model (a minimal sketch follows the list):
- Face Detection: Use the Face Detection model to accurately detect face regions in the image. Extract that part of the original image using the detected face region information.
- Face Emotion Recognition: Input the extracted face image into the Face Emotion Recognition model to analyze emotions.
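The logic is the same on every platform. Here is a minimal Python sketch of the flow; detect_face and classify_emotion are hypothetical stand-ins for the two models, not part of ZETIC.MLange:

import numpy as np

def detect_face(image: np.ndarray) -> tuple[int, int, int, int]:
    """Hypothetical stand-in for the Face Detection model: returns (x, y, w, h)."""
    h, w = image.shape[:2]
    return w // 4, h // 4, w // 2, h // 2  # dummy central box

def classify_emotion(face: np.ndarray) -> str:
    """Hypothetical stand-in for the Face Emotion Recognition model."""
    return "Neutral"  # dummy label

def recognize_emotion(image: np.ndarray) -> str:
    x, y, w, h = detect_face(image)      # (1) detect the face region
    face_crop = image[y:y + h, x:x + w]  # (2) crop it from the original image
    return classify_emotion(face_crop)   # (3) classify the cropped face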
Step-by-step Implementation
Prerequisites
Prepare the models from Face Emotion Recognition (Hugging Face) and Face Detection (GitHub).
Face Detection model:
pip install tf2onnx
python -m tf2onnx.convert --tflite face_detection_short_range.tflite --output face_detection_short_range.onnx --opset 13
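Before moving on, it can be worth sanity-checking the conversion. A minimal sketch, assuming onnxruntime is installed:

import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("face_detection_short_range.onnx")
inp = session.get_inputs()[0]
print(inp.name, inp.shape)  # inspect the expected input name and shape

# Replace any dynamic dimensions with 1 and run a dummy inference
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])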
Face Emotion Recognition model:
You can find the ResNet50 class here.
import torch
import torch.nn as nn
import numpy as np
emo_affectnet = ResNet50(7, channels=3)
emo_affectnet.load_state_dict(torch.load('FER_static_ResNet50_AffectNet.pt'))
emo_affectnet.eval()
model_cpu = emo_affectnet.cpu()
# cur_face is the cropped face image as a preprocessed torch tensor (see the sketch below)
model_traced = torch.jit.trace(model_cpu, (cur_face))
np_cur_face = cur_face.detach().numpy()
np.save("data/cur_face.npy", np_cur_face)
output_model_path = "models/FER_static_ResNet50_AffectNet_traced.pt"
torch.jit.save(model_traced, output_model_path)
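The tracing code above assumes cur_face already holds a preprocessed face tensor. A minimal sketch of producing it, assuming the 224x224 RGB input and ImageNet normalization used in the EMO-AffectNet repository (verify the exact transform there; the file path is illustrative):

import torch
from PIL import Image
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Load a face crop produced by the face detection step (illustrative path)
face_img = Image.open("data/cropped_face.png").convert("RGB")
cur_face = transform(face_img).unsqueeze(0)  # shape: (1, 3, 224, 224)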
Generate ZETIC.MLange model keys
If you want to generate your own model keys for both models, you can upload the models and inputs via the MLange Dashboard,
or use CLI:
# face detection model
zetic gen -p $PROJECT_NAME -i input.npy face_detection_short_range.onnx
# emotion recognition model (reuses the face tensor saved during tracing)
zetic gen -p $PROJECT_NAME -i data/cur_face.npy FER_static_ResNet50_AffectNet_traced.pt
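The -i flag expects a sample input saved as a NumPy array. For the emotion model, data/cur_face.npy from the tracing step works as-is; for the face detector, here is a sketch of creating one, assuming MediaPipe's short-range detector takes a 1x128x128x3 float32 tensor (verify against the converted ONNX model):

import numpy as np

# Dummy sample input; replace with a real preprocessed frame for
# more representative results.
np.save("input.npy", np.random.rand(1, 128, 128, 3).astype(np.float32))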
Implement ZeticMLangeModel
We prepared model keys for the demo app: face_detection and face_emotion_recognition. You can use these model keys to try the ZETIC.MLange application.
For detailed application setup, please follow the Deploy to Android Studio guide.
val faceEmotionRecognitionModel = ZeticMLangeModel(this, "face_emotion_recognition")
faceEmotionRecognitionModel.run(inputs)
val outputs = faceEmotionRecognitionModel.outputBuffers
For detailed application setup, please follow the Deploy to Xcode guide.
let faceEmotionRecognitionModel = ZeticMLangeModel("face_emotion_recognition")
faceEmotionRecognitionModel.run(inputs)
let outputs = faceEmotionRecognitionModel.getOutputDataArray()
Prepare Face Emotion Recognition feature extractor
We provide a Face Emotion Recognition feature extractor as an Android and iOS module.
The Face Emotion Recognition feature extractor extension will be released as an open-source repository soon.
// (0) Initialize Face Emotion Recognition wrapper
val feature = FaceEmotionRecognitionWrapper()
// (1) Preprocess bitmap and get processed float array
val inputs = feature.preprocess(bitmap)
// ... run model ...
// (2) Postprocess to bitmap
val resultBitmap = feature.postprocess(outputs)
import ZeticMLange
// (0) Initialize Face Emotion Recognition wrapper
let feature = FaceEmotionRecognitionWrapper()
// (1) Preprocess UIImage and get processed float array
let inputs = feature.preprocess(image)
// ... run model ...
// (2) Postprocess to UIImage
let resultBitmap = feature.postprocess(&outputs)
Complete Face Emotion Recognition Pipeline Implementation
The complete implementation requires pipelining two models: Face Detection followed by Face Emotion Recognition.
Step 1: Face Detection
// (0) Initialize face detection model
val faceDetectionModel = ZeticMLangeModel(this, "face_detection")
val faceDetection = FaceDetectionWrapper()
// (1) Preprocess image
val faceDetectionInputs = faceDetection.preprocess(bitmap)
// (2) Run face detection model
faceDetectionModel.run(faceDetectionInputs)
val faceDetectionOutputs = faceDetectionModel.outputBuffers
// (3) Postprocess to get face regions
val faceDetectionPostprocessed = faceDetection.postprocess(faceDetectionOutputs)
Step 2: Face Emotion Recognition
// (0) Initialize face emotion recognition model
val faceEmotionRecognitionModel = ZeticMLangeModel(this, "face_emotion_recognition")
val faceEmotionRecognition = FaceEmotionRecognitionWrapper()
// (1) Preprocess with detected face regions
val faceEmotionRecognitionInputs = faceEmotionRecognition.preprocess(bitmap, faceDetectionPostprocessed)
// (2) Run face emotion recognition model
faceEmotionRecognitionModel.run(faceEmotionRecognitionInputs)
val faceEmotionRecognitionOutputs = faceEmotionRecognitionModel.outputBuffers
// (3) Postprocess to get emotions
val faceEmotionRecognitionPostprocessed = faceEmotionRecognition.postprocess(faceEmotionRecognitionOutputs)
Step 1: Face Detection
// (0) Initialize face detection model
let faceDetectionModel = ZeticMLangeModel("face_detection")
let faceDetection = FaceDetectionWrapper()
// (1) Preprocess image
let faceDetectionInputs = faceDetection.preprocess(image)
// (2) Run face detection model
faceDetectionModel.run(faceDetectionInputs)
var faceDetectionOutputs = faceDetectionModel.getOutputDataArray()
// (3) Postprocess to get face regions
let faceDetectionPostprocessed = faceDetection.postprocess(&faceDetectionOutputs)
Step 2: Face Emotion Recognition
// (0) Initialize face emotion recognition model
let faceEmotionRecognitionModel = ZeticMLangeModel("face_emotion_recognition")
let faceEmotionRecognition = FaceEmotionRecognitionWrapper()
// (1) Preprocess with detected face regions
let faceEmotionRecognitionInputs = faceEmotionRecognition.preprocess(image, faceDetectionPostprocessed)
// (2) Run face emotion recognition model
faceEmotionRecognitionModel.run(faceEmotionRecognitionInputs)
var faceEmotionRecognitionOutputs = faceEmotionRecognitionModel.getOutputDataArray()
// (3) Postprocess to get emotions
let faceEmotionRecognitionPostprocessed = faceEmotionRecognition.postprocess(&faceEmotionRecognitionOutputs)
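On both platforms the emotion model ultimately produces 7 class scores. A hedged sketch of decoding them; the label order below follows the EMO-AffectNet repository's AffectNet mapping, so verify it against the source before relying on it:

import numpy as np

EMOTIONS = ["Neutral", "Happiness", "Sadness", "Surprise", "Fear", "Disgust", "Anger"]

def decode_emotion(logits: np.ndarray) -> str:
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return EMOTIONS[int(np.argmax(probs))]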
Conclusion
With ZETIC.MLange, building your own on-device AI applications with NPU utilization is remarkably easy. We provide the simplest way to implement machine learning applications as model pipelines. Our Face Emotion Recognition application, for example, uses a straightforward two-stage pipeline: Face Detection → Face Emotion Recognition.
We're continually uploading new models to our examples and our Hugging Face page.
Stay tuned, and contact us for collaborations!