Advanced Configuration
Advanced Melange configuration options for Android.
This guide covers advanced configuration options available in the Melange Android SDK.
Inference Mode Selection
Melange supports multiple inference modes to balance speed and accuracy. By default, the SDK uses ModelMode.RUN_AUTO, which selects the fastest configuration while maintaining high-quality results (SNR > 20dB).
```kotlin
// Default (Auto): balanced speed and accuracy
val model = ZeticMLangeModel(
    context = this,
    personalKey = PERSONAL_KEY,
    name = MODEL_NAME,
    modelMode = ModelMode.RUN_AUTO
)

// Speed-first: minimum latency
val modelFast = ZeticMLangeModel(
    context = this,
    personalKey = PERSONAL_KEY,
    name = MODEL_NAME,
    modelMode = ModelMode.RUN_SPEED
)

// Accuracy-first: maximum precision
val modelAccurate = ZeticMLangeModel(
    context = this,
    personalKey = PERSONAL_KEY,
    name = MODEL_NAME,
    modelMode = ModelMode.RUN_ACCURACY
)
```

For a detailed explanation of each mode, see Inference Mode Selection.
Model Version Pinning
By default, the SDK loads the latest model version. You can pin to a specific version for production stability:
```kotlin
val model = ZeticMLangeModel(
    context = this,
    personalKey = PERSONAL_KEY,
    name = MODEL_NAME,
    version = 2 // Pin to a specific version
)
```

Multi-Model Pipelines
For applications that chain multiple models (e.g., detection followed by classification), initialize each model separately and pass outputs as inputs:
```kotlin
// Initialize pipeline models
val detectionModel = ZeticMLangeModel(this, PERSONAL_KEY, "detection_model")
val classificationModel = ZeticMLangeModel(this, PERSONAL_KEY, "classification_model")

// Run pipeline
val detectionOutputs = detectionModel.run(inputs)

// Process detection outputs and prepare classification inputs
val classificationOutputs = classificationModel.run(classificationInputs)
```

For a complete pipeline example, see Multi-Model Pipelines.
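The hand-off step generalizes: each stage consumes the previous stage's outputs, so an N-stage pipeline is just a fold over the stages. The sketch below illustrates only that chaining pattern; `Stage` and the `FloatArray` payloads are hypothetical stand-ins, not Melange APIs — in real code each stage's function would wrap a `ZeticMLangeModel.run(...)` call plus its post-processing.

```kotlin
// Hypothetical stage abstraction illustrating the chaining pattern.
// Each stage maps the previous stage's outputs to the next stage's inputs.
class Stage(private val run: (List<FloatArray>) -> List<FloatArray>) {
    fun invoke(inputs: List<FloatArray>): List<FloatArray> = run(inputs)
}

// Thread the initial inputs through every stage in order.
fun pipeline(stages: List<Stage>, inputs: List<FloatArray>): List<FloatArray> =
    stages.fold(inputs) { acc, stage -> stage.invoke(acc) }
```

Keeping each stage's post-processing inside its own function makes the hand-off between models explicit and easy to test in isolation.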
Threading Considerations
Model initialization performs a network call on first use. Always initialize models on a background thread to avoid blocking the UI.
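Because that first-use network call is expensive, it is also worth constructing the model once and reusing it across inferences. A minimal sketch of a thread-safe, once-only holder using Kotlin's synchronized `lazy` — the `factory` parameter here is a hypothetical stand-in for calling the `ZeticMLangeModel` constructor:

```kotlin
// Runs the factory exactly once, even if the first access races
// across threads (lazy defaults to SYNCHRONIZED mode).
class ModelHolder<T>(factory: () -> T) {
    val model: T by lazy(LazyThreadSafetyMode.SYNCHRONIZED, factory)
}
```

Accessing `holder.model` from a background coroutine triggers initialization on first use only; subsequent accesses return the cached instance.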
```kotlin
lifecycleScope.launch(Dispatchers.IO) {
    val model = ZeticMLangeModel(this@MainActivity, PERSONAL_KEY, MODEL_NAME)
    val outputs = model.run(inputs)
    withContext(Dispatchers.Main) {
        // Update UI with results
    }
}
```

Next Steps
- Inference Mode Selection: Detailed mode comparison
- Performance Optimization: Tips for best performance
- ZeticMLangeModel API Reference: Full API documentation