Deploy software
Deploy a trained ML model to one machine or an entire fleet using the same fragment-based workflow you use for modules. When you retrain and upload a new model version, machines configured to track that version update automatically.
ML models in Viam are deployed as registry packages, the same way modules are. A machine needs two services to run a model:

- An ML model service (such as `tflite_cpu`), which deploys the model to the machine and runs it.
- A vision service (`mlmodel`), which uses the deployed model for inference.
You configure both services in a fragment, apply the fragment to your machines, and every machine downloads the model and starts running inference.
When you upload a new version of the model, machines update automatically (unless you pin to a specific version).
To configure the two services in a fragment:

- Add the ML model service: select your model package (for example, one with framework `tflite`) and add the `tflite_cpu` ML model service (or the appropriate model service for your model framework).
- Add the vision service: select type `mlmodel` and add the `mlmodel` vision service.

In the fragment configuration, each module and ML model package has a `version` field.
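A fragment wiring these together might look like the following sketch. The package path `my-org/my-model`, the service names, and the attribute keys (including the package placeholder syntax) are illustrative; check the config generated by the app for the exact shape:

```json
{
  "packages": [
    {
      "name": "my-model",
      "package": "my-org/my-model",
      "type": "ml_model",
      "version": "latest"
    }
  ],
  "services": [
    {
      "name": "mlmodel-1",
      "type": "mlmodel",
      "model": "tflite_cpu",
      "attributes": {
        "model_path": "${packages.ml_model/my-model}/my-model.tflite"
      }
    },
    {
      "name": "vision-1",
      "type": "vision",
      "model": "mlmodel",
      "attributes": {
        "mlmodel_name": "mlmodel-1"
      }
    }
  ]
}
```

The vision service references the ML model service by name, so every machine that receives this fragment gets both the model download and an inference endpoint.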
For the `version` field you can:

- Pin to a specific version (for example, `2026-03-15T10-30-00`) to prevent automatic updates.
- Use a `stable` tag on your fragment: deploy `stable` to production machines and a `development` tag to test machines. When the new model is validated on the development machines, move the `stable` tag to the new fragment revision.

For more on version strategies, see Update software.
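The difference between pinning and tag-tracking can be illustrated with a small sketch. This is plain Python for illustration, not the Viam SDK; `registry` is a hypothetical mapping of tags to concrete model versions:

```python
# Hypothetical registry state: tags point at concrete model versions.
registry = {
    "latest": "2026-04-01T09-00-00",
    "stable": "2026-03-15T10-30-00",
    "development": "2026-04-01T09-00-00",
}

def resolve_version(configured: str) -> str:
    """Return the concrete version a machine will run.

    A tag (e.g. "stable") follows whatever version it currently points
    to, so the machine updates when the tag moves; a pinned timestamp
    version is returned unchanged, so the machine never auto-updates.
    """
    return registry.get(configured, configured)

# A production machine tracking "stable" picks up the tagged version.
assert resolve_version("stable") == "2026-03-15T10-30-00"

# A pinned machine stays on its exact version regardless of new uploads.
assert resolve_version("2026-03-15T10-30-00") == "2026-03-15T10-30-00"

# Promoting a validated model: move the stable tag to the new revision.
registry["stable"] = registry["development"]
assert resolve_version("stable") == "2026-04-01T09-00-00"
```

The promotion step at the end is the whole release mechanism under this strategy: production machines update only when you move the tag, never when a new version merely appears.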
Configure a maintenance window to control when model updates are applied, so machines are not interrupted during operation.
Apply the fragment to machines individually or through provisioning:
- To provision: include the fragment in your `viam-defaults.json` file so new machines apply it automatically on first boot. See Provision devices.
- To apply it to an individual machine: run `viam machines part fragments add --part=<part-id> --fragment=<fragment-id>`.

After applying the fragment, verify that the model is running on your machines.
When you retrain and upload a new model version:
- Machines tracking `latest` or a tag that points at the new version update automatically.
- Validate the new version on your development machines, then move the `stable` tag to the new revision so production machines update.

To monitor which model version each machine is running, use the fleet dashboard or check machine status programmatically. See Update software for details.
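Programmatic monitoring could follow a pattern like this sketch. The per-machine status data here is hypothetical; in practice you would gather it with the Viam fleet API or CLI:

```python
# Hypothetical per-machine model versions, e.g. gathered from the fleet API.
fleet_status = {
    "picker-01": "2026-04-01T09-00-00",
    "picker-02": "2026-04-01T09-00-00",
    "picker-03": "2026-03-15T10-30-00",  # still running the old model
}

def out_of_date(statuses: dict[str, str], target: str) -> list[str]:
    """Return machines not yet running the target model version."""
    return sorted(name for name, version in statuses.items() if version != target)

# Machines that still need the update after a rollout.
stale = out_of_date(fleet_status, "2026-04-01T09-00-00")
assert stale == ["picker-03"]
```

A check like this makes a rollout observable: when the list is empty, every machine in the fleet has picked up the new model.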