ML Model Monitoring Cheat Sheet

Measure drift, latency, and data quality after deployment

Last Updated: November 21, 2025

Monitoring Focus

Metric             Indicator
Prediction drift   Compare distributions of new predictions against the training baseline (see the sketch below).
Latency            Track the 95th-percentile response time for each model version.
Feature quality    Alert when required features are missing or out of range.
Target leakage     Watch for sudden accuracy jumps that suggest leakage.
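
For the prediction-drift row, here is a minimal sketch of a baseline-vs-production comparison, assuming predictions have already been logged as plain arrays. The two-sample Kolmogorov-Smirnov test is one common choice; the 0.05 threshold and the synthetic data are illustrative placeholders, not recommendations from this sheet.

import numpy as np
from scipy.stats import ks_2samp

def prediction_drift(baseline: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the production predictions differ significantly from the baseline."""
    stat, p_value = ks_2samp(baseline, current)
    return p_value < alpha

# Example: compare a logged production window against the training-time baseline.
baseline = np.random.default_rng(0).normal(0.4, 0.1, 10_000)  # stand-in for training-time scores
current = np.random.default_rng(1).normal(0.5, 0.1, 1_000)    # stand-in for the latest window
print(prediction_drift(baseline, current))  # True here: the score distribution has shifted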

Toolkit Commands

pip install evidently
Add dashboards and checks for data quality and drift.
evidently profile --dataset new.csv
Quickly profile production data snapshots.
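
If the profile subcommand is not available in your installed version (the Evidently CLI has changed across releases), the Python API covers the same checks. A minimal sketch against the 0.4.x-era Report/DataDriftPreset API; the CSV file names are placeholders, and newer releases may expose a different interface.

import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

reference = pd.read_csv("train_baseline.csv")  # hypothetical training-time snapshot
current = pd.read_csv("new.csv")               # the production snapshot from the command above

# Build a standard data-drift report comparing the two snapshots.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("drift_report.html")          # open in a browser or serve from a dashboard
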
kubectl logs -f deployment/model
Tail infra logs to correlate anomalies.
curl -X POST http://localhost:9093/-/reload
Reload Alertmanager when alert thresholds change (sending the process a SIGHUP also works).

Summary

Detect drift and latency spikes early by logging predictions, summarizing distributions, and reusing existing alerting systems.
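
As one concrete summarization step, a sketch of the p95 latency check from the table above, assuming per-request latencies are logged in milliseconds. The 300 ms budget is an arbitrary placeholder; set it from your own SLO.

import numpy as np

def p95_latency_breach(latencies_ms: list[float], budget_ms: float = 300.0) -> bool:
    """True if the 95th-percentile response time exceeds the latency budget."""
    return float(np.percentile(latencies_ms, 95)) > budget_ms

# Example: one monitoring window of request latencies.
window = [120.0, 95.0, 210.0, 180.0, 650.0, 110.0, 140.0]
print(p95_latency_breach(window))  # True: the slow outlier pushes p95 past 300 ms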

💡 Pro Tip: Log both features and predictions so post-incident investigations can reconstruct exactly what the model saw.
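
A minimal sketch of that logging pattern, assuming JSON-lines output and a call placed next to whatever predict function the service already has; every name and field here is illustrative.

import json, time

def log_prediction(features: dict, prediction, model_version: str, path: str = "predictions.jsonl") -> None:
    """Append one request's features and prediction so later investigations can replay it."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: call alongside serving, tagging the record with the model version.
log_prediction({"age": 42, "plan": "pro"}, 0.87, model_version="v3")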