Compare is a microservice that generates statistics comparing two streams of data: a truth stream and a stream under test. Because Compare is a general tool, you can quickly apply customized or standard methods for comparing the performance of two, or many, machine learning models. The generated statistics help you understand model accuracy and performance, and Compare can be used in production or during development and test to evaluate candidate models.
- Add performance evaluation code next to any model to continuously report on accuracy and other relevant metrics
- Monitor distributions of live production data to alert on drift or on data outside expected thresholds
- Trigger automatic retraining of a model once its performance falls outside desired thresholds
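The sketch below illustrates the core idea in plain Python: pair samples from a truth stream with samples from a stream under test, compute an accuracy statistic, and flag when it falls below a threshold that could drive alerting or retraining. The function name `compare_streams`, the sample layout, and the threshold are illustrative assumptions, not Compare's actual API.

```python
# Minimal sketch of the idea behind Compare: score a stream under test
# against a truth stream and flag when accuracy drops below a threshold.
# Names and thresholds here are illustrative assumptions, not the
# Compare service's actual interface.

from typing import Iterable, Tuple


def compare_streams(
    truth: Iterable[int],
    under_test: Iterable[int],
    accuracy_threshold: float = 0.95,
) -> Tuple[float, bool]:
    """Return (accuracy, needs_attention) for paired truth/test samples."""
    total = 0
    correct = 0
    for expected, predicted in zip(truth, under_test):
        total += 1
        if expected == predicted:
            correct += 1
    accuracy = correct / total if total else 0.0
    # A real deployment might emit a metric, raise an alert, or trigger
    # retraining when accuracy falls below the threshold.
    return accuracy, accuracy < accuracy_threshold


if __name__ == "__main__":
    truth_stream = [1, 0, 1, 1, 0, 1, 0, 0]
    test_stream = [1, 0, 0, 1, 0, 1, 1, 0]
    acc, retrain = compare_streams(truth_stream, test_stream)
    print(f"accuracy={acc:.2f}, retrain_recommended={retrain}")
```

In a service setting the same comparison would run continuously over paired batches, with the resulting statistics reported to monitoring or used to gate retraining.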