New for Amazon SageMaker – Perform Shadow Tests to Compare Inference Performance Between ML Model Variants
As you move your machine learning (ML) workloads into production, you need to continuously monitor your deployed models and iterate when you observe a deviation in model performance. When you build a new model, you typically start by validating it offline against historical inference request data. But this historical data sometimes fails to account for current, real-world traffic patterns. Shadow tests address this gap: SageMaker sends a copy of live production requests to a new model variant and records its responses, so you can compare the inference performance of the variants without affecting the responses returned to your end users.
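As a rough sketch of how this is wired up through the API, a shadow test can be expressed as an endpoint configuration that pairs a `ProductionVariants` entry with a `ShadowProductionVariants` entry (a parameter of SageMaker's `CreateEndpointConfig`). The model names, variant names, and instance type below are placeholder assumptions, and the helper function is hypothetical; treat this as an illustration of the request shape, not a definitive recipe.

```python
def shadow_endpoint_config(config_name: str,
                           prod_model: str,
                           shadow_model: str) -> dict:
    """Build a CreateEndpointConfig request dict with a shadow variant.

    Assumes the models `prod_model` and `shadow_model` have already been
    registered in SageMaker. Values like instance type are placeholders.
    """
    return {
        "EndpointConfigName": config_name,
        # The variant that serves real responses to callers.
        "ProductionVariants": [{
            "VariantName": "production",
            "ModelName": prod_model,
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }],
        # The shadow variant receives a copy of sampled production
        # requests; its responses are logged for comparison but are
        # not returned to the caller.
        "ShadowProductionVariants": [{
            "VariantName": "shadow",
            "ModelName": shadow_model,
            "InstanceType": "ml.m5.xlarge",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        }],
    }

config = shadow_endpoint_config("shadow-test-config",
                                "prod-model", "shadow-model")
# To create it for real (requires AWS credentials):
#   import boto3
#   boto3.client("sagemaker").create_endpoint_config(**config)
```

Building the request as a plain dict keeps the sketch runnable offline; passing it to `create_endpoint_config` via `**config` is the only AWS-dependent step.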