AWS – Amazon SageMaker Serverless Inference is now generally available
Today, we are excited to announce general availability of Amazon SageMaker Serverless Inference in all AWS Regions where SageMaker is generally available (except the AWS China regions). With SageMaker Serverless Inference, you can quickly deploy machine learning (ML) models for inference without having to configure or manage the underlying infrastructure. When deploying your ML models, simply select the serverless option and Amazon SageMaker automatically provisions, scales, and turns off compute capacity based on the volume of inference requests. With SageMaker Serverless Inference, you pay only for the compute capacity used to process inference requests (billed by the millisecond) and the amount of data processed; you do not pay for idle time. SageMaker Serverless Inference is ideal for applications with intermittent or unpredictable traffic.
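To illustrate what selecting the serverless option looks like in practice, here is a minimal sketch using the SageMaker Python SDK, assuming you already have a packaged model artifact in Amazon S3, an inference container image, and a SageMaker execution role (all placeholders below are hypothetical):

```python
# Sketch: deploying a model to a serverless endpoint with the SageMaker Python SDK.
# The image URI, S3 model artifact, and IAM role are placeholders for illustration.
from sagemaker.model import Model
from sagemaker.serverless import ServerlessInferenceConfig

model = Model(
    image_uri="<inference-container-image-uri>",          # hypothetical container image
    model_data="s3://<bucket>/<model-artifact>.tar.gz",   # hypothetical model artifact
    role="<sagemaker-execution-role-arn>",                # hypothetical execution role
)

# Serverless option: SageMaker provisions, scales, and turns off compute
# capacity based on the volume of inference requests.
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,  # memory allocated to the endpoint
    max_concurrency=5,       # maximum concurrent invocations
)

predictor = model.deploy(serverless_inference_config=serverless_config)
print(predictor.endpoint_name)
```

With this configuration there is no instance type or count to manage; you are billed only for the compute used to serve requests and the data processed, not for idle time.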