AWS – Amazon Bedrock Model Evaluation now supports evaluating custom models
Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. For those same metrics, or for subjective and custom metrics such as friendliness, style, and alignment to brand voice, you can set up a human evaluation workflow in a few clicks. Human evaluation workflows can use your own employees or an AWS-managed team as reviewers. Model Evaluation provides built-in curated datasets, or you can bring your own datasets.
Now, customers can evaluate their own custom models created through fine-tuning and continued pre-training jobs on Amazon Bedrock. This lets customers complete the cycle of selecting a base model, customizing it, evaluating it, and then either customizing it again or moving to production once they are satisfied with the evaluation results. To evaluate a custom model, simply select it from the list of models in the model selector when creating an evaluation job.
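As an illustration, here is a minimal sketch of creating an automatic evaluation job against a custom model using the CreateEvaluationJob API via boto3. The job name, IAM role, S3 locations, dataset, and model ARN below are placeholders, and the exact request fields (including whether the custom model must be referenced through a Provisioned Throughput ARN) should be confirmed against the current Amazon Bedrock API reference.

```python
# Sketch only: an automatic evaluation job that targets a custom model.
# All ARNs, bucket names, and the dataset location are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_evaluation_job(
    jobName="my-custom-model-eval",  # hypothetical job name
    roleArn="arn:aws:iam::111122223333:role/BedrockEvalRole",  # placeholder IAM role
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "Generation",
                    # Use a built-in curated dataset or bring your own prompt dataset in S3.
                    "dataset": {
                        "name": "my-prompts",
                        "datasetLocation": {"s3Uri": "s3://my-eval-bucket/prompts.jsonl"},
                    },
                    "metricNames": [
                        "Builtin.Accuracy",
                        "Builtin.Robustness",
                        "Builtin.Toxicity",
                    ],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {
                # Point the evaluation at the custom model produced by a
                # fine-tuning or continued pre-training job (placeholder ARN).
                "bedrockModel": {
                    "modelIdentifier": "arn:aws:bedrock:us-east-1:111122223333:custom-model/my-fine-tuned-model",
                }
            }
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-bucket/results/"},
)

print(response["jobArn"])  # track the job and review results when it completes
```

The same request shape covers base foundation models; the only change for a custom model is the model identifier passed in inferenceConfig.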
Model Evaluation on Amazon Bedrock is now generally available in these commercial AWS Regions and the AWS GovCloud (US-West) Region.
To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock developer experience web page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.