AWS – Amazon Bedrock Guardrails announces policy-based enforcement for responsible AI
Amazon Bedrock Guardrails announces AWS Identity and Access Management (IAM) policy-based enforcement capabilities to help you build safe generative AI applications at scale. This new feature enables customers to require that specific guardrails are applied to model inference calls, ensuring responsible AI policies are enforced across all AI interactions. Bedrock Guardrails provides configurable safeguards to detect and filter undesirable content: topic filters to define and disallow specific topics, sensitive information filters to redact personally identifiable information (PII), word filters to block specific words, contextual grounding checks that detect model hallucinations by evaluating the grounding and relevance of model responses, and Automated Reasoning checks that identify, correct, and explain factual claims in model responses. Guardrails can be applied to any foundation model, including models hosted on Amazon Bedrock, self-hosted models, and third-party models outside Bedrock, using the ApplyGuardrail API, providing a consistent user experience and standardizing safety and privacy controls.
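For models outside Bedrock, the ApplyGuardrail API evaluates content independently of model inference. The following is a minimal sketch using boto3; the guardrail identifier, version, Region, and sample text are placeholders, not values from this announcement:

```python
import boto3

# Guardrails are evaluated through the bedrock-runtime client.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder guardrail ID and version; substitute your own guardrail's values.
response = client.apply_guardrail(
    guardrailIdentifier="gr-example123",  # hypothetical guardrail ID
    guardrailVersion="1",
    source="OUTPUT",  # evaluate a model response; use "INPUT" for user prompts
    content=[{"text": {"text": "Model response to check goes here."}}],
)

# "GUARDRAIL_INTERVENED" means a policy matched and content was blocked or
# masked; "NONE" means the content passed all configured checks.
print(response["action"])
```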
Starting today, Bedrock Guardrails provides a new IAM condition key, bedrock:GuardrailIdentifier, that can be used in IAM policies to enforce the use of a specific guardrail. The condition key can be applied to all Bedrock Invoke and Converse APIs. If the guardrail specified in a request does not match the guardrail required by your IAM policy, the request is rejected, ensuring compliance with your organization's responsible AI policies. A sketch of such a policy appears below.
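The following is a sketch of an identity-based policy using this condition key; the Region, account ID, and guardrail ID in the ARNs are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireSpecificGuardrail",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/*",
      "Condition": {
        "StringEquals": {
          "bedrock:GuardrailIdentifier": "arn:aws:bedrock:us-east-1:111122223333:guardrail/abc123def456"
        }
      }
    }
  ]
}
```

With this policy attached, and assuming no other statement grants access, an Invoke request is allowed only when it specifies the matching guardrail; requests that omit a guardrail or reference a different one fall through to the implicit deny.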
IAM policy-based enforcement for responsible AI policies is now available in all AWS Regions where Bedrock Guardrails is supported.
To learn more, see the technical documentation and the Bedrock Guardrails product page.