Welcome to above the clouds

AWS – Amazon Bedrock Agents simplifies agent creation and launches Return of Control capability
Agents for Amazon Bedrock enable generative AI applications to automate multi-step tasks across company systems and data sources. Agents removes the undifferentiated heavy lifting of orchestration, infrastructure hosting, and management, and we’re making building agents easier than ever. Read More for the details.
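For a sense of the flow, here is a minimal boto3 sketch (not from the announcement) that invokes an agent and watches the response stream for a Return of Control event; the agent ID, alias ID, and prompt are placeholders.

```python
# Minimal sketch: invoke an agent and watch for a Return of Control event.
# Agent and alias IDs below are placeholders.
import uuid
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId=str(uuid.uuid4()),
    inputText="Book a rental car for next Tuesday in Seattle.",
)

# The response is an event stream; with Return of Control the agent can hand
# the next action back to your application instead of calling a Lambda function.
for event in response["completion"]:
    if "returnControl" in event:
        # Inspect the invocation inputs and run the action in your own code.
        print(event["returnControl"])
    elif "chunk" in event:
        print(event["chunk"]["bytes"].decode("utf-8"))
```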

AWS – Meta Llama 3 now available in Amazon Bedrock
You can now access Meta’s Llama 3 models, Llama 3 8B and Llama 3 70B, in Amazon Bedrock. Meta Llama 3 is designed for you to build, experiment, and responsibly scale your generative artificial intelligence applications. You can now use these two new Llama 3 models in Amazon Bedrock enabling you to easily experiment with […]
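As a quick illustration, a minimal boto3 sketch of calling Llama 3 8B Instruct through the Bedrock runtime; the prompt and inference parameters are illustrative.

```python
# Minimal sketch: call Llama 3 8B Instruct in Amazon Bedrock with boto3.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "prompt": "Explain Retrieval-Augmented Generation in two sentences.",
    "max_gen_len": 256,
    "temperature": 0.5,
    "top_p": 0.9,
}

response = bedrock_runtime.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",
    body=json.dumps(body),
)

print(json.loads(response["body"].read())["generation"])
```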

AWS – Model evaluation on Amazon Bedrock is now Generally Available
Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. Additionally, for those metrics, or for subjective and custom […]
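As a rough, hedged sketch only: starting an automatic evaluation job with the boto3 create_evaluation_job call. The nested configuration fields, built-in dataset, and metric identifiers are assumptions, and the role ARN and output bucket are placeholders.

```python
# Rough sketch: kick off an automatic model evaluation job.
# Nested field names and metric identifiers are assumptions; ARNs/URIs are placeholders.
import boto3

bedrock = boto3.client("bedrock")

bedrock.create_evaluation_job(
    jobName="qa-model-comparison",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",  # placeholder
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {"name": "Builtin.BoolQ"},  # assumed built-in dataset name
                    "metricNames": ["Builtin.Accuracy", "Builtin.Robustness", "Builtin.Toxicity"],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [{"bedrockModel": {"modelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0"}}]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-results/"},  # placeholder
)
```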

AWS – Amazon Titan Image Generator model in Amazon Bedrock now generally available
Amazon Titan Image Generator empowers content creators with rapid ideation and iteration, resulting in high-efficiency image generation. The Amazon Titan Image Generator model is now generally available in Amazon Bedrock, helping you easily build and scale generative AI applications with new image generation and image editing capabilities. Read More for the details.
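A minimal boto3 sketch of a text-to-image request against the Titan Image Generator model; the prompt and output path are illustrative.

```python
# Minimal sketch: generate an image with the Titan Image Generator model.
import base64
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "a watercolor illustration of a lighthouse at dawn"},
    "imageGenerationConfig": {"numberOfImages": 1, "height": 1024, "width": 1024, "cfgScale": 8.0},
}

response = bedrock_runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",
    body=json.dumps(body),
)

# Images come back base64-encoded.
image_b64 = json.loads(response["body"].read())["images"][0]
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(image_b64))
```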

AWS – Knowledge Bases for Amazon Bedrock now simplifies asking questions on a single document
Knowledge Bases for Amazon Bedrock allows you to connect foundation models (FMs) to internal company data sources to deliver more relevant, context-specific, and accurate responses. Knowledge Bases (KB) now provides a real-time, zero-setup, and low-cost method to securely chat with a single document. Read More for the details.
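A hedged sketch of the single-document flow using the bedrock-agent-runtime retrieve_and_generate call with an external S3 source; the configuration field names are assumptions, and the model ARN and S3 URI are placeholders.

```python
# Sketch: chat with a single S3 document, no knowledge base setup required.
# Field names are assumptions; the model ARN and S3 URI are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime")

response = agent_runtime.retrieve_and_generate(
    input={"text": "Summarize the termination clause in this contract."},
    retrieveAndGenerateConfiguration={
        "type": "EXTERNAL_SOURCES",
        "externalSourcesConfiguration": {
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
            "sources": [
                {"sourceType": "S3", "s3Location": {"uri": "s3://my-docs/contract.pdf"}}
            ],
        },
    },
)

print(response["output"]["text"])
```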

AWS – Knowledge Bases for Amazon Bedrock now supports multiple data sources
Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver more relevant and accurate responses. Knowledge Bases now supports adding multiple data sources, including sources across accounts. Read More for the details.
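A minimal sketch of attaching two S3 data sources to an existing knowledge base with the bedrock-agent client; the knowledge base ID and bucket ARNs are placeholders.

```python
# Minimal sketch: attach more than one S3 data source to an existing knowledge base.
# The knowledge base ID and bucket ARNs are placeholders.
import boto3

bedrock_agent = boto3.client("bedrock-agent")

for name, bucket_arn in [
    ("product-docs", "arn:aws:s3:::my-product-docs"),
    ("support-tickets", "arn:aws:s3:::my-support-tickets"),
]:
    bedrock_agent.create_data_source(
        knowledgeBaseId="KB_ID",  # placeholder
        name=name,
        dataSourceConfiguration={
            "type": "S3",
            "s3Configuration": {"bucketArn": bucket_arn},
        },
    )
```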

AWS – Watermark detection for Amazon Titan Image Generator now available in Amazon Bedrock
Amazon Titan Image Generator’s new watermark detection feature is now generally available in Amazon Bedrock. All Amazon Titan-generated images contain an invisible watermark, by default. The watermark detection mechanism allows you to identify images generated by Amazon Titan Image Generator, a foundation model that allows users to create realistic, studio-quality images in large volumes and […]

AWS – Custom Model Import for Amazon Bedrock
We are excited to announce the preview of Custom Model Import for Amazon Bedrock. Now you can import customized models into Amazon Bedrock to accelerate your generative AI application development. This new feature allows you to leverage your prior model customization investments within Amazon Bedrock and consume them in the same fully-managed manner as Bedrock’s […]
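A hedged sketch of starting an import job with the boto3 create_model_import_job call; the job name, model name, role ARN, and S3 URI are placeholders.

```python
# Sketch: import a customized model into Amazon Bedrock.
# Job/model names, role ARN, and S3 URI are placeholders.
import boto3

bedrock = boto3.client("bedrock")

bedrock.create_model_import_job(
    jobName="import-my-finetuned-llama",
    importedModelName="my-finetuned-llama",
    roleArn="arn:aws:iam::123456789012:role/BedrockImportRole",  # placeholder
    modelDataSource={
        "s3DataSource": {"s3Uri": "s3://my-model-artifacts/finetuned-llama/"}  # placeholder
    },
)
```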

AWS – Agents for Amazon Bedrock add support for Anthropic Claude 3 Haiku and Sonnet
Agents for Amazon Bedrock enable developers to create generative AI-based applications that can complete complex tasks for a wide range of use cases and deliver answers based on company knowledge sources. To complete complex tasks with high accuracy, the reasoning capabilities of the underlying foundation model (FM) play a critical role. Read More for […]
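A minimal sketch of creating an agent that uses Claude 3 Haiku as its foundation model via the bedrock-agent client; the agent name, role ARN, and instruction text are placeholders.

```python
# Minimal sketch: create an agent backed by Claude 3 Haiku.
# Agent name, role ARN, and instruction are placeholders.
import boto3

bedrock_agent = boto3.client("bedrock-agent")

bedrock_agent.create_agent(
    agentName="order-support-agent",
    agentResourceRoleArn="arn:aws:iam::123456789012:role/BedrockAgentRole",  # placeholder
    foundationModel="anthropic.claude-3-haiku-20240307-v1:0",
    instruction="You help customers check order status and process product returns.",
)
```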

AWS – Guardrails for Amazon Bedrock is generally available with new safety & privacy controls
Today, we are announcing the general availability of Guardrails for Amazon Bedrock, which enables customers to implement safeguards across large language models (LLMs) based on their use cases and responsible AI policies. Customers can create multiple guardrails tailored to different use cases and apply them across multiple LLMs, providing a consistent user experience and standardizing […]
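A minimal sketch of applying an existing guardrail to an InvokeModel call on Claude 3 Haiku; the guardrail identifier and version are placeholders.

```python
# Minimal sketch: apply an existing guardrail to an InvokeModel call.
# The guardrail identifier and version are placeholders.
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    guardrailIdentifier="GUARDRAIL_ID",  # placeholder
    guardrailVersion="1",                # placeholder
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Give me investment advice."}],
    }),
)

# If the guardrail intervenes, the response contains the configured blocked message.
print(json.loads(response["body"].read()))
```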