AWS – Meta’s Llama 3.2 models are now available for fine-tuning in Amazon Bedrock
Amazon Bedrock now supports fine-tuning for Meta's Llama 3.2 models, enabling businesses to customize these generative AI models with their own data. The family spans lightweight text models (1B and 3B) and medium-sized multimodal models (11B and 90B). The 11B and 90B models are the first in the Llama series to support both text and vision tasks, achieved by integrating image-encoder representations into the language model. Fine-tuning lets you adapt Llama 3.2 models to domain-specific tasks, improving performance for specialized use cases.
The Llama 3.2 90B model excels in advanced reasoning, long-form text generation, coding, multilingual translation, and image reasoning tasks such as captioning, visual question answering, and document analysis. The Llama 3.2 11B model is designed for content creation, conversational AI, and enterprise applications, with strong performance in text summarization, sentiment analysis, and visual understanding. For resource-constrained scenarios, the lightweight Llama 3.2 1B and 3B models enable on-device applications, excelling in tasks like text summarization, classification, and retrieval while delivering low latency and enhanced privacy. Fine-tuning these models in Amazon Bedrock improves their accuracy and relevance for specialized applications without the need to build models from scratch.
You can fine-tune Llama 3.2 models in Amazon Bedrock in the US West (Oregon) AWS Region. For pricing, visit the Amazon Bedrock pricing page. To get started, see the Amazon Bedrock user guide and visit the Amazon Bedrock console.
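To make the getting-started step concrete, here is a minimal sketch of launching a Bedrock model customization (fine-tuning) job with the AWS SDK for Python (boto3). The job name, custom model name, IAM role ARN, S3 URIs, base model identifier, and hyperparameter values below are all placeholder assumptions; check the Amazon Bedrock user guide for the exact model ID and the valid hyperparameter names and ranges for your chosen Llama 3.2 model.

```python
"""Sketch: starting a Llama 3.2 fine-tuning job in Amazon Bedrock.

All identifiers (role ARN, S3 URIs, model ID, job/model names) are
placeholders; replace them with your own values before running.
"""

def build_customization_job_params(
    job_name: str,
    custom_model_name: str,
    role_arn: str,
    base_model_id: str,
    training_s3_uri: str,
    output_s3_uri: str,
) -> dict:
    """Assemble the request for bedrock.create_model_customization_job."""
    return {
        "jobName": job_name,
        "customModelName": custom_model_name,
        "roleArn": role_arn,  # IAM role Bedrock assumes to read/write S3
        "baseModelIdentifier": base_model_id,
        "trainingDataConfig": {"s3Uri": training_s3_uri},
        "outputDataConfig": {"s3Uri": output_s3_uri},
        # Example hyperparameters; valid names and ranges vary by model.
        "hyperParameters": {
            "epochCount": "2",
            "batchSize": "1",
            "learningRate": "0.0001",
        },
    }


if __name__ == "__main__":
    # Requires AWS credentials with Bedrock permissions.
    import boto3

    # Fine-tuning for Llama 3.2 is available in US West (Oregon).
    bedrock = boto3.client("bedrock", region_name="us-west-2")
    params = build_customization_job_params(
        job_name="llama32-finetune-demo",
        custom_model_name="my-llama32-custom",
        role_arn="arn:aws:iam::123456789012:role/BedrockFineTuneRole",
        base_model_id="meta.llama3-2-11b-instruct-v1:0",  # assumed ID
        training_s3_uri="s3://my-bucket/train.jsonl",
        output_s3_uri="s3://my-bucket/output/",
    )
    response = bedrock.create_model_customization_job(**params)
    print(response["jobArn"])  # track job status in the Bedrock console
```

Once the job completes, the resulting custom model appears in the Bedrock console, where you can purchase Provisioned Throughput to run inference against it.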