AWS – TwelveLabs’ Marengo Embed 2.7 can now be used for synchronous inference in Amazon Bedrock
Amazon Bedrock now supports synchronous inference for TwelveLabs’ Marengo 2.7, expanding the capabilities of this multimodal embedding model to deliver low-latency text and image embeddings directly within the API response. This update enables developers to build more responsive, interactive search and retrieval experiences while maintaining the same powerful video understanding capabilities that have made Marengo 2.7 a breakthrough in multimodal AI.
Since its introduction to Amazon Bedrock earlier this year, Marengo 2.7 has transformed how organizations work with video content through asynchronous inference, which is ideal for processing large video, audio, and image files. The model generates sophisticated multi-vector embeddings, enabling precise temporal and semantic retrieval across long-form content. Now, with synchronous inference support, users can leverage these advanced embedding capabilities for text and image inputs with significantly reduced latency. This makes it well suited to applications such as instant video search, where users find specific scenes using natural-language queries, or interactive product discovery through image similarity search. For generating embeddings from video, audio, and large-scale image files, continue to use asynchronous inference for optimal performance.
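As a rough sketch of what a synchronous text-embedding call might look like from Python with boto3, the snippet below builds a request and invokes the model through the Bedrock Runtime `invoke_model` API. The model ID and the payload field names (`inputType`, `inputText`) are assumptions for illustration; confirm the exact values against the Amazon Bedrock documentation and model catalog before use.

```python
"""Sketch: synchronous text embedding with TwelveLabs Marengo 2.7 on
Amazon Bedrock. Model ID and payload schema are assumptions; verify
them in the Bedrock documentation."""
import json

# Hypothetical model ID -- check the Bedrock console for the real value.
MODEL_ID = "twelvelabs.marengo-embed-2-7-v1:0"


def build_text_request(text: str) -> str:
    """Serialize an assumed text-embedding request body for Marengo."""
    return json.dumps({"inputType": "text", "inputText": text})


def embed_text(text: str, region: str = "us-east-1") -> dict:
    """Invoke the model synchronously; requires AWS credentials and
    granted model access in the chosen Region."""
    import boto3  # deferred import so the payload helper stays testable offline

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.invoke_model(
        modelId=MODEL_ID,
        body=build_text_request(text),
        contentType="application/json",
        accept="application/json",
    )
    # The embedding vector(s) are returned directly in the response body.
    return json.loads(response["body"].read())


if __name__ == "__main__":
    # Offline demonstration of the request payload only.
    print(build_text_request("a red car driving through snow"))
```

The same pattern would apply to image inputs (for example, a base64-encoded image field in place of `inputText`); video and audio files should still go through the asynchronous invocation path, as noted above.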
Marengo 2.7 with synchronous inference is now available in Amazon Bedrock in US East (N. Virginia), Europe (Ireland), and Asia Pacific (Seoul). To get started, visit the Amazon Bedrock console and request model access. To learn more, read the blog, product page, Amazon Bedrock pricing, and documentation.