AWS – Amazon Bedrock Data Automation now supports generating custom insights from videos
Amazon Bedrock Data Automation (BDA) now supports video blueprints so you can generate tailored, accurate insights in a consistent format for your multimedia analysis applications. BDA automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your GenAI-powered applications. With video blueprints, you can customize insights such as scene summaries, content tags, and object detection by specifying the fields to generate, each field's output data type, and natural language instructions to guide generation.
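As a rough illustration of what a custom video blueprint can look like, the sketch below defines fields for scene summaries, detected objects, and logo identification as a Python dictionary and registers it with the boto3 bedrock-data-automation client. The field names, instructions, schema keys, and blueprint name shown here are illustrative assumptions rather than the authoritative format; see the Bedrock Data Automation User Guide for the exact blueprint schema.

```python
import json
import boto3

# Hypothetical blueprint schema: the field names, instructions, and schema
# keys below are illustrative assumptions, not the authoritative BDA format.
video_blueprint_schema = {
    "class": "reality_tv_episode",
    "description": "Custom insights for contextual ad placement in cooking scenes",
    "type": "object",
    "properties": {
        "scene_summary": {
            "type": "string",
            "instruction": "Summarize what the contestants are doing in each scene.",
        },
        "detected_objects": {
            "type": "array",
            "instruction": "List cooking-related objects visible in the scene, such as tomato or spaghetti.",
        },
        "condiment_logos": {
            "type": "array",
            "instruction": "Identify brand logos on condiments used while cooking.",
        },
    },
}

# Register the blueprint; the name and stage are placeholders for your own project.
bda = boto3.client("bedrock-data-automation")
response = bda.create_blueprint(
    blueprintName="reality-tv-ad-placement",
    type="VIDEO",
    blueprintStage="LIVE",
    schema=json.dumps(video_blueprint_schema),
)
print(response["blueprint"]["blueprintArn"])
```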
You can create a new video blueprint in minutes or select from a catalog of pre-built blueprints designed for use cases such as media search or highlight generation. With your blueprint, you can generate insights from a variety of video media, including movies, television shows, advertisements, meeting recordings, and user-generated videos. For example, a customer analyzing a reality television episode for contextual ad placement can use a blueprint to summarize a scene where contestants are cooking, detect objects like ‘tomato’ and ‘spaghetti’, and identify the logos of condiments used for cooking. As part of this release, BDA also enhances logo detection and the Interactive Advertising Bureau (IAB) taxonomy in standard output.
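A blueprint like the one above could then be applied to a video through the asynchronous BDA runtime API. The sketch below is a minimal, hedged example using the boto3 bedrock-data-automation-runtime client; the S3 URIs, ARNs, and profile name are placeholders you would replace with your own resources, and the exact request shape may differ from what is shown.

```python
import boto3

runtime = boto3.client("bedrock-data-automation-runtime")

# Placeholder S3 locations and ARNs; substitute your own bucket, blueprint,
# and data automation profile.
response = runtime.invoke_data_automation_async(
    inputConfiguration={"s3Uri": "s3://my-input-bucket/reality-tv-episode.mp4"},
    outputConfiguration={"s3Uri": "s3://my-output-bucket/bda-results/"},
    blueprints=[
        {"blueprintArn": "arn:aws:bedrock:us-east-1:111122223333:blueprint/EXAMPLE"}
    ],
    dataAutomationProfileArn=(
        "arn:aws:bedrock:us-east-1:111122223333:"
        "data-automation-profile/us.data-automation-v1"
    ),
)

# The job runs asynchronously; check its status, then read the custom output
# (scene summaries, detected objects, logos) from the configured S3 prefix.
status = runtime.get_data_automation_status(invocationArn=response["invocationArn"])
print(status["status"])
```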
Video blueprints are available in all AWS Regions where Amazon Bedrock Data Automation is supported.
To learn more, see the Bedrock Data Automation User Guide and the Amazon Bedrock Pricing page. To get started with video blueprints, visit the Amazon Bedrock console.