AWS – Amazon Bedrock Data Automation now supports enhanced transcription output
Amazon Bedrock Data Automation (BDA) now supports enhanced transcription output for audio files, with options to distinguish between speakers and to process the audio from each channel separately. BDA also extends blueprint creation, a guided, natural-language interface for extracting custom insights, to the audio modality. BDA is a feature of Amazon Bedrock that automates the generation of insights from unstructured multimodal content such as documents, images, audio, and video for your generative AI-powered applications. With this launch, developers can enable speaker diarization and channel identification in standard output. Speaker diarization detects each unique speaker and tracks speaker changes in a multi-party audio conversation. Channel identification processes the audio from each channel separately; for example, speakers such as a customer and a sales agent can be assigned to distinct channels, making the transcript easier to analyze.
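As a rough illustration, enabling these options amounts to turning them on in a project's standard output configuration. The sketch below builds such a configuration in Python; the nested field names (`transcriptConfiguration`, `speakerLabeling`, `channelIdentification`) are illustrative assumptions rather than the confirmed BDA API schema, so check the Bedrock Data Automation documentation for the exact shape.

```python
# Sketch of a standard output configuration that enables the new
# transcription features for the audio modality. The inner key names
# are assumptions for illustration, not the verified API schema.
audio_standard_output = {
    "audio": {
        "extraction": {
            "category": {
                "state": "ENABLED",
                "types": ["TRANSCRIPT"],
                # Hypothetical settings for the new features:
                "transcriptConfiguration": {
                    "speakerLabeling": {"state": "ENABLED"},        # diarization
                    "channelIdentification": {"state": "ENABLED"},  # per-channel audio
                },
            }
        }
    }
}

# The configuration would then be supplied when creating a BDA project,
# along the lines of (requires AWS credentials, so left commented out):
#
# import boto3
# bda = boto3.client("bedrock-data-automation")
# bda.create_data_automation_project(
#     projectName="call-analytics",
#     standardOutputConfiguration=audio_standard_output,
# )
```

With channel identification enabled, each party recorded on its own channel (such as the agent and the customer in a contact-center call) appears as a separately attributed stream in the transcript output.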
Speaker diarization and channel identification make transcripts easier to read and to mine for custom insights across a variety of multi-party voice conversations, such as customer calls, education sessions, public safety calls, clinical discussions, and meetings. Customers can use these capabilities to improve employee productivity, add subtitles to webinars, enhance customer experience, or strengthen regulatory compliance. For example, telehealth customers can summarize a doctor's recommendations by assigning doctors and patients to pre-identified channels.
Amazon Bedrock Data Automation is available in the following AWS Regions: US West (Oregon), US East (N. Virginia), AWS GovCloud (US-West), Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), and Asia Pacific (Sydney). To learn more, visit the Bedrock Data Automation page and the Amazon Bedrock pricing page, or view the documentation.