AWS – Amazon Q Business launches support for hallucination mitigation in chat responses
Today, Amazon Q Business is launching a feature to reduce hallucinations in chat responses. Hallucinations are confident responses from generative AI applications that are not justified by their underlying data. The new feature enables customers to mitigate hallucinations in real time during chat conversations.
Large Language Models (LLMs) underlying generative AI applications have reduced the extent of hallucination in their responses, but these models can still hallucinate. Hallucination mitigation is therefore needed to generate reliable and trustworthy responses. The Q Business hallucination mitigation feature helps ensure more accurate retrieval augmented generation (RAG) responses from data connected to the application. This data can come either from connected data sources or from files uploaded during chat. During chat, Q Business evaluates a response for hallucinations. If a hallucination is detected with high confidence, Q Business corrects the inconsistencies in its response in real time and generates a new, edited message.
The feature is available in all AWS Regions where Amazon Q Business is available. Customers can opt in by enabling the feature through the API or through the Amazon Q console. For more details, refer to the documentation. For more information about Amazon Q Business and its features, please visit the Amazon Q product page.
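As a rough illustration of the API route, the sketch below builds an opt-in request body with boto3 in mind. The configuration key names (`hallucinationReductionConfiguration`, `hallucinationReductionControl`) and the `update_chat_controls_configuration` operation are assumptions based on common AWS API naming conventions, not confirmed field names; consult the Amazon Q Business API reference before use.

```python
import json

# Hypothetical request body for enabling hallucination mitigation on a
# Q Business application. Field names are assumed, not verified against
# the official API reference.
request = {
    "applicationId": "your-application-id",  # placeholder value
    "hallucinationReductionConfiguration": {
        "hallucinationReductionControl": "ENABLED"
    },
}

# The actual call might resemble (requires AWS credentials, so it is
# shown here as a comment):
#   import boto3
#   qbusiness = boto3.client("qbusiness")
#   qbusiness.update_chat_controls_configuration(**request)

print(json.dumps(request, indent=2))
```

Disabling the feature would follow the same shape with the control value flipped to `"DISABLED"`.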