
## Answer-First Summary

**Answer: B. Add messages to the model prompt.**
## Detailed Explanation

To enable an LLM on Amazon Bedrock to reference and utilize content from previous customer messages in a multi-turn chatbot conversation, the most direct and effective approach is to **include the conversation history in the model's prompt**.

### Why Option B Is Correct

**B: Add messages to the model prompt** is the optimal solution because:

1. **Context Preservation**: Large language models operate only on the context provided in their input prompt. By appending previous messages (both customer queries and chatbot responses) to each new prompt, the model gains access to the full conversation history.
2. **State Management**: LLMs are stateless by design; they do not inherently remember previous interactions. The prompt therefore serves as the mechanism for maintaining conversational state across multiple turns.
3. **Amazon Bedrock Implementation**: When using Amazon Bedrock's InvokeModel or InvokeModelWithResponseStream APIs, developers can structure the prompt to include a conversation history section, typically formatted as a sequence of user-assistant message pairs. The Bedrock Converse API accepts this history directly as a `messages` list.
4. **Cost-Effective and Immediate**: This approach requires no additional AWS services or complex configuration; it is implemented directly in the application logic that calls the Bedrock API.

### Why the Other Options Are Less Suitable

**A: Turn on model invocation logging to collect messages.** Logging is useful for monitoring and debugging, but it does not make previous messages available to the LLM during inference. Logging captures data for analysis; it does not feed that data back into the model's context window.

**C: Use Amazon Personalize to save conversation history.** Amazon Personalize is a recommendation service for personalizing user experiences based on historical behavior patterns. It is not designed for real-time conversation context management and would add unnecessary complexity for this requirement.
**D: Use Provisioned Throughput for the LLM.** Provisioned Throughput in Amazon Bedrock ensures consistent performance and cost predictability by reserving model capacity, but it does not address the fundamental challenge of providing conversation context to the model.

### Best Practice Considerations

When implementing this solution, consider:

- **Context Window Limits**: Be mindful of the model's maximum context length and implement truncation or summarization strategies for long conversations.
- **Prompt Engineering**: Structure the conversation history clearly in the prompt, typically using role-based formatting (e.g., "Human:" and "Assistant:" prefixes).
- **Memory Management**: For production systems, store and retrieve conversation history efficiently (for example, in Amazon DynamoDB) and inject the relevant portions into each prompt.

This approach aligns with AWS best practices for building conversational AI applications on Amazon Bedrock, where the prompt serves as the primary mechanism for maintaining conversational context across multiple interactions.
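As a concrete illustration, the pattern described above can be sketched with the boto3 Converse API. This is a minimal sketch, not a production implementation: the model ID, the `MAX_TURNS` truncation budget, and the helper names are assumptions chosen for the example, and a real system would persist `history` in a store such as DynamoDB rather than in memory.

```python
MAX_TURNS = 10  # assumed truncation budget: keep only the most recent turns


def add_turn(history, role, text):
    """Append one message to the running conversation history
    in the Converse API's messages format."""
    history.append({"role": role, "content": [{"text": text}]})
    return history


def truncate(history, max_turns=MAX_TURNS):
    """Naive context-window strategy: keep the last max_turns
    user/assistant pairs (2 messages per turn)."""
    return history[-2 * max_turns:]


def chat(client, history, user_text,
         model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Send the (truncated) history plus the new user message to Bedrock,
    then record the model's reply so the next turn can reference it."""
    add_turn(history, "user", user_text)
    response = client.converse(
        modelId=model_id,
        messages=truncate(history),
        inferenceConfig={"maxTokens": 512},
    )
    reply = response["output"]["message"]["content"][0]["text"]
    add_turn(history, "assistant", reply)
    return reply


# Usage (requires boto3 and AWS credentials; shown here as comments):
# import boto3
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# history = []
# chat(client, history, "My order #123 hasn't arrived.")
# chat(client, history, "What was my order number?")  # model sees the earlier turn
```

Because every call sends the truncated history, the model "remembers" earlier turns without any server-side state; swapping the naive truncation for summarization is a drop-in change to `truncate`.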
Author: LeetQuiz Editorial Team
## Question

How can an LLM on Amazon Bedrock be enabled to reference and utilize content from earlier messages in a multi-turn customer support chatbot conversation?

A. Turn on model invocation logging to collect messages.
B. Add messages to the model prompt.
C. Use Amazon Personalize to save conversation history.
D. Use Provisioned Throughput for the LLM.