
**Answer: Context window**
## Detailed Explanation

When selecting a foundation model on Amazon Bedrock for a generative AI application, the **context window** is the factor that determines the maximum amount of information that can be included in a single prompt.

### Why Context Window Is the Correct Answer

The context window defines the maximum number of tokens (chunks of text such as words, subwords, or characters) that a foundation model can process in a single request. This budget covers both the input provided by the user and the model's generated output. Foundation models in Amazon Bedrock have varying context window sizes, which directly limits how much contextual information a prompt can carry. For example:

- A model with a 4,000-token context window can handle roughly 3,000 words of input
- A model with a 32,000-token context window can process significantly more information

This parameter is essential for applications that process lengthy documents, maintain conversation history, or supply extensive background information to guide the model's responses.

### Analysis of Other Options

**A. Temperature** - Controls the randomness and creativity of the model's output. Higher values produce more diverse, creative responses, while lower values yield more predictable, deterministic output. Temperature affects output quality and style, not input capacity.

**C. Batch size** - The number of prompts processed simultaneously during inference. Batch size affects throughput and computational efficiency, but it does not determine how much information fits into an individual prompt; it is an optimization parameter for handling multiple requests efficiently.

**D. Model size** - Typically refers to the number of parameters in the model architecture. Larger models often have greater capabilities and may correlate with larger context windows, but model size itself does not define input capacity. It primarily affects performance characteristics, training requirements, and inference costs.

### Best Practice Considerations

When evaluating foundation models on Amazon Bedrock for generative AI applications, companies should:

1. **Assess context window requirements** based on the specific use case (e.g., document summarization needs larger windows than simple Q&A)
2. **Consider tokenization differences** between models, since different tokenization schemes affect how much text fits within a given token limit
3. **Account for output space** within the context window, since the total window includes both the input and the generated response
4. **Evaluate model-specific limitations**, as context windows vary significantly across the foundation model families available in Bedrock

The context window is therefore the fundamental consideration for determining prompt capacity, making option B the correct choice for this scenario.
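The input-plus-output token budget described above can be sketched in a few lines. Note this is an illustrative sketch only: the ~4 characters-per-token ratio is a common rough heuristic for English text, and the helper names are assumptions, not part of any Bedrock API. Real applications should use the tokenizer that matches the chosen model.

```python
# Rough token estimate: ~4 characters per token is a common heuristic
# for English text. This is an approximation, not a real tokenizer.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

def fits_context_window(prompt: str, max_output_tokens: int,
                        context_window: int) -> bool:
    """The window must hold the input prompt AND the generated output."""
    return estimate_tokens(prompt) + max_output_tokens <= context_window

# A ~4,000-character prompt is roughly 1,000 tokens; reserving 1,000
# tokens for the response still fits a 4,000-token window.
prompt = "Summarize the attached quarterly report." * 100
print(fits_context_window(prompt, max_output_tokens=1000,
                          context_window=4000))  # → True
```

Reserving output space up front (point 3 above) avoids requests that are silently truncated or rejected when the generated response would overflow the window.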
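Evaluating model-specific limits (point 4 above) can be reduced to a simple lookup: pick the smallest-window model whose budget covers the request. The model names and window sizes below are hypothetical placeholders for illustration; consult the Amazon Bedrock documentation for each model's actual limit.

```python
# Hypothetical context-window sizes -- illustration only, not real
# Bedrock model limits.
MODEL_WINDOWS = {
    "model-small": 4_000,
    "model-medium": 8_000,
    "model-large": 32_000,
}

def smallest_adequate_model(input_tokens: int,
                            output_tokens: int) -> str | None:
    """Pick the smallest-window model that still fits input + output."""
    needed = input_tokens + output_tokens
    candidates = [(window, name) for name, window in MODEL_WINDOWS.items()
                  if window >= needed]
    return min(candidates)[1] if candidates else None

# A 5,000-token document plus a 1,000-token summary needs 6,000 tokens,
# so the 4,000-token model is ruled out.
print(smallest_adequate_model(5_000, 1_000))  # → model-medium
```

Choosing the smallest adequate window matters in practice because larger-context models typically cost more per request.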
Author: LeetQuiz Editorial Team
When selecting a foundation model on Amazon Bedrock for a generative AI application, which factor should the company evaluate to determine the maximum amount of information that can be included in a single prompt?
- A. Temperature
- B. Context window
- C. Batch size
- D. Model size