Explanation
The correct answer is B. Context window.
Why Context Window is Correct:
- Definition: The context window refers to the maximum amount of text (tokens) that a foundation model can process in a single prompt or conversation.
- Relevance to the Question: When the company asks "how much information can fit into one prompt," it is asking directly about the model's input capacity, which is exactly what the context window determines.
- Amazon Bedrock Context: In Amazon Bedrock, different foundation models have different context window sizes, which is a critical factor when choosing a model for specific use cases.
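The capacity check described above can be sketched in a few lines of Python. The context window sizes, model names, and the rough 4-characters-per-token heuristic below are illustrative assumptions, not actual Bedrock values:

```python
# Hedged sketch: checking whether a prompt is likely to fit within a model's
# context window before sending it. The window sizes and the ~4 chars/token
# heuristic are illustrative assumptions, not exact figures for any model.

# Hypothetical per-model context windows (in tokens), for illustration only.
CONTEXT_WINDOWS = {
    "model-a": 8_000,
    "model-b": 200_000,
}

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fits_in_context(model_id: str, prompt: str, reserved_for_output: int = 512) -> bool:
    """True if the prompt, plus room reserved for the response, should fit."""
    window = CONTEXT_WINDOWS[model_id]
    return estimate_tokens(prompt) + reserved_for_output <= window

prompt = "Summarize this document. " * 100
print(fits_in_context("model-a", prompt))  # → True (~625 tokens + 512 <= 8,000)
```

In practice, exact token counts depend on each model's tokenizer, which is why comparing per-model context window sizes matters when choosing a model.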
Why Other Options are Incorrect:
- A. Temperature: Controls the randomness/creativity of the model's output (higher temperature = more random, lower temperature = more deterministic). This doesn't determine how much information fits in a prompt.
- C. Batch size: Refers to the number of samples processed before updating model parameters during training, or the number of requests processed simultaneously during inference. Either way, it concerns processing efficiency, not prompt capacity.
- D. Model size: Typically refers to the number of parameters in the model, which affects capabilities and performance, but doesn't directly determine how much information can fit in a single prompt.
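The distinction between temperature and the context window can be seen in the shape of an Amazon Bedrock Converse API request: temperature is an inference-time randomness control passed alongside the prompt, and setting it does not change how much input the model accepts. The sketch below only builds the request body; the model ID is a placeholder assumption and nothing is sent to AWS:

```python
# Hedged sketch: the shape of a Bedrock Converse API request body, showing
# that temperature lives in inferenceConfig as an output-randomness control.
# The model ID is a placeholder; no request is actually made here.

request = {
    "modelId": "example-model-id",  # placeholder, not a real Bedrock model ID
    "messages": [
        {"role": "user", "content": [{"text": "Summarize this report."}]}
    ],
    "inferenceConfig": {
        "temperature": 0.2,  # lower = more deterministic output
        "maxTokens": 512,    # caps the response length, not the input size
    },
}

# With boto3 this dict would be unpacked into
# bedrock_runtime.converse(**request). Whether the prompt itself fits is
# governed by the model's context window, not by these parameters.
print(sorted(request["inferenceConfig"]))  # → ['maxTokens', 'temperature']
```

Note that nothing in `inferenceConfig` controls input capacity: a prompt that exceeds the model's context window is rejected regardless of these settings.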
Key Takeaway:
When selecting a foundation model on Amazon Bedrock for applications that must process large amounts of context (such as document analysis, long conversations, or complex queries), the context window size is a critical consideration: it directly determines how much information can fit into one prompt.