
Answer-first summary for fast verification
Answer: **B. Use negative prompts.**
## Detailed Explanation

When a foundation model generates images unrelated to the provided prompts, the issue typically stems from the model interpreting prompts too broadly or generating content that includes unintended elements. To address this, the most effective prompting technique is **negative prompting**.

### Why Negative Prompts Are Optimal (Option B)

Negative prompts explicitly instruct the model about what elements, themes, or characteristics to **avoid** in the generated output. This technique:

1. **Provides explicit constraints** - By specifying what should not appear in the images, the model receives clearer guidance about the boundaries of acceptable outputs.
2. **Reduces ambiguity** - When a prompt like "a sunny beach scene" might generate images with unrelated elements (e.g., random animals, people, or objects), adding negative prompts such as "people, animals, buildings" helps the model focus on the core elements.
3. **Improves relevance** - Negative prompts help filter out common but unwanted associations that foundation models might make based on their training data.
4. **Works with existing models** - This technique doesn't require retraining the foundation model; it's a prompt engineering approach that can be implemented immediately.

### Analysis of Other Options

- **Option A (Zero-shot prompts)**: Zero-shot prompting means providing a prompt without examples. While this is a common approach, it doesn't specifically address the problem of unrelated content generation; zero-shot prompts alone lack the explicit guidance needed to exclude unwanted elements.
- **Option C (Positive prompts)**: Positive prompts describe what should be included in the image. While essential, they don't inherently prevent the model from adding unrelated elements. The company is already using positive prompts (as the problem statement indicates) and is still getting unrelated images.
- **Option D (Ambiguous prompts)**: Ambiguous prompts would worsen the problem by providing unclear instructions, likely increasing rather than decreasing unrelated image generation.

### Best Practice Application

In AWS AI/ML services and foundation model applications, negative prompting is a recognized technique for improving output quality. When using services like Amazon Bedrock or working with foundation models directly, specifying negative prompts helps refine generation by:

- Excluding specific object categories
- Avoiding certain styles or themes
- Preventing inclusion of unwanted contextual elements

This approach aligns with AWS best practices for prompt engineering, where clear, explicit instructions (covering both what to include and what to exclude) yield more accurate and relevant model outputs.
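As a concrete illustration, some image models on Amazon Bedrock expose negative prompting directly in the request body; for example, the Amazon Titan Image Generator accepts a `negativeText` field alongside the positive prompt. The sketch below builds such a request payload. Parameter names and the model ID reflect the Titan Image Generator API as the author understands it and should be verified against current Bedrock documentation; the commented-out `invoke_model` call is shown for context only.

```python
import json


def build_titan_image_request(prompt: str, negative_prompt: str) -> str:
    """Assemble a Bedrock request body for Amazon Titan Image Generator,
    pairing the positive prompt with a negativeText exclusion list.

    Note: Titan expects negativeText to list the unwanted items themselves
    (e.g. "people, animals"), not negated phrases like "no people".
    """
    body = {
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {
            "text": prompt,                  # what the image SHOULD contain
            "negativeText": negative_prompt,  # what it should NOT contain
        },
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "height": 1024,
            "width": 1024,
            "cfgScale": 8.0,
        },
    }
    return json.dumps(body)


# Example: a beach scene with unrelated elements explicitly excluded
request_body = build_titan_image_request(
    prompt="a sunny beach scene with palm trees and clear water",
    negative_prompt="people, animals, buildings, text, watermarks",
)

# The serialized body would then be passed to Bedrock, e.g.:
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(
#     modelId="amazon.titan-image-generator-v1", body=request_body
# )
```

Other model families express the same idea differently; Stability AI models on Bedrock, for instance, take a list of `text_prompts` where an entry with a negative `weight` acts as a negative prompt. The common thread is that the exclusion is an explicit, structured part of the request rather than a phrase buried in the positive prompt.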
Author: LeetQuiz Editorial Team
A company observes that its foundation model (FM) produces images not relevant to the provided prompts. The company aims to adjust its prompting methods to reduce the generation of irrelevant images.
Which solution addresses this requirement?
A. Use zero-shot prompts.
B. Use negative prompts.
C. Use positive prompts.
D. Use ambiguous prompts.