
Answer-first summary for fast verification
Answer: Prompt engineering could expose the model to vulnerabilities such as prompt injection attacks.
## Analysis of Prompt Engineering Risks and Limitations Prompt engineering involves crafting specific instructions to guide generative AI model outputs. While valuable for improving performance, it has inherent risks and limitations that must be understood. ### Correct Answer: B **Prompt engineering could expose the model to vulnerabilities such as prompt injection attacks.** This is the most accurate description of a specific, concrete risk associated with prompt engineering. Prompt injection attacks occur when malicious users craft inputs that override or manipulate the original prompt's instructions, potentially causing the model to: - Disclose sensitive information - Execute unintended actions - Generate harmful or biased content - Bypass safety guardrails This vulnerability stems from the fundamental nature of how generative models process concatenated prompts and user inputs, creating an attack surface that sophisticated prompt engineering alone cannot fully eliminate. ### Why Other Options Are Less Suitable **A: Prompt engineering does not ensure that the model always produces consistent and deterministic outputs, eliminating the need for validation.** - While true that prompt engineering doesn't guarantee deterministic outputs, this describes a general limitation of generative AI rather than a specific risk introduced by prompt engineering itself. The statement about "eliminating the need for validation" is incorrect regardless of prompt quality. **C: Properly designed prompts reduce but do not eliminate the risk of data poisoning or model hijacking.** - This statement is factually correct but describes risks primarily associated with model training and fine-tuning, not prompt engineering. Data poisoning and model hijacking typically occur during the training phase, while prompt engineering operates during inference. **D: Prompt engineering does not ensure that the model will consistently generate highly reliable outputs when working with real-world data.** - Similar to option A, this describes a general limitation of AI models rather than a specific risk introduced by prompt engineering. All AI models face challenges with real-world data variability, regardless of prompt quality. ### Key Insight Prompt injection represents a direct, exploitable vulnerability that emerges specifically from the prompt engineering paradigm. Unlike general model limitations, this is an attack vector that adversaries can actively exploit, making it a critical security consideration when implementing prompt-engineered systems in production environments.
Ultimate access to all questions.
No comments yet.
Author: LeetQuiz Editorial Team
Which situation illustrates a possible risk and limitation of prompt engineering when using a generative AI model?
A
Prompt engineering does not ensure that the model always produces consistent and deterministic outputs, eliminating the need for validation.
B
Prompt engineering could expose the model to vulnerabilities such as prompt injection attacks.
C
Properly designed prompts reduce but do not eliminate the risk of data poisoning or model hijacking.
D
Prompt engineering does not ensure that the model will consistently generate highly reliable outputs when working with real-world data.