
Answer-first summary for fast verification
Answer: Prompt injection
## Detailed Explanation Prompt injection (Option B) is the correct AI system input vulnerability that must be addressed before deploying the chatbot. This vulnerability occurs when malicious users craft inputs designed to manipulate or subvert the AI model's intended behavior. ### Why Prompt Injection is the Critical Vulnerability: 1. **Direct Input Manipulation Risk**: Since the chatbot is publicly accessible 24/7 on an ecommerce website, it's exposed to potentially malicious users who could attempt to inject prompts that override the chatbot's original instructions. 2. **Ecommerce-Specific Threats**: In an order submission context, prompt injection could lead to: - Bypassing security protocols - Extracting sensitive customer information - Manipulating order processing logic - Accessing internal system data 3. **Immediate Deployment Concern**: Unlike other vulnerabilities that might develop over time, prompt injection is an inherent risk that exists from the moment the chatbot goes live and must be mitigated proactively. ### Analysis of Other Options: - **A: Data Leakage**: While important, this is typically a broader data security concern rather than a specific AI system input vulnerability. It involves unauthorized access to data, not manipulation of the AI's behavior through crafted inputs. - **C: LLM Hallucinations**: This refers to the model generating incorrect or nonsensical information. While relevant to AI quality, it's not specifically an "input vulnerability" that attackers exploit through malicious inputs. - **D: Concept Drift**: This occurs when the statistical properties of the target variable change over time, causing model performance degradation. It's a maintenance issue that develops gradually, not an immediate input vulnerability that needs resolution before deployment. ### Best Practices for Mitigation: Before deployment, the company should implement: 1. Input validation and sanitization 2. Prompt engineering with clear boundaries 3. Context-aware filtering 4. Monitoring for suspicious input patterns 5. Regular security testing and updates Prompt injection represents the most direct and immediate threat to the chatbot's security and functionality in this public-facing ecommerce scenario.
Author: LeetQuiz Editorial Team
Ultimate access to all questions.
No comments yet.