
Answer-first summary for fast verification
Answer: Implement a safety filter that detects harmful inputs and asks the LLM to respond that it is unable to assist
## Explanation

Option A is the correct answer because implementing a safety filter provides proactive protection against malicious inputs. Here's why:

- **Safety filter approach**: A safety filter can detect and block harmful inputs before they reach the LLM, preventing the model from processing dangerous or inappropriate content.
- **Controlled response**: By asking the LLM to respond that it is unable to assist, the application maintains a safe, controlled interaction without engaging with malicious content.
- **Prevention vs. reaction**: This approach prevents the problem rather than reacting to it after the fact.

**Why the other options are less effective:**

- **Option B**: Reducing interaction time does not address the core security issue; malicious users can still submit harmful inputs within the limited time.
- **Option C**: Continuing the conversation after detecting malicious input is counterproductive and potentially dangerous, as the model may still process harmful content.
- **Option D**: Increasing compute power only speeds up processing; it provides no security against malicious inputs.

This approach aligns with best practices for LLM application security, ensuring user safety while preserving the application's intended functionality for legitimate users.
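The filter-then-refuse pattern described above can be sketched as follows. This is a minimal illustration, not a production implementation: the function names (`is_harmful`, `handle_request`, `generate_poem`) and the regex blocklist are hypothetical, and a real deployment would typically use a trained classifier or a hosted moderation API instead of keyword matching.

```python
import re

# Hypothetical blocklist patterns for illustration only; a production
# safety filter would use a trained classifier or a moderation API
# rather than simple keyword matching.
HARMFUL_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal .*system prompt", re.IGNORECASE),
]

REFUSAL = "I'm sorry, but I can't assist with that request."


def is_harmful(user_input: str) -> bool:
    """Return True if the input matches any known-harmful pattern."""
    return any(p.search(user_input) for p in HARMFUL_PATTERNS)


def generate_poem(name: str) -> str:
    """Placeholder standing in for the actual LLM call."""
    return f"Happy birthday, {name}! May your year be bright."


def handle_request(user_input: str) -> str:
    """Screen the input BEFORE it ever reaches the LLM (Option A)."""
    if is_harmful(user_input):
        # Controlled response: refuse without engaging the model.
        return REFUSAL
    return generate_poem(user_input)
```

Note the key design choice: the check runs before the model is invoked, so harmful content is blocked proactively (prevention) rather than handled after the model has already processed it (reaction).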
Author: LeetQuiz
Question: 29
A Generative AI Engineer is developing an LLM application that users can use to generate personalized birthday poems based on their names.
Which technique would be most effective in safeguarding the application, given the potential for malicious user inputs?
A
Implement a safety filter that detects harmful inputs and asks the LLM to respond that it is unable to assist
B
Reduce the time that the users can interact with the LLM
C
Ask the LLM to remind the user that the input is malicious but continue the conversation with the user
D
Increase the amount of compute that powers the LLM to process input faster