
Q9. A developer uses an LLM to refactor code automatically but receives syntactically correct yet inefficient solutions. Which prompting technique helps the model reason through optimization trade-offs?
A. Zero-shot prompting with a very detailed instruction
B. Chain-of-thought prompting to analyze performance step-by-step
C. Role-based priming ("You are a performance engineer...")
D. Asking the model to generate 10 random refactoring ideas
Explanation:
Chain-of-thought prompting is the most effective technique for this scenario because:
Step-by-step reasoning: Chain-of-thought prompting encourages the LLM to break down complex problems into smaller, logical steps, which is crucial for analyzing optimization trade-offs.
Performance analysis: When refactoring code, efficiency considerations require analyzing time complexity, space complexity, memory usage, and algorithmic improvements. Chain-of-thought prompting helps the model explicitly consider these factors at each step.
Trade-off evaluation: Optimization often involves balancing different factors (e.g., readability vs. performance, memory vs. speed). Chain-of-thought prompting allows the model to weigh these trade-offs systematically.
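As a concrete illustration of the kind of trade-off such step-by-step reasoning surfaces, here is a minimal sketch (a hypothetical example, not taken from the quiz) of a refactor from quadratic membership checks to a linear-time version that spends extra memory on a set:

```python
def common_items_slow(a, b):
    # O(n*m): each `x in b` check scans the list b linearly
    return [x for x in a if x in b]

def common_items_fast(a, b):
    # O(n + m) time, at the cost of O(m) extra memory for the set:
    # a classic speed-vs-memory trade-off a chain-of-thought prompt
    # would ask the model to weigh explicitly
    b_set = set(b)
    return [x for x in a if x in b_set]
```

Both functions return the same result; the point of chain-of-thought prompting is to make the model articulate why the second version is faster and what it costs before committing to it.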
Why other options are less effective:
A. Zero-shot prompting: Even with detailed instructions, zero-shot prompting doesn't encourage the step-by-step reasoning needed for complex optimization analysis.
C. Role-based priming: While helpful for setting context, it doesn't inherently provide the structured reasoning process needed for optimization trade-offs.
D. Random generation: Generating random ideas doesn't ensure systematic analysis of performance implications and trade-offs.
Example of chain-of-thought prompting for this scenario:
"Refactor this code for better performance. Please reason step-by-step:
1. Analyze the current time complexity
2. Identify bottlenecks in the algorithm
3. Consider alternative data structures
4. Evaluate memory vs. speed trade-offs
5. Provide the optimized solution with justification"
This approach ensures the LLM systematically considers all relevant optimization factors before providing a solution.
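In practice, a prompt like the one above can be assembled with a small helper so the same reasoning steps are applied to every refactoring request. A minimal sketch (the function name and step list are illustrative, not a specific library API):

```python
def build_cot_prompt(code: str) -> str:
    """Build a chain-of-thought refactoring prompt for the given code."""
    steps = [
        "Analyze the current time complexity",
        "Identify bottlenecks in the algorithm",
        "Consider alternative data structures",
        "Evaluate memory vs. speed trade-offs",
        "Provide the optimized solution with justification",
    ]
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        "Refactor this code for better performance. "
        f"Please reason step-by-step:\n{numbered}\n\n```\n{code}\n```"
    )
```

The resulting string can then be sent to whichever LLM provider you use; keeping the steps in a list makes it easy to tailor them (e.g., adding an I/O-profiling step) without rewriting the whole prompt.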