
A team is using an LLM to generate API documentation for a legacy microservice. Zero-shot prompts produce vague descriptions and incorrect parameter formats. Which prompting strategy will most reliably generate accurate, structured API docs?
A
Chain-of-thought prompting to force step-by-step reasoning
B
Few-shot prompting with 3–4 high-quality API examples from the microservice
C
Role-based prompting ("You are an API architect...")
D
Temperature = 0 with no examples
Explanation:
Correct Answer: B (Few-shot prompting with 3–4 high-quality API examples from the microservice)
This question tests which prompting strategy best fixes vague descriptions and incorrect parameter formats in generated API docs. Let's analyze each option:
Option A: Chain-of-thought prompting forces step-by-step reasoning, but documentation generation is a formatting and accuracy problem, not a multi-step reasoning problem. Reasoning aloud does not teach the model the service's actual parameter formats.
Option B: Few-shot prompting with 3–4 high-quality API examples from the microservice shows the model the exact structure, field names, and parameter formats the team expects.
Option C: Role-based prompting ("You are an API architect...") sets tone and perspective but supplies no concrete format or service-specific details, so the output remains vague.
Option D: Temperature = 0 makes output deterministic, but determinism cannot correct content the model never saw; with no examples, it will reliably reproduce the same vague, incorrectly formatted docs.
Why B is correct: For accurate, consistent API documentation, real examples from the microservice give the model the most direct guidance on format, structure, and content expectations. Few-shot prompting is particularly effective when outputs must follow a specific pattern, which is exactly what API documentation consistency requires.
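A minimal sketch of how such a few-shot prompt might be assembled. The endpoint signatures and documentation format below are illustrative placeholders, not from any real microservice; the point is that each worked example demonstrates the exact structure the model should imitate.

```python
# Few-shot prompt assembly for API-doc generation (illustrative sketch).
# Endpoint names and the doc format are hypothetical examples.

def build_few_shot_prompt(examples, target_signature):
    """Assemble a few-shot prompt: an instruction, 3-4 worked examples,
    then the target endpoint, ending at the slot the model completes."""
    parts = ["Document each API endpoint in the exact format shown.\n"]
    for signature, doc in examples:
        parts.append(f"Endpoint: {signature}\nDocumentation:\n{doc}\n")
    # The prompt ends right where the model should begin writing.
    parts.append(f"Endpoint: {target_signature}\nDocumentation:\n")
    return "\n".join(parts)

examples = [
    ("GET /users/{id}",
     "Summary: Fetch a user by ID.\n"
     "Params: id (path, integer, required)\n"
     "Returns: 200 User JSON; 404 if not found"),
    ("POST /users",
     "Summary: Create a user.\n"
     "Params: body (JSON: name string required, email string required)\n"
     "Returns: 201 created User JSON; 400 on validation error"),
]

prompt = build_few_shot_prompt(examples, "DELETE /users/{id}")
```

Because every example follows one fixed template, the model's completion for `DELETE /users/{id}` is strongly constrained to the same Summary / Params / Returns structure, which is what zero-shot prompting failed to produce.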