
Answer-first summary for fast verification
Answer: Few-shot prompting with 3–4 high-quality API examples from the microservice
## Explanation

**Correct Answer: B (Few-shot prompting with 3–4 high-quality API examples from the microservice)**

The question asks which prompting strategy most reliably produces accurate, structured API documentation when zero-shot prompts yield vague descriptions and incorrect parameter formats. Let's analyze each option:

**Option A: Chain-of-thought prompting to force step-by-step reasoning**
- Chain-of-thought prompting encourages the model to break complex problems into intermediate steps
- Useful for complex reasoning tasks, but documentation generation is primarily a formatting and pattern-matching problem, not a multi-step reasoning problem

**Option B: Few-shot prompting with 3–4 high-quality API examples from the microservice** ✓
- Few-shot prompting provides concrete examples that show the model the exact format and requirements
- High-quality examples drawn from the actual microservice ensure the generated output matches the service's existing patterns and parameter conventions
- 3–4 examples provide sufficient context without overwhelming the model

**Option C: Role-based prompting ("You are an API architect...")**
- Role-based prompting sets context but provides no concrete examples
- Helpful for establishing perspective, but it lacks the specific format guidance that examples supply

**Option D: Temperature = 0 with no examples**
- Temperature = 0 makes the output more deterministic, but it does not improve the model's understanding of the task
- Without examples, the model still has no reference for what good documentation looks like; it will simply produce the same vague output more consistently

**Why B is correct:** For generating accurate and consistent API documentation, actual examples from the microservice give the model the most direct guidance on format, structure, and content expectations. Few-shot prompting is particularly effective when a specific pattern or format must be followed, which is exactly what API documentation consistency requires.
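The few-shot approach described above can be sketched in code. This is a minimal illustration of assembling such a prompt; the endpoint names, documentation format, and instruction wording are hypothetical placeholders, not taken from any real microservice or LLM SDK.

```python
# Hypothetical worked examples: (endpoint, finished doc) pairs pulled from
# the microservice's existing, human-verified documentation.
EXAMPLES = [
    ("GET /users/{id}",
     "### GET /users/{id}\n"
     "Returns a single user record.\n"
     "Parameters:\n"
     "- id (path, integer, required): unique user identifier"),
    ("POST /users",
     "### POST /users\n"
     "Creates a new user.\n"
     "Parameters:\n"
     "- name (body, string, required): display name\n"
     "- email (body, string, required): contact address"),
    ("DELETE /users/{id}",
     "### DELETE /users/{id}\n"
     "Deletes an existing user.\n"
     "Parameters:\n"
     "- id (path, integer, required): unique user identifier"),
]

def build_few_shot_prompt(target_endpoint: str) -> str:
    """Concatenate an instruction, the worked examples, and the target."""
    parts = ["Document each endpoint exactly in the format shown below."]
    for endpoint, doc in EXAMPLES:
        parts.append(f"Endpoint: {endpoint}\n{doc}")
    # Leave the final doc unfinished so the model completes it in-pattern.
    parts.append(f"Endpoint: {target_endpoint}\n###")
    return "\n\n".join(parts)

prompt = build_few_shot_prompt("GET /orders/{id}")
```

The resulting string would be sent as the LLM input; because the three examples all share one structure, the completion for the target endpoint tends to follow the same parameter format, which is the behavior the correct answer relies on.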
Author: Ritesh Yadav
A team is using an LLM to generate API documentation for a legacy microservice. Zero-shot prompts produce vague descriptions and incorrect parameter formats. Which prompting strategy will most reliably generate accurate, structured API docs?
A. Chain-of-thought prompting to force step-by-step reasoning
B. Few-shot prompting with 3–4 high-quality API examples from the microservice
C. Role-based prompting ("You are an API architect...")
D. Temperature = 0 with no examples