
**Answer:** C. Setting temperature to 0 and specifying strict output format
## Explanation

**Setting temperature to 0 and specifying strict output format** is the correct strategy because:

1. **Temperature = 0**: This setting makes the LLM's output deterministic by always selecting the most probable next token, eliminating randomness in the generation process.
2. **Strict output format**: By specifying a precise format (like JSON, SQL, or a specific template), you constrain the LLM's output structure, ensuring consistency in how the migration scripts are formatted.

**Why the other options are incorrect:**

- **A. Self-consistency decoding**: This technique generates multiple outputs and selects the most common one, but it doesn't guarantee deterministic output on a single attempt.
- **B. Few-shot prompting**: Providing examples helps guide the LLM but doesn't eliminate randomness in generation.
- **D. Chain-of-thought prompting**: This helps with reasoning but doesn't address the randomness in output generation.

For database schema migration scripts, where consistency is critical for deployment, deterministic outputs are essential to avoid deployment failures and ensure reproducibility.
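As a minimal sketch, assuming an OpenAI-style chat completions API (the model name and the `seed` parameter are illustrative; exact parameter support varies by provider), a deterministic migration request might combine `temperature=0` with a system prompt that pins the output format:

```python
def build_migration_request(table_ddl: str) -> dict:
    """Build a chat-completion request payload for a schema migration script.

    temperature=0 makes decoding greedy (always the most probable token),
    and the system prompt constrains the output to raw SQL only.
    """
    return {
        "model": "gpt-4o",  # hypothetical model name; substitute your provider's
        "temperature": 0,   # greedy decoding: removes sampling randomness
        "seed": 42,         # some APIs also accept a seed for reproducibility
        "messages": [
            {
                "role": "system",
                "content": (
                    "You generate database schema migration scripts. "
                    "Respond with ONLY a valid SQL script: no prose, "
                    "no markdown fences."
                ),
            },
            {"role": "user", "content": f"Write a migration for:\n{table_ddl}"},
        ],
    }

request = build_migration_request("ALTER TABLE users ADD COLUMN email TEXT;")
```

The same payload shape works across attempts, so the only source of variation left is the model's decoding, which `temperature=0` pins to the most probable sequence.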
Author: Ritesh Yadav
An LLM is used to produce database schema migration scripts. Outputs vary across attempts, causing deployment inconsistencies. Which strategy will ensure deterministic, stable outputs?
A. Self-consistency decoding
B. Few-shot prompting
C. Setting temperature to 0 and specifying strict output format
D. Chain-of-thought prompting