
An LLM is used to produce database schema migration scripts. Outputs vary across attempts, causing deployment inconsistencies. Which strategy will ensure deterministic, stable outputs?
A. Self-consistency decoding
B. Few-shot prompting
C. Setting temperature to 0 and specifying strict output format
D. Chain-of-thought prompting
Explanation:
Setting temperature to 0 and specifying a strict output format is the correct strategy because:
Temperature = 0: This makes decoding deterministic by always selecting the most probable next token (greedy decoding), eliminating sampling randomness from the generation process.
Strict output format: Specifying a precise format (such as JSON, raw SQL, or a fixed template) constrains the structure of the output, so the migration scripts are formatted the same way on every run. A minimal example combining both settings is sketched below.
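As an illustration only, here is a minimal sketch of these two settings using the OpenAI Python SDK. The model name, seed value, prompt, and JSON shape are assumptions chosen for the example, not details from the question:

```python
# Minimal sketch: deterministic migration-script generation.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY
# in the environment; model name, seed, and schema are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You generate database schema migration scripts. "
    "Respond with JSON only, in exactly this shape: "
    '{"up": "<SQL to apply>", "down": "<SQL to roll back>"}'
)

response = client.chat.completions.create(
    model="gpt-4o",                           # illustrative model name
    temperature=0,                            # greedy decoding: no sampling randomness
    seed=42,                                  # optional: further stabilizes output where supported
    response_format={"type": "json_object"},  # strict output format: valid JSON only
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Add a nullable 'last_login' timestamp column to the 'users' table."},
    ],
)

# e.g. {"up": "ALTER TABLE users ADD COLUMN last_login TIMESTAMP NULL;", "down": "..."}
print(response.choices[0].message.content)
```

With temperature pinned to 0 the model always follows its highest-probability path, and the JSON response format plus the fixed up/down schema in the prompt constrains the structure, which together address both sources of run-to-run variation described in the question.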
Why other options are incorrect:
A. Self-consistency decoding: This technique samples multiple outputs (at a nonzero temperature) and selects the most common answer; it relies on randomness rather than eliminating it, so any single attempt remains non-deterministic.
B. Few-shot prompting: Providing examples helps guide the LLM but doesn't eliminate randomness in generation.
D. Chain-of-thought prompting: This helps with reasoning but doesn't address the randomness in output generation.
For database schema migration scripts, where consistency is critical for deployment, deterministic outputs are essential to avoid deployment failures and ensure reproducibility.