
An LLM is used to produce database schema migration scripts. Outputs vary across attempts, causing deployment inconsistencies. Which strategy will ensure deterministic, stable outputs?
A. Self-consistency decoding
B. Few-shot prompting
C. Setting temperature to 0 and specifying a strict output format
D. Chain-of-thought prompting
Explanation:
The correct answer is C. Setting temperature to 0, combined with a strict output format, is the key strategy for achieving deterministic outputs from an LLM. Here's why:
- Temperature controls the randomness of an LLM's token sampling.
- At temperature = 0, the model always selects the most probable next token (greedy decoding), which eliminates sampling randomness and yields consistent outputs across runs; the sketch below illustrates this.
- Format constraints guide the LLM to produce structured output.
- A schema specification ensures the output follows a predictable pattern, and validation rules can then be applied to check output consistency.
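To make the temperature mechanic concrete, here is a minimal sketch (plain NumPy with hypothetical next-token logits, not any particular model's decoder) showing how temperature rescales token probabilities and why temperature = 0 collapses sampling to a greedy argmax:

```python
import numpy as np

def sample_token(logits, temperature, rng):
    """Pick the next token id from raw logits at a given temperature."""
    if temperature == 0:
        # Greedy decoding: always the single most probable token,
        # so repeated runs produce identical output.
        return int(np.argmax(logits))
    # Temperature scaling: divide logits by T before the softmax.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    # Sampling step: any nonzero temperature leaves randomness in play.
    return int(rng.choice(len(logits), p=probs))

rng = np.random.default_rng()
logits = np.array([2.0, 1.5, 0.3])  # hypothetical next-token logits
print([sample_token(logits, 0.8, rng) for _ in range(5)])  # may vary run to run
print([sample_token(logits, 0.0, rng) for _ in range(5)])  # always token 0
```

At any nonzero temperature the sampling step can return different tokens on different runs; at zero it cannot.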
A. Self-consistency decoding - This samples multiple outputs (at a nonzero temperature) and selects the most frequent one. It improves reliability, but each run still draws fresh random samples, so it does not guarantee identical results across runs; see the sketch after this list.
B. Few-shot prompting - Providing examples improves output quality and format adherence, but it does not remove randomness from the generation process.
D. Chain-of-thought prompting - This helps with reasoning quality but does not control the randomness of the output.
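For contrast, a minimal sketch of self-consistency decoding under a toy assumption (a random.choice stand-in for a stochastic LLM call; the two candidate migration strings are hypothetical):

```python
import random
from collections import Counter

def generate():
    # Stand-in for one stochastic LLM call at temperature > 0.
    return random.choice([
        "ALTER TABLE orders ADD COLUMN archived_at TIMESTAMP;",
        "ALTER TABLE orders ADD archived_at TIMESTAMP NULL;",
    ])

def self_consistency(n=5):
    """Majority vote over n sampled outputs."""
    votes = Counter(generate() for _ in range(n))
    return votes.most_common(1)[0][0]

print(self_consistency())  # the vote reduces variance, but runs can still disagree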
In practice, to get stable migration scripts:
- Set temperature = 0 to eliminate sampling randomness.
- Define a strict JSON/XML schema for the output format.
- Include validation rules in the prompt.
- Use a deterministic seed if the provider supports one.
- Implement post-generation validation to confirm each script matches the expected structure.
An end-to-end sketch follows this list.
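As one illustration, here is a minimal end-to-end sketch using the OpenAI Python SDK; the model name, the table and column in the prompt, and the validation checks are assumptions, and seed-based determinism is best-effort on most providers:

```python
import json
from openai import OpenAI  # assumes the official openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Generate a migration script that adds a nullable 'archived_at' "
    "timestamp column to the 'orders' table. Respond as JSON with keys "
    "'up_sql' and 'down_sql' only."
)

resp = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute any chat model you have access to
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,   # greedy decoding: removes sampling randomness
    seed=42,         # deterministic seed, where the provider supports it
    response_format={"type": "json_object"},  # strict output format
)

migration = json.loads(resp.choices[0].message.content)

# Post-generation validation: reject anything outside the agreed schema.
assert set(migration) == {"up_sql", "down_sql"}, "unexpected keys"
for key in ("up_sql", "down_sql"):
    assert isinstance(migration[key], str) and migration[key].strip(), key
```

Even with these settings, treat determinism as best-effort and run the validation step on every generated script before it reaches a deployment pipeline.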
This approach is crucial for database migrations, where consistency and reliability are paramount: inconsistent scripts could lead to data corruption or deployment failures.