An LLM is used to produce database schema migration scripts. Outputs vary across attempts, causing deployment inconsistencies. Which strategy will ensure deterministic, stable outputs?
A. Self-consistency decoding
B. Few-shot prompting
C. Setting temperature to 0 and specifying strict output format
D. Chain-of-thought prompting
Explanation:
Setting temperature to 0 (greedy decoding), combined with a strict output format specification, is the key strategy for achieving deterministic, stable outputs from an LLM. Here's why the other options fall short:
A. Self-consistency decoding - This samples multiple outputs (at a nonzero temperature) and selects the most frequent answer, which improves accuracy but does not guarantee the same result on any single run.
B. Few-shot prompting - While providing examples improves quality, it doesn't eliminate randomness in the output generation process.
D. Chain-of-thought prompting - This helps with reasoning but doesn't control the randomness of the output.
This approach is crucial for database migrations where consistency and reliability are paramount, as inconsistent scripts could lead to data corruption or deployment failures.
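The settings described above can be sketched as a request builder. This is a minimal illustration, not a definitive implementation: the model name is a placeholder, and the parameter names (`temperature`, `top_p`, `seed`, `response_format`) follow common OpenAI-style chat-completion APIs — check your provider's documentation, since support for `seed` and `response_format` varies.

```python
def build_migration_request(table_ddl: str) -> dict:
    """Assemble a chat-completion request tuned for reproducible output."""
    system_prompt = (
        "You generate SQL migration scripts. "
        "Respond with ONLY a JSON object of the form: "
        '{"up": "<forward SQL>", "down": "<rollback SQL>"}'
    )
    return {
        "model": "example-model",    # placeholder model name
        "temperature": 0,            # greedy decoding: always pick the top token
        "top_p": 1,                  # disable nucleus-sampling truncation
        "seed": 42,                  # some providers honor a seed for extra stability
        "response_format": {"type": "json_object"},  # strict, machine-checkable shape
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Write a migration for:\n{table_ddl}"},
        ],
    }

request = build_migration_request("ALTER TABLE users ADD COLUMN email TEXT;")
```

Pinning both the sampling behavior (temperature 0) and the output shape (a strict JSON schema in the system prompt, reinforced by `response_format`) means the same input reliably yields the same migration script, which can then be validated and version-controlled like any other deployment artifact.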