
Answer-first summary for fast verification
Answer: Temperature = 0.0
**Explanation:** In Large Language Model (LLM) configurations:

- **Temperature** controls the randomness of predictions. A temperature of 0.0 reduces decoding to greedy selection: the model always picks the highest-probability token, so the same input produces the same output.
- **Top-p = 0.9** (nucleus sampling) sets a cumulative probability threshold: the model samples from the smallest set of tokens covering 90% of the probability mass, which still allows randomness.
- **Temperature = 1.5** increases randomness, making outputs more creative and less predictable.
- **Top-k = 50** limits the sampling pool to the 50 most likely tokens, which still allows variation.

For financial analytics requiring deterministic outputs, **Temperature = 0.0** is the correct configuration: it eliminates sampling randomness and ensures reproducibility.
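The interaction of these three controls can be sketched in plain Python. This is an illustrative toy sampler over raw logits, not any specific library's API; the function name and signature are invented for this example. It shows why temperature = 0.0 is deterministic (it short-circuits to argmax) while top-k and top-p merely shrink the pool that is still sampled at random.

```python
import math
import random

def sample_token(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    """Pick a token index from raw logits using common LLM sampling controls.

    Toy implementation: temperature == 0.0 short-circuits to greedy argmax
    (deterministic); otherwise logits are divided by temperature before
    softmax, and optional top-k / top-p filters shrink the candidate pool.
    """
    rng = rng or random.Random()

    # Temperature 0: always take the single highest-probability token.
    if temperature == 0.0:
        return max(range(len(logits)), key=lambda i: logits[i])

    # Temperature-scaled softmax (subtract max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    pool = sorted(((i, e / total) for i, e in enumerate(exps)),
                  key=lambda t: t[1], reverse=True)

    if top_k is not None:
        pool = pool[:top_k]            # keep only the k most likely tokens
    if top_p is not None:
        kept, cum = [], 0.0
        for i, p in pool:
            kept.append((i, p))
            cum += p
            if cum >= top_p:           # nucleus: stop once mass >= top_p
                break
        pool = kept

    # Renormalise the surviving pool and sample from it -- still random.
    z = sum(p for _, p in pool)
    r = rng.random() * z
    for i, p in pool:
        r -= p
        if r <= 0:
            return i
    return pool[-1][0]

logits = [2.0, 1.0, 0.5, -1.0]
# Greedy decoding: temperature 0 returns the same index on every run.
assert all(sample_token(logits, temperature=0.0) == 0 for _ in range(10))
```

Note that with any temperature above 0, top-k = 50 and top-p = 0.9 still draw from a multi-token pool, which is exactly why options B, C, and D cannot guarantee reproducibility.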
Author: Ritesh Yadav
A financial analytics company needs deterministic LLM outputs so that every run produces the same result given the same input. Which configuration is most appropriate?
A. Temperature = 0.0
B. Top-p = 0.9
C. Temperature = 1.5
D. Top-k = 50