
**Answer: A. Adjust the prompt.**
## Explanation

**Why A is correct:**
- Adjusting the prompt is the most direct way to control LLM output characteristics such as length and language.
- By adding specific instructions to the prompt (e.g., "Respond briefly in Spanish"), you can guide the model to produce outputs that match your requirements.
- Prompt engineering is a standard technique for customizing the behavior of a pre-trained LLM without retraining it or changing the model architecture.

**Why the other options are incorrect:**
- **B. Choose an LLM of a different size:** Model size affects overall capability and performance but does not directly control output length or language.
- **C. Increase the temperature:** Temperature controls the randomness and creativity of outputs, not their length or language.
- **D. Increase the Top K value:** Top K sampling affects output diversity by limiting token selection to the K most probable tokens; it does not control output length or language.

**Key concept:** Prompt engineering steers LLM behavior by supplying specific instructions, constraints, and examples in the input prompt, with no changes to the model itself.
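As a minimal illustration of the technique, the sketch below builds a prompt that constrains both length and language. The instruction wording, the `build_prompt` helper, and the example query are assumptions for demonstration, not part of the question; the prompt string is the point, and it would be passed to whatever LLM client the company uses.

```python
def build_prompt(user_query: str) -> str:
    """Prepend explicit instructions that constrain the LLM's output.

    The constraints on length ("at most two sentences") and language
    ("in Spanish only") are stated directly in the prompt, which is the
    prompt-engineering approach described in option A.
    """
    instructions = (
        "You are a product-recommendation assistant. "
        "Respond in Spanish only, in at most two sentences."
    )
    return f"{instructions}\n\nCustomer: {user_query}\nAssistant:"

if __name__ == "__main__":
    # The resulting string is what gets sent to the model; no model
    # parameters (size, temperature, Top K) need to change.
    print(build_prompt("I need a lightweight laptop for travel."))
```

Because the constraints live in the prompt text itself, the same pre-trained model can serve other use cases simply by swapping the instruction block.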
Author: Ritesh Yadav
A company is using a pre-trained large language model (LLM) to build a chatbot for product recommendations. The company needs the LLM outputs to be short and written in a specific language. Which solution will align the LLM response quality with the company's expectations?
A. Adjust the prompt.
B. Choose an LLM of a different size.
C. Increase the temperature.
D. Increase the Top K value.