
Assuming the API key was properly defined, what change does the Generative AI Engineer need to make to fix their chain?
Option A
prompt_template = "Tell me a {adjecive} joke"
prompt = PromptTemplate(
input_variables=["adjecive"],
template=prompt_template
)
llm = LLMChain(prompt=prompt)
llm.generate("funny")
Option B
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
    input_variables=["adjective"],
    template=prompt_template
)
llm = LLMChain(prompt=prompt.format("funny"))
llm.generate()
Explanation:
Option A is correct because it passes the PromptTemplate object to LLMChain and supplies the input value "funny" when calling the generate() method. This is the standard way to use LangChain's LLMChain.
Option B is incorrect because:
- prompt.format("funny") is passed to LLMChain, so the chain receives an already formatted string instead of the PromptTemplate object it expects.
- The generate() method is then called without any arguments, but LLMChain expects input values at generation time.
In LangChain, the correct pattern is to pass the PromptTemplate object to LLMChain and then provide the input variables when calling the generate() or invoke() methods.