
Assuming the API key was properly defined, what change does the Generative AI Engineer need to make to fix their chain?
```python
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
    input_variables=["adjective"],
    template=prompt_template
)
llm = LLMChain(prompt=prompt.format("funny"))
llm.generate()
```
Explanation:
Option A is correct because it passes the PromptTemplate object itself to the LLMChain and supplies the input value "funny" when calling the generate() method. This is the standard way to use LangChain's LLMChain.
Option B is incorrect because:
- It passes the already-formatted string prompt.format("funny") to LLMChain instead of the PromptTemplate object.
- The generate() method is then called without any arguments, but LLMChain expects the input values at generation time.

In LangChain, the correct pattern is to pass the PromptTemplate object to LLMChain and then provide the input variables when calling the generate() or invoke() methods.
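To make the correct pattern concrete without needing an API key, here is a minimal sketch that mimics the LLMChain/PromptTemplate shape with plain-Python stand-ins. The PromptTemplate, LLMChain, and fake_llm classes below are simplified hypothetical stubs, not the real LangChain implementations; they only illustrate the key point that the template object is stored on the chain and the inputs are supplied to generate(), not at construction time.

```python
# Hypothetical stand-ins mirroring the LangChain API shape (not the real library).

class PromptTemplate:
    def __init__(self, input_variables, template):
        self.input_variables = input_variables
        self.template = template

    def format(self, **kwargs):
        # Fill the template's placeholders, e.g. {adjective}.
        return self.template.format(**kwargs)


class LLMChain:
    def __init__(self, llm, prompt):
        self.llm = llm        # a callable: prompt string -> completion string
        self.prompt = prompt  # the PromptTemplate OBJECT, not a formatted string

    def generate(self, inputs):
        # Input variables are provided here, at generation time.
        return [self.llm(self.prompt.format(**values)) for values in inputs]


# A stand-in "LLM" that just echoes the prompt it receives.
def fake_llm(text):
    return f"LLM saw: {text}"


prompt = PromptTemplate(
    input_variables=["adjective"],
    template="Tell me a {adjective} joke",
)
chain = LLMChain(llm=fake_llm, prompt=prompt)  # pass the template object
print(chain.generate([{"adjective": "funny"}])[0])
```

With the real library, the structure is the same: build LLMChain with an LLM and the PromptTemplate object, then call generate() (or invoke()) with the input variables.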