
What is the correct way to initialize an LLMChain with an OpenAI model using a prompt template?
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
from langchain.chains import LLMChain

# Define a prompt template with one input variable
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
    input_variables=["adjective"],
    template=prompt_template,
)

# Initialize the LLM and wrap it together with the prompt in an LLMChain
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)

# generate() takes a list of input dicts and returns an LLMResult;
# for a single input, run(adjective="funny") is a common shorthand
response = llm_chain.generate([{"adjective": "funny"}])
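To see what the prompt-formatting step does before the text reaches the model, here is a minimal sketch in plain Python. It is illustrative only, not LangChain's implementation: `PromptTemplate.format` additionally validates that the supplied variables match `input_variables`, but the substitution itself is ordinary string formatting.

```python
# Hypothetical stand-in for PromptTemplate.format: plain str.format substitution.
def format_prompt(template: str, **variables: str) -> str:
    """Substitute named variables into the template string."""
    return template.format(**variables)

prompt_template = "Tell me a {adjective} joke"
formatted = format_prompt(prompt_template, adjective="funny")
print(formatted)  # Tell me a funny joke
```

The chain performs this substitution first, then passes the formatted string to the LLM.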