What is the correct way to initialize an LLMChain with an OpenAI model using a prompt template?

```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# Define a prompt template with one input variable
prompt_template = "Tell me a {adjective} joke"
prompt = PromptTemplate(
    input_variables=["adjective"],
    template=prompt_template
)

# Initialize the LLM and wrap both in an LLMChain
llm = OpenAI()
llm_chain = LLMChain(prompt=prompt, llm=llm)

# generate() accepts a list of input dicts, one per call
response = llm_chain.generate([{"adjective": "funny"}])
```

Databricks Certified Generative AI Engineer - Associate Quiz - LeetQuiz