
Answer-first summary for fast verification
Answer: Using the `option_context` context manager
The correct answer is **D. Using the `option_context` context manager**. This method temporarily sets specific option values for the duration of a code block: within the `with` block the given options apply, and once the block exits they revert to their previous values. Here's a brief guide on how to use it:

1. Import the context manager (in pandas API on Spark it is exposed at the top level of `pyspark.pandas`):

```python
from pyspark.pandas import option_context
```

2. Create a context:

```python
import pyspark.pandas as ps
from pyspark.pandas import option_context

with option_context("display.max_rows", 100, "compute.ops_on_diff_frames", True):
    # Code executed here uses the temporary option values
    df = ps.DataFrame(…)
    result = df.groupby(…).mean()
    # …
```

Any code executed within this block uses the specified options; once the block exits, the options revert to their previous values.

**Why not the others?**

- **Option A** is incorrect because modifying the Spark configuration directly does not affect pandas API on Spark options.
- **Option B** is incorrect because pandas API on Spark does not use a `config.py` file for setting options.
- **Option C** is incorrect because `set_option()` changes options globally, not just within a specific code block.

**Key benefits of using `option_context`:**

- **Isolation:** Options only affect the intended code block, preventing unintended side effects.
- **Readability:** The code explicitly states which options apply within a particular context.
- **Flexibility:** Options can be modified temporarily without affecting global settings.
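Because pandas API on Spark deliberately mirrors pandas' option machinery (`get_option`, `set_option`, `reset_option`, `option_context`), the scoping behavior can be sketched with plain pandas, which exposes the same API; swap in `pyspark.pandas` for the Spark-backed version. This minimal example shows the temporary override reverting on exit, in contrast to the global change made by `set_option()`:

```python
import pandas as pd

# Record the current (global) value of an option.
before = pd.get_option("display.max_rows")

# Inside option_context, the override is active...
with pd.option_context("display.max_rows", 5):
    inside = pd.get_option("display.max_rows")

# ...and on exit the previous value is restored automatically,
# even if the block raised an exception.
after = pd.get_option("display.max_rows")

# set_option, by contrast, changes the option globally until it is
# explicitly reverted (e.g. with reset_option).
pd.set_option("display.max_rows", 5)
global_now = pd.get_option("display.max_rows")
pd.reset_option("display.max_rows")

print(inside, after == before, global_now)
```

The restore-on-exit guarantee is what makes `option_context` safe to use in shared code paths, where a leaked global option change could silently alter behavior elsewhere.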
Author: LeetQuiz Editorial Team