You are working with a large dataset stored as JSON files in a directory named 'json_data', with files named in the pattern 'data_YYYYMMDD.json' (where YYYYMMDD represents the date). You need to create a temporary view named 'json_view' in Spark to analyze this data. Considering the need for efficient data processing and correct use of Spark's DataFrame API, which of the following queries would you use? Choose the option that correctly reads the JSON files and creates the temporary view with the appropriate options for handling JSON data.
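
For reference, a minimal PySpark sketch of the pattern the question is testing — reading the date-stamped JSON files with a glob path and registering the result as a temporary view. The glob pattern, application name, and sample query are illustrative assumptions, not the answer options themselves:

```python
from pyspark.sql import SparkSession

# Build (or reuse) a SparkSession; assumes a local Spark setup.
spark = SparkSession.builder.appName("json_view_example").getOrCreate()

# Read every file matching the 'data_YYYYMMDD.json' naming pattern in
# the 'json_data' directory. spark.read.json() infers the schema by
# default; pass multiLine=True only if each file holds a single
# multi-line JSON document instead of newline-delimited records.
df = spark.read.json("json_data/data_*.json")

# Register the DataFrame as a temporary view so it can be queried with SQL.
df.createOrReplaceTempView("json_view")

# Example query against the view (hypothetical).
spark.sql("SELECT COUNT(*) FROM json_view").show()
```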