
You are working with a large dataset in Azure Databricks that has a 'product_info' column containing JSON objects. Each JSON object includes various product details, among them a 'price' field stored as a numeric value. Your task is to write a Spark SQL query that extracts the 'price' field from the JSON objects, casts it to a double type, and returns both the original 'price' and the newly cast 'double_price'. Which of the following queries correctly and efficiently accomplishes this task? (Choose one option)
A. SELECT product_info.price, CAST(product_info.price AS DOUBLE) as double_price FROM dataset_
B. SELECT JSON_EXTRACT(product_info, '.price') AS DOUBLE) as double_price FROM dataset_
C. SELECT product_info['price'], CAST(product_info['price'] AS DOUBLE) as double_price FROM dataset_
D. SELECT price, CAST(price AS DOUBLE) as double_price FROM dataset_
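The operation the question is testing — pull a field out of a JSON object and cast it to a double — can be sketched outside Spark with Python's standard-library json module. The sample rows and the helper name below are illustrative assumptions, not part of the question:

```python
import json

# Hypothetical sample rows: each 'product_info' value is a JSON string,
# as it would be before parsing in a Spark table.
rows = [
    {"product_info": '{"name": "widget", "price": 19.99}'},
    {"product_info": '{"name": "gadget", "price": 5}'},
]

def extract_double_price(row):
    """Parse the JSON object and cast its 'price' field to a Python
    float, mirroring CAST(... AS DOUBLE) in Spark SQL."""
    info = json.loads(row["product_info"])
    price = info["price"]
    return {"price": price, "double_price": float(price)}

results = [extract_double_price(r) for r in rows]
```

Note that the second row's integer price becomes a float after the cast, which is exactly the normalization the DOUBLE cast provides in the Spark SQL options above.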