You are working with a Delta Lake table named 'product_data' that contains the columns 'product_id', 'product_name', and 'price'. You are tasked with creating a new Delta Lake table, 'product_prices', that includes only the 'product_id' and 'price' columns, with 'price' explicitly cast to DECIMAL(10,2) to ensure precision in financial calculations. The solution must also follow best practices for Delta Lake table creation on Azure Databricks. Which of the following Spark SQL queries achieves this? Choose the best of the four options provided.
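For reference, the pattern a correct option would follow is a CREATE TABLE AS SELECT (CTAS) statement with an explicit cast. The sketch below is illustrative only, not one of the four answer choices; the table and column names come from the question, while the exact formatting is an assumption.

```sql
-- Minimal sketch of the expected pattern: CTAS with an explicit cast.
-- On Azure Databricks, Delta is the default table format, but stating
-- USING DELTA makes the intent explicit.
CREATE TABLE product_prices
USING DELTA
AS
SELECT
  product_id,
  CAST(price AS DECIMAL(10, 2)) AS price
FROM product_data;
```

When evaluating the options, look for exactly this combination: a CTAS (rather than a separate CREATE plus INSERT), a SELECT limited to the two required columns, and a CAST to DECIMAL(10,2) applied to 'price'.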