You are working with a Delta Lake table named 'product_data' that contains the columns 'product_id', 'product_name', and 'price'. You are tasked with creating a new Delta Lake table, 'product_prices', that includes only the 'product_id' and 'price' columns, with the 'price' column explicitly cast to DECIMAL(10,2) to ensure precision in financial calculations. The solution must also follow best practices for Delta Lake table creation on Azure Databricks. Given these requirements, which of the following Spark SQL queries would you use? Choose the best option from the four provided.
Explanation:
The correct answer is A. It uses the 'USING DELTA' clause to explicitly specify the Delta Lake format, which is a best practice for table creation on Azure Databricks. It applies the CAST function so that the 'price' column is stored as DECIMAL(10,2), satisfying the precision requirement for financial calculations. Finally, the 'AS' keyword introduces the SELECT statement (a CREATE TABLE AS SELECT, or CTAS), and that SELECT limits the new table to the 'product_id' and 'price' columns, fulfilling all of the stated requirements.
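For reference, since the answer options are not reproduced above, the correct query described in this explanation would take roughly the following form (a sketch reconstructed from the explanation, not the exact option text):

    -- Create a Delta Lake table from a SELECT (CTAS),
    -- keeping only product_id and a price cast to DECIMAL(10,2)
    CREATE TABLE product_prices
    USING DELTA
    AS
    SELECT
      product_id,
      CAST(price AS DECIMAL(10, 2)) AS price
    FROM product_data;

Note that on recent Databricks runtimes Delta is the default table format, but stating 'USING DELTA' explicitly keeps the intent unambiguous and matches the best-practice guidance referenced in the explanation.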