
Answer-first summary for fast verification
Answer: PIVOT
## Explanation

The correct answer is **PIVOT**.

### Why PIVOT is correct:

1. **PIVOT operation**: In SQL, the PIVOT operation is specifically designed to transform data from a long format (where each row represents a single observation) to a wide format (where multiple observations are represented as columns).
2. **Spark SQL support**: Databricks Spark SQL supports the PIVOT clause, which allows you to rotate rows into columns, aggregating data as needed.
3. **Common use case**: PIVOT is commonly used for creating cross-tabulation reports, transforming normalized data into denormalized formats for reporting and analysis.

### Why other options are incorrect:

- **TRANSFORM**: This is not a standard SQL keyword for converting table formats. In some contexts, it might refer to data transformation operations, but not specifically for long-to-wide conversion.
- **SUM**: This is an aggregation function used to calculate totals, not for restructuring table formats.
- **CONVERT**: This is typically used for data type conversion (e.g., converting strings to dates) or character set conversion, not for restructuring table layouts.

### Example usage in Spark SQL:

```sql
SELECT *
FROM sales_data
PIVOT (
  SUM(amount)
  FOR product_category IN ('Electronics', 'Clothing', 'Books')
)
```

This would transform rows with different product categories into columns with aggregated sales amounts.
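The same long-to-wide reshaping can be sketched outside SQL. Below is a minimal illustration using pandas' `pivot_table`, which plays the role of the PIVOT clause here; the `sales_data`-style column names (`store`, `product_category`, `amount`) are hypothetical and chosen to mirror the SQL example above.

```python
import pandas as pd

# Long format: one row per observation (hypothetical sales_data)
long_df = pd.DataFrame({
    "store": ["A", "A", "A", "B", "B", "B"],
    "product_category": ["Electronics", "Clothing", "Books"] * 2,
    "amount": [100, 50, 20, 80, 60, 30],
})

# Wide format: one column per product_category, values aggregated
# with sum -- analogous to PIVOT (SUM(amount) FOR product_category IN (...))
wide_df = long_df.pivot_table(
    index="store",
    columns="product_category",
    values="amount",
    aggfunc="sum",
).reset_index()

print(wide_df)
```

Each distinct `product_category` value becomes its own column, with `amount` summed per store, which is exactly the long-to-wide transformation the question describes.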
Author: Keng Suppaseth