
Answer-first summary for fast verification
Answer: No
The solution does not meet the goal. `df.explain()` displays the logical and physical execution plan that Spark will use to run the DataFrame's operations; it does not compute descriptive statistics. In fact, `explain()` prints the plan and returns `None`, so chaining `.show()` onto it raises an `AttributeError`. The goal requires the minimum, maximum, mean, and standard deviation for all string and numeric columns, which are summary statistics; `df.describe()` or `df.summary()` should be used instead, as they compute and display exactly these values.
Author: LeetQuiz Editorial Team
You have a Fabric tenant with a new semantic model in OneLake. You use a Fabric notebook to load the data into a Spark DataFrame. You need to evaluate the data by calculating the minimum, maximum, mean, and standard deviation for all string and numeric columns.
Solution: You run the following PySpark code:
df.explain().show()
Does this solution achieve the goal?
A. Yes
B. No