
Answer-first summary for fast verification
Answer: No
The solution does not meet the goal because `df.show()` only prints the raw rows of the DataFrame; it performs no statistical calculations. The goal requires computing specific statistics (min, max, mean, and standard deviation) across the string and numeric columns. Community discussion supports this with 100% consensus on answer B (No), and commenters point to the correct alternatives: `df.describe()`, `df.summary()`, or `df.agg()` with the appropriate statistical functions. Note that min and max are computed for string columns as well (lexicographically), while mean and standard deviation are only meaningful for numeric columns and come back null for strings.
Author: LeetQuiz Editorial Team
You have a Fabric tenant with a new semantic model in OneLake. You are using a Fabric notebook to read the data into a Spark DataFrame. You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
You implement the following solution: use the PySpark expression `df.show()`.
Does this solution meet the goal?
A. Yes
B. No