You have a Fabric tenant with a new semantic model in OneLake. You use a Fabric notebook to load the data into a Spark DataFrame. You need to evaluate the data by calculating the minimum, maximum, mean, and standard deviation for all string and numeric columns.
Solution: You run the following PySpark code:
df.explain()
Does this solution achieve the goal?
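For context, here is a minimal runnable sketch of what the proposed call actually does, contrasted with a statistics call that matches the stated goal. The SparkSession setup and the sample data are hypothetical, included only to make the snippet self-contained; only df.explain() comes from the proposed solution above.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# Hypothetical sample DataFrame with one string and one numeric column.
df = spark.createDataFrame([("a", 1.0), ("b", 2.0)], ["name", "value"])

# explain() only prints the query execution plan and returns None;
# it computes no column statistics.
df.explain()

# By contrast, summary() returns a new DataFrame of the requested statistics
# for all string and numeric columns (mean and stddev are null for strings).
df.summary("min", "max", "mean", "stddev").show()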