You have a Fabric tenant containing a new semantic model in OneLake. You are using a Fabric notebook to read the data into a Spark DataFrame. You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.
You implement the following PySpark code:
df.explain()
Does this solution meet the goal?