You have a Fabric tenant containing a new semantic model in OneLake. You are using a Fabric notebook to read the data into a Spark DataFrame. You need to evaluate the data to calculate the min, max, mean, and standard deviation values for all the string and numeric columns.

You implement the following PySpark code:

```python
df.explain()
```

Does this solution meet the goal?
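
For context, here is a minimal sketch contrasting `df.explain()` with the DataFrame statistics methods `describe()` and `summary()` from the standard PySpark API. The sample data and column names are illustrative assumptions, not part of the question; in a Fabric notebook, `df` would instead be read from the OneLake semantic model.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative stand-in for the DataFrame read from OneLake.
df = spark.createDataFrame(
    [("A", 10.0), ("B", 20.0), ("C", 30.0)],
    ["category", "amount"],
)

# explain() prints the logical/physical query plan; it does not compute
# any column statistics.
df.explain()

# describe() and summary() compute descriptive statistics (count, mean,
# stddev, min, max) across the numeric and string columns.
df.describe().show()
df.summary("min", "max", "mean", "stddev").show()
```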