Microsoft Fabric Analytics Engineer Associate DP-600 Quiz - LeetQuiz

You have a Fabric tenant containing a new semantic model in OneLake. You use a Fabric notebook to load the data into a Spark DataFrame. You need to evaluate the data by calculating the minimum, maximum, mean, and standard deviation for all numeric and string columns.

Solution: You run the following PySpark code:

```python
df.describe().show()
```

Does this solution meet the goal?