
Given a PySpark DataFrame named 'spark_df', which code snippet demonstrates how to perform a distributed groupby operation on a column named 'category' using the Pandas API on Spark?
A
grouped = spark_df.groupby('category').collect()
B
grouped = spark_df.toPandasAPI().groupby('category').collect()
C
grouped = spark_df.groupby('category').toPandasAPI().collect()
D
grouped = spark_df.toPandasAPI().groupby('category').toSpark().collect()
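
For reference, a minimal sketch of what a distributed groupby through the pandas API on Spark can look like in practice (assuming Spark 3.2+, where DataFrame.pandas_api() is available; the sample data and the sum() aggregation are illustrative, not part of the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Illustrative sample data standing in for 'spark_df'.
spark_df = spark.createDataFrame(
    [("a", 1), ("a", 2), ("b", 3)],
    ["category", "value"],
)

# Convert the PySpark DataFrame to a pandas-on-Spark DataFrame, then group by
# 'category'; the aggregation runs as a distributed Spark job.
psdf = spark_df.pandas_api()
grouped_sum = psdf.groupby("category").sum()
print(grouped_sum)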