You are working on a project that requires the application of a pre-trained model to a large Spark DataFrame. The model was trained using pandas and needs to be applied row-wise. How would you implement this in Spark?
A. Convert the entire Spark DataFrame to a pandas DataFrame and then apply the model.
B. Use a Scalar Pandas UDF to apply the model row-wise in Spark.
C. Use a Grouped Map Pandas UDF to apply the model group-wise in Spark.
D. Use an Iterator Pandas UDF to apply the model in chunks.
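To make option B concrete, here is a minimal sketch of how a Scalar Pandas UDF could wrap a pandas-trained model. The `ToyModel` class and the `feature` column name are placeholders invented for illustration; the Spark registration step is shown in comments because it requires a running SparkSession with `pyspark` and `pyarrow` installed.

```python
import pandas as pd

# Placeholder standing in for a pre-trained model: any object whose
# .predict() accepts an array-like and returns predictions (e.g. a
# fitted scikit-learn estimator). Here it simply doubles its input.
class ToyModel:
    def predict(self, X):
        return X * 2.0

model = ToyModel()

def predict_series(s: pd.Series) -> pd.Series:
    # Spark hands each batch of rows to the UDF as a pandas Series,
    # so the model can predict on the whole batch vectorised.
    return pd.Series(model.predict(s.to_numpy()))

# Registering this as a Scalar Pandas UDF and applying it column-wise
# would look roughly like this (assumes a SparkSession `spark` and a
# DataFrame `df` with a numeric "feature" column):
#
# from pyspark.sql.functions import pandas_udf
# from pyspark.sql.types import DoubleType
#
# predict_udf = pandas_udf(predict_series, returnType=DoubleType())
# df = df.withColumn("prediction", predict_udf(df["feature"]))
```

Because the UDF receives pandas Series batches rather than one row at a time, the model's vectorised `predict` runs per batch while Spark distributes the batches across the cluster, avoiding the driver-memory bottleneck of collecting the whole DataFrame (option A).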