When working with Spark, under what circumstances might you need to use a User-Defined Function (UDF) in a DataFrame transformation pipeline?
A. UDFs are essential for any DataFrame transformation in Spark.
B. For distributed model inference with Spark MLlib, UDFs are a necessity.
C. UDFs play a role solely in data preprocessing tasks within Spark.
D. UDFs become necessary when performing distributed model inference with libraries that were not originally designed for distributed computing, such as scikit-learn.
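Option D describes the typical case: a model trained with a single-node library such as scikit-learn has no native Spark integration, so its predict call is wrapped in a UDF (commonly a pandas UDF) and applied to each partition of the DataFrame. The sketch below is a minimal illustration of that pattern, assuming Spark 3.x with PyArrow available; the DataFrame, column names (`f1`, `f2`), and toy training data are hypothetical and not part of the question.

```python
# Minimal sketch: distributed inference with a scikit-learn model via a pandas UDF.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.appName("sklearn-inference").getOrCreate()

# Train a scikit-learn model on the driver (single-machine, not distributed).
model = LogisticRegression().fit([[0.0, 1.0], [1.0, 0.0]], [0, 1])

# Broadcast the fitted model so every executor receives a copy once.
broadcast_model = spark.sparkContext.broadcast(model)

@pandas_udf(DoubleType())
def predict_udf(f1: pd.Series, f2: pd.Series) -> pd.Series:
    # Runs on executors: score one Arrow batch of rows with the broadcast model.
    features = pd.concat([f1, f2], axis=1).to_numpy()
    return pd.Series(broadcast_model.value.predict(features).astype(float))

# Hypothetical input DataFrame with two feature columns.
features_df = spark.createDataFrame([(0.2, 0.8), (0.9, 0.1)], ["f1", "f2"])
predictions = features_df.withColumn("prediction", predict_udf("f1", "f2"))
predictions.show()
```

A pandas UDF is preferred over a plain row-at-a-time UDF here because it hands the model a whole batch of rows at once, which suits scikit-learn's vectorized predict and avoids per-row serialization overhead.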