
Answer-first summary for fast verification
Answer: df.filter(col('value').isNull()).count()
The correct answer is A: `isNull()` filters to the rows where the 'value' column is NULL, and `count()` tallies them. Comparing with `== NULL` (options B and D) is invalid: `NULL` is not defined in Python, and even an equality comparison against `None` evaluates to NULL rather than True for NULL values, so it matches no rows. Option C uses `isNotNull()`, which counts the non-NULL rows instead.
Author: LeetQuiz Editorial Team
Consider a DataFrame df with a column 'value' that contains both NULL and non-NULL values. You need to count the number of rows where 'value' is NULL. Which Spark function would you use to achieve this?
A
df.filter(col('value').isNull()).count()
B
df.filter(col('value') == NULL).count()
C
df.filter(col('value').isNotNull()).count()
D
df.filter(col('value') != NULL).count()