
You are working with a large transactional dataset stored as a Delta table named transactions in your Databricks workspace. As part of a data quality assessment, you need to determine how many records have missing values in the member_id column. Which of the following Spark SQL queries will accurately return the count of rows where member_id is NULL?
A. SELECT count(member_id) FROM transactions;
B. SELECT sum(CASE WHEN member_id IS NULL THEN 1 ELSE 0 END) FROM transactions;
C. SELECT count_if(member_id IS NULL) FROM transactions;
D. SELECT count(*) - count(member_id) FROM transactions;
E. SELECT count_null(member_id) FROM transactions;