Explanation:

Correct answer: D.

Reasoning:

  • Goal: calculate total daily revenue (the sum of amount_billed per billing_date) and the number of unique invoices per day (a distinct count of billing_id).
  • Option A wrongly uses sum("billing_id") for the invoice count; summing invoice IDs is meaningless.
  • Option B uses count("billing_id"), which counts rows; that is incorrect whenever the same invoice ID appears on more than one row. The question explicitly asks for unique invoices, so a distinct count is needed (see the sketch after this list).
  • Option C counts distinct patient_id, which gives the number of unique patients per day, not unique invoices.
  • Option D correctly sums amount_billed and takes a distinct count of billing_id (in PySpark, F.countDistinct("billing_id")) to get the unique invoice count.

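To see why a plain row count can overstate the invoice total, here is a minimal sketch; the duplicated row for billing_id 401 is hypothetical, added only to make the difference visible:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Two rows share billing_id 401, e.g. the same invoice ingested twice.
    dup_df = spark.createDataFrame(
        [
            ("2024-03-01", 401, 1500),
            ("2024-03-01", 401, 1500),  # hypothetical duplicate row
            ("2024-03-01", 403, 6500),
        ],
        ["billing_date", "billing_id", "amount_billed"],
    )

    dup_df.groupBy("billing_date").agg(
        F.count("billing_id").alias("row_count"),         # 3: counts every row
        F.countDistinct("billing_id").alias("invoices"),  # 2: 401 and 403
    ).show()
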
Example expected output from the sample data:

  • 2024-03-01: total_revenue = 1500 + 6500 = 8000, total_invoices = 2 (billing_id 401 and 403)
  • 2024-03-02: total_revenue = 3000, total_invoices = 1
  • 2024-03-03: total_revenue = 500, total_invoices = 1

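As a quick check, that output can be reproduced on data matching the sample; the billing_ids for the last two days (402 and 404) are hypothetical, since the question only names 401 and 403:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    billing_df = spark.createDataFrame(
        [
            ("2024-03-01", 401, 1500),
            ("2024-03-01", 403, 6500),
            ("2024-03-02", 402, 3000),  # hypothetical billing_id
            ("2024-03-03", 404, 500),   # hypothetical billing_id
        ],
        ["billing_date", "billing_id", "amount_billed"],
    )

    billing_df.groupBy("billing_date").agg(
        F.sum("amount_billed").alias("total_revenue"),
        F.countDistinct("billing_id").alias("total_invoices"),
    ).orderBy("billing_date").show()
    # 2024-03-01: total_revenue=8000, total_invoices=2
    # 2024-03-02: total_revenue=3000, total_invoices=1
    # 2024-03-03: total_revenue=500,  total_invoices=1
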
Recommended PySpark code:

    from pyspark.sql import functions as F

    daily_revenue_df = billing_df.groupBy("billing_date").agg(
        F.sum("amount_billed").alias("total_revenue"),
        F.countDistinct("billing_id").alias("total_invoices"),
    )
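Since this section covers both Spark SQL and Python, the same aggregation can also be written in SQL; this sketch assumes a SparkSession named spark and registers billing_df as a temporary view:

    # Expose the DataFrame to Spark SQL under the view name "billing".
    billing_df.createOrReplaceTempView("billing")

    daily_revenue_df = spark.sql("""
        SELECT billing_date,
               SUM(amount_billed)         AS total_revenue,
               COUNT(DISTINCT billing_id) AS total_invoices
        FROM billing
        GROUP BY billing_date
    """)

COUNT(DISTINCT billing_id) is the SQL counterpart of F.countDistinct, so both versions produce the same result.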
