
Each analytics team in your organization is running BigQuery jobs in their own projects. You want to enable each team to monitor slot usage within their projects. What should you do?
Explanation:
The correct answer is B: Create a Cloud Monitoring dashboard based on the BigQuery metric slots/allocated_for_project.
Why B is correct:
BigQuery provides a native metric, slots/allocated_for_project (and the related slots/allocated_for_project_and_job_type), that directly reports slot allocation per project. This metric is available in Cloud Monitoring and can be filtered by project. Each analytics team can be granted access to a project-specific dashboard built on this metric to monitor its own slot usage in near real time, with no custom logging or custom metrics required. It is efficient, accurate, and purpose-built for this use case.
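As a concrete illustration, the same metric that backs the dashboard can also be read programmatically per project. The sketch below is a minimal example, assuming the google-cloud-monitoring Python client is installed, the caller has roles/monitoring.viewer on the team's project, and "analytics-team-project" is a hypothetical project ID.

```python
import time
from google.cloud import monitoring_v3

PROJECT_ID = "analytics-team-project"  # hypothetical project ID

client = monitoring_v3.MetricServiceClient()

# Look at the last hour of data.
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 3600},
    }
)

# Read the per-project slot allocation metric that the dashboard is built on.
results = client.list_time_series(
    request={
        "name": f"projects/{PROJECT_ID}",
        "filter": 'metric.type = "bigquery.googleapis.com/slots/allocated_for_project"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    for point in series.points:
        # Gauge value: number of slots allocated to this project at that time.
        print(point.interval.end_time, point.value.double_value)
```

In practice the dashboard itself would be created in the Cloud Console (or via the dashboards API) and scoped to each team's project; the snippet only shows that the metric is queryable per project without any custom instrumentation.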
Why the others are incorrect:
A. query/scanned_bytes: Measures data scanned, not slot usage. Slot consumption depends on query complexity and concurrency, not just bytes scanned. Incorrect.
C. Log export per project + custom metric from totalSlotMs: totalSlotMs is a per-job statistic, not a real-time slot allocation. This approach requires complex log parsing and custom metrics, and it suffers aggregation delays. Overly complicated and not real time. Not recommended.
D. Aggregated log export at the organization level: Same issues as C, but now centralized, which makes it harder to isolate per-project usage. Still relies on delayed logs and custom metrics. Not suitable.