
Each analytics team in your organization is running BigQuery jobs in their own projects. You want to enable each team to monitor slot usage within their projects. What should you do?
A
Create a Stackdriver Monitoring dashboard based on the BigQuery metric query/scanned_bytes
B
Create a Stackdriver Monitoring dashboard based on the BigQuery metric slots/allocated_for_project
C
Create a log export for each project, capture the BigQuery job execution logs, create a custom metric based on the totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric
D
Create an aggregated log export at the organization level, capture the BigQuery job execution logs, create a custom metric based on the totalSlotMs, and create a Stackdriver Monitoring dashboard based on the custom metric
Explanation:
The correct answer is B: create a Stackdriver Monitoring (now Cloud Monitoring) dashboard based on the BigQuery metric slots/allocated_for_project.
Why B is correct:
BigQuery provides a native metric called slots/allocated_for_project (and slots/allocated_for_project_and_job_type) that directly reports slot allocations per project. This metric is available in Cloud Monitoring and can be filtered by project. Each analytics team can be granted access to a project-specific dashboard using this metric to monitor their own slot usage in real time — no custom logging or metrics required. It is efficient, accurate, and purpose-built for this use case.
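As a sketch, a dashboard chart for this metric could be driven by a Monitoring Query Language (MQL) query along the following lines. The exact monitored-resource type and value column name are assumptions to verify against the metric's documentation for your environment:

```
fetch global
| metric 'bigquery.googleapis.com/slots/allocated_for_project'
| group_by 1m, [allocated_mean: mean(value.allocated)]
| every 1m
```

Because the metric is already scoped to a project, each team's dashboard shows only that project's slot allocation with no extra filtering logic.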
Why the others are incorrect:
A. query/scanned_bytes — measures data scanned, not slot usage. Slot consumption depends on query complexity and concurrency, not just bytes scanned. Incorrect.
C. Log export per project + custom metric from totalSlotMs — totalSlotMs is reported per job after it completes, not as real-time slot allocation. It requires log parsing, custom metric creation, and suffers aggregation delays. Overly complicated and not real-time. Not recommended.
D. Aggregated log export at the organization level — same issues as C, but now centralized, making it harder to isolate per-project usage. Still relies on delayed logs and custom metrics. Not suitable.
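To see why totalSlotMs (as in options C and D) is a poor stand-in for live slot monitoring, note that it is a per-job aggregate: at best you can derive an average slot count after the job finishes. A minimal sketch, using hypothetical numbers in place of real audit-log values:

```python
# Hypothetical figures for illustration: totalSlotMs appears in a BigQuery
# job's audit log entry, so it is only available after the job completes.
total_slot_ms = 1_800_000  # slot-milliseconds the job consumed (assumed value)
duration_ms = 60_000       # job wall-clock duration in ms (assumed value)

# Average slots held over the job's lifetime -- a per-job, after-the-fact
# number, not the live allocation that slots/allocated_for_project reports.
avg_slots = total_slot_ms / duration_ms
print(avg_slots)  # 30.0
```

This average hides concurrency and intra-job variation, which is why the native slot-allocation metric is the purpose-built choice.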