
**Answer:** Clusters (Option A)
## Detailed Explanation

To monitor and log compute-related changes in Azure Databricks, the **clusters** service is the appropriate choice.

### Why Clusters (Option A) Is Correct

- **Clusters are the primary compute resource** in Azure Databricks; they are the engines that execute data processing workloads.
- **Cluster lifecycle events** such as creation, termination, resizing, configuration edits, and autoscaling are exactly the compute-related changes that need to be logged.
- **Azure Databricks diagnostic logs** include a dedicated clusters category that records compute resource modifications.
- **Compute monitoring** requires visibility into when clusters start, stop, scale up or down, or change configuration, all of which are captured by cluster service logs.

### Why the Other Options Are Not Suitable

**Workspace (Option B):**
- Workspace logs primarily track user activities such as notebook creation, folder management, and workspace object modifications.
- While workspace-level audit logs exist, they do not focus on compute resource changes.
- Workspace activity reflects development and collaboration rather than compute infrastructure changes.

**DBFS (Option C):**
- The Databricks File System (DBFS) is a storage layer, not a compute service.
- DBFS logs track file operations, mount point changes, and other storage-related activity.
- This is unrelated to compute resource monitoring.

**SSH (Option D):**
- SSH logging covers secure shell connections and access control.
- Although SSH can be used to access cluster nodes, it does not record compute resource changes themselves.
- SSH logs focus on authentication and connection events, not infrastructure modifications.

**Jobs (Option E):**
- The Jobs service handles workflow execution and scheduling.
- Job logs record when jobs run, succeed, or fail, but not the underlying compute resource changes.
- While jobs execute on clusters, the compute changes themselves are logged at the cluster level.

### Best Practice Recommendation

For comprehensive compute monitoring in Azure Databricks, enable diagnostic logging for the clusters category to capture all compute lifecycle events, configuration changes, and resource modifications. This provides the audit trail needed for troubleshooting, cost optimization, and compliance requirements related to compute resource usage.
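As a concrete illustration, the sketch below filters a batch of diagnostic log records down to the cluster (compute) events discussed above. It assumes records shaped like Azure diagnostic log entries, with a `category` field (`clusters`, `jobs`, `dbfs`, and so on) and an `operationName`; the sample records and the exact operation names used here are invented for the example.

```python
# Sketch: extract compute-related (cluster) events from diagnostic log records.
# Assumed schema: each record is a dict with "category", "operationName", and
# "identity" keys, mirroring Azure diagnostic log entries. The sample data
# and operation names below are illustrative, not real exported logs.

def cluster_events(records):
    """Return only records from the 'clusters' diagnostic log category."""
    return [r for r in records if r.get("category") == "clusters"]

sample_logs = [
    {"category": "clusters",
     "operationName": "Microsoft.Databricks/clusters/create",
     "identity": "alice@example.com"},
    {"category": "jobs",
     "operationName": "Microsoft.Databricks/jobs/runStart",
     "identity": "scheduler"},
    {"category": "clusters",
     "operationName": "Microsoft.Databricks/clusters/resizeCluster",
     "identity": "autoscaler"},
    {"category": "dbfs",
     "operationName": "Microsoft.Databricks/dbfs/mount",
     "identity": "bob@example.com"},
]

for event in cluster_events(sample_logs):
    print(event["operationName"], "by", event["identity"])
```

In practice these records would arrive in whatever destination the diagnostic settings route them to (a storage account or Log Analytics workspace); the `jobs` and `dbfs` records alongside them show why only the `clusters` category captures the compute changes themselves.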
Author: LeetQuiz Editorial Team