
## Answer

Three distinct jobs named "Ingest new data" will be created, but none of them will be executed automatically. (Option A)
### Explanation

**Option A is correct.**

1. **Job creation vs. execution**: The `POST /api/2.0/jobs/create` endpoint is strictly for registering a job definition. It returns a `job_id` and stores the configuration, but it does **not** trigger a run. To start a job, one must call the `/jobs/run-now` endpoint or configure a trigger.
2. **No uniqueness constraint**: Databricks does not require job names to be unique. Submitting the same payload three times results in three separate job definitions, each with a unique `job_id` but the same display name.
3. **Default scheduling**: Since no `schedule` field was provided in the JSON payload, the jobs are created without a trigger. According to the Databricks documentation, if the schedule is omitted, the job only runs when manually triggered via the UI or the API.

### Why the other options are incorrect

* **Option B**: The Jobs API does not perform deduplication based on the `name` field.
* **Option C**: A schedule (e.g., a cron expression) must be explicitly defined in the payload for automatic runs to occur.
* **Options D & E**: Both assume immediate execution. Creating a job is a metadata operation; it does not launch cluster resources or execute code until a run is specifically initiated.
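For contrast with Option C: automatic runs would require an explicit `schedule` block in the create payload. A sketch of what that block looks like in the Jobs 2.0 API (the cron expression and timezone here are illustrative values, not part of the question's payload):

```json
{
  "name": "Ingest new data",
  "schedule": {
    "quartz_cron_expression": "0 0 6 * * ?",
    "timezone_id": "UTC",
    "pause_status": "UNPAUSED"
  }
}
```

Because no such block was present in the submitted payload, none of the three created jobs will run on its own.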
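The semantics above can be illustrated with a minimal in-memory sketch of the create/run-now behavior. This is a simulation for reasoning about the answer, not the real Databricks service; the `FakeJobsService` class and its methods are invented for illustration:

```python
import itertools

class FakeJobsService:
    """Toy model of Jobs API 2.0 create/run-now semantics."""

    def __init__(self):
        self._ids = itertools.count(1)
        self.jobs = {}   # job_id -> job settings
        self.runs = []   # runs appear ONLY via run_now, never via create

    def create(self, settings):
        # No name-uniqueness check: every call registers a new job.
        job_id = next(self._ids)
        self.jobs[job_id] = settings
        # No "schedule" key in settings => no trigger; nothing runs here.
        return {"job_id": job_id}

    def run_now(self, job_id):
        # Execution only happens when a run is explicitly requested.
        self.runs.append(job_id)
        return {"run_id": len(self.runs)}

payload = {
    "name": "Ingest new data",
    "notebook_task": {"notebook_path": "/Prod/ingest.py"},
    # note: no "schedule" key, so the job has no automatic trigger
}

svc = FakeJobsService()
ids = [svc.create(payload)["job_id"] for _ in range(3)]
print(ids)             # [1, 2, 3] -- three distinct job_ids, same name
print(len(svc.runs))   # 0 -- creation alone never executes anything
```

Three calls to `create` with an identical payload yield three distinct `job_id` values and zero runs, which is exactly the scenario in the question.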
Author: LeetQuiz Editorial Team
A data engineer submits a JSON payload to the Databricks Jobs API (version 2.0) `/jobs/create` endpoint three times in succession. The payload contains the following specifications:
Assuming all resources are valid and available, what will be the result of these three API calls?
**A.** Three distinct jobs named "Ingest new data" will be created, but none of them will be executed automatically.

**B.** A single job named "Ingest new data" will be created without execution because the API deduplicates by name.

**C.** Three separate jobs named "Ingest new data" will be created, and each will immediately trigger a single daily run.

**D.** The "/Prod/ingest.py" notebook will be executed three times on the specified existing cluster.

**E.** Three jobs will be created, and the notebook will be executed three times on independent, newly created clusters matching the provided configuration.