
A new data engineer attempts to create a job using the Databricks REST API by posting the following JSON to the 2.0/jobs/create endpoint:
{
  "name": "new_job",
  "existing_cluster_id": "1198-132537-dht25rtr",
  "spark_python_task": {
    "python_file": "dbfs:/first_method.py"
  }
}
Another data engineer from the same team tries to create a job using the following JSON:
{
  "name": "new_job",
  "existing_cluster_id": "1198-132537-dht25rtr",
  "spark_python_task": {
    "python_file": "dbfs:/example.py"
  }
}
Assuming the first job is created successfully, what happens when the second data engineer tries to create their job?
A
The job will be created successfully, with both jobs named new_job.
B
The job will be created successfully, with the second job named new_job_1.
C
The job will not be created as a job with the same name already exists.
D
The job will be created successfully by overwriting the previous job, as two jobs cannot share a name.
E
The task in the second job will be appended to the existing job.
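For reference, a request body like the ones above would typically be built and posted with Python's requests library. A minimal sketch follows; the workspace URL and token are placeholders, not real values, and the commented-out POST assumes a standard bearer-token setup:

```python
import json

def build_create_job_payload(name, cluster_id, python_file):
    """Build the JSON body shown in the question for the 2.0/jobs/create endpoint."""
    return {
        "name": name,
        "existing_cluster_id": cluster_id,
        "spark_python_task": {"python_file": python_file},
    }

payload = build_create_job_payload(
    "new_job", "1198-132537-dht25rtr", "dbfs:/example.py"
)
print(json.dumps(payload, indent=2))

# Submitting it would look roughly like this (requires a real workspace
# URL and personal access token -- both are placeholders here):
# import requests
# resp = requests.post(
#     "https://<workspace>.cloud.databricks.com/api/2.0/jobs/create",
#     headers={"Authorization": "Bearer <token>"},
#     json=payload,
# )
# resp.json()  # on success, contains the new job's "job_id"
```

Note that each POST to this endpoint creates a new job and returns a new job_id; the behavior when two requests reuse the same name is exactly what the question is testing.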