
To securely automate a nightly data pipeline in which batch files containing non-public information land in Google Cloud Storage, are processed by a Spark Scala job on a Google Cloud Dataproc cluster, and have their results written to Google BigQuery, what is the most secure approach?
A. Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery.
B. Grant the Project Owner role to a service account and run the job using that account.
C. Restrict access to the Google Cloud Storage bucket so that only you can see the files.
D. Use a service account that has permission to read the batch files and write to BigQuery.
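To illustrate why option D is the least-privilege choice, here is a minimal sketch of what such a nightly job might look like, assuming the Dataproc cluster was created with a dedicated service account holding only the GCS read and BigQuery write roles the pipeline needs. The BigQuery write uses the open-source spark-bigquery connector; all bucket, dataset, and table names below are hypothetical.

```scala
import org.apache.spark.sql.SparkSession

// Sketch of the nightly Spark job. The code carries no credentials:
// the Dataproc cluster's attached service account authorizes both the
// GCS read and the BigQuery write (option D).
object NightlyBatchJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("nightly-batch-to-bigquery")
      .getOrCreate()

    // Read the nightly batch files; the service account needs only
    // roles/storage.objectViewer on this bucket (hypothetical name).
    val batch = spark.read
      .option("header", "true")
      .csv("gs://example-nightly-batches/")

    // Write results to BigQuery; requires roles/bigquery.dataEditor on
    // the target dataset plus a staging bucket used by the connector.
    // Bucket, dataset, and table names here are hypothetical.
    batch.write
      .format("bigquery")
      .option("temporaryGcsBucket", "example-staging-bucket")
      .option("table", "example_dataset.nightly_results")
      .mode("append")
      .save()

    spark.stop()
  }
}
```

Because the credentials come from the cluster's attached service account rather than an interactive user account (A), an over-broad Owner grant (B), or a bucket locked to a single human (C), the job runs unattended while holding no more access than the pipeline requires.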