
As a Google Cloud Professional Machine Learning Engineer, you are tasked with building a real-time prediction engine that streams files to Google Cloud. Some of these files may contain Personally Identifiable Information (PII). To ensure compliance with data privacy regulations, you plan to use the Cloud Data Loss Prevention (DLP) API to scan for PII and protect sensitive data. How should you structure your data pipeline to ensure that PII is not accessible by unauthorized individuals?
A. Stream all files to Google Cloud, and then write the data to BigQuery. Periodically conduct a bulk scan of the table using the DLP API.
B. Stream all files to Google Cloud, and write batches of the data to BigQuery. While the data is being written to BigQuery, conduct a bulk scan of the data using the DLP API.
C. Create two buckets of data: Sensitive and Non-sensitive. Write all data to the Non-sensitive bucket. Periodically conduct a bulk scan of that bucket using the DLP API, and move the sensitive data to the Sensitive bucket.
D. Create three buckets of data: Quarantine, Sensitive, and Non-sensitive. Write all data to the Quarantine bucket. Periodically conduct a bulk scan of that bucket using the DLP API, and move the data to either the Sensitive or Non-sensitive bucket.
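For reference, below is a minimal sketch of the quarantine-bucket pattern described in option D, using the google-cloud-dlp and google-cloud-storage client libraries. The project ID, bucket names, and infoTypes are illustrative assumptions, and a production pipeline would more likely use DLP storage inspection jobs triggered by Cloud Functions or Cloud Scheduler rather than a polling script.

```python
# Sketch of the quarantine-bucket pattern: scan objects in a quarantine
# bucket with the DLP API, then route each object to a Sensitive or
# Non-sensitive bucket. Names and infoTypes below are assumptions.
from google.cloud import dlp_v2, storage

PROJECT_ID = "my-project"                   # assumed project ID
QUARANTINE_BUCKET = "quarantine-bucket"     # all incoming files land here
SENSITIVE_BUCKET = "sensitive-bucket"       # access restricted via IAM
NON_SENSITIVE_BUCKET = "non-sensitive-bucket"

INFO_TYPES = [
    {"name": "EMAIL_ADDRESS"},
    {"name": "PHONE_NUMBER"},
    {"name": "US_SOCIAL_SECURITY_NUMBER"},
]

dlp_client = dlp_v2.DlpServiceClient()
storage_client = storage.Client(project=PROJECT_ID)


def scan_and_route_quarantine_bucket() -> None:
    """Scan every object in the quarantine bucket and move it to the
    Sensitive or Non-sensitive bucket based on DLP findings."""
    parent = f"projects/{PROJECT_ID}/locations/global"
    quarantine = storage_client.bucket(QUARANTINE_BUCKET)
    sensitive = storage_client.bucket(SENSITIVE_BUCKET)
    non_sensitive = storage_client.bucket(NON_SENSITIVE_BUCKET)

    for blob in storage_client.list_blobs(QUARANTINE_BUCKET):
        # Inline inspection works for small text objects; large files
        # would use a DLP inspection job over Cloud Storage instead.
        content = blob.download_as_bytes().decode("utf-8", errors="ignore")
        response = dlp_client.inspect_content(
            request={
                "parent": parent,
                "inspect_config": {"info_types": INFO_TYPES},
                "item": {"value": content},
            }
        )
        has_pii = bool(response.result.findings)

        # Route the object to the appropriate bucket, then clear quarantine.
        destination = sensitive if has_pii else non_sensitive
        quarantine.copy_blob(blob, destination, blob.name)
        blob.delete()


if __name__ == "__main__":
    # In practice this would run on a schedule rather than as a one-off script.
    scan_and_route_quarantine_bucket()
```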