
Your organization's call center is expanding its operations globally and requires a scalable, secure solution to analyze customer sentiment from more than a million calls per day. The data is stored in Cloud Storage under strict compliance requirements: data must not leave its origin region, to adhere to local data sovereignty laws, and Personally Identifiable Information (PII) must not be stored or analyzed, to protect customer privacy. In addition, the data science team uses a third-party visualization tool that requires a SQL ANSI-2011 compliant interface for data access. Given these constraints, how should you design the data pipeline for processing and analytics so that it meets all requirements while remaining cost-effective and scalable? Choose the two best options.
A. Use Pub/Sub for real-time data ingestion and Datastore for storing processed data.
B. Implement Dataflow for data processing to handle the volume and ensure data does not leave its origin region, and use BigQuery for analytics due to its SQL ANSI-2011 compliance and powerful analytics capabilities.
C. Deploy Cloud Functions for processing data in response to events and use Cloud SQL for analytics, ensuring SQL ANSI-2011 compliance.
D. Utilize Dataflow for scalable data processing within the origin region and Cloud SQL for analytics, leveraging its SQL ANSI-2011 compliance.
E. Combine Dataflow for processing data within the origin region and BigQuery for analytics, ensuring SQL ANSI-2011 compliance and scalability.