Google Professional Data Engineer

An upstream process writes data to Google Cloud Storage, and that data is subsequently processed by an Apache Spark job running on Cloud Dataproc. The Spark jobs run in the us-central1 region, but the data may reside in any region within the United States. To prepare for a catastrophic failure of a single region, you need a recovery process that guarantees a Recovery Point Objective (RPO) of no more than 15 minutes, meaning you can tolerate at most 15 minutes of data loss. You also want to keep latency low when accessing the data. What strategy should you employ?
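For illustration (not the official answer key, which is not included in this excerpt), one approach consistent with a 15-minute RPO for Cloud Storage data is a dual-region bucket with turbo replication, which Google documents as targeting a 15-minute replication window between the two regions. A minimal sketch, assuming a hypothetical bucket name and an example region pairing that keeps us-central1 close to the Dataproc jobs:

```shell
# Create a dual-region bucket spanning us-central1 and us-east1
# (bucket name and region pair are illustrative assumptions).
# --rpo=ASYNC_TURBO enables turbo replication, which targets
# replication of newly written objects within 15 minutes.
gcloud storage buckets create gs://example-spark-input \
    --location=us \
    --placement=us-central1,us-east1 \
    --rpo=ASYNC_TURBO
```

Keeping one of the two placement regions co-located with the Dataproc cluster (us-central1 here) addresses the low-latency requirement, while the second region provides the recovery copy if the primary region fails.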



