
## Answer (for fast verification)

Create a new Pub/Sub subscription for the new pipeline and run both pipelines in parallel until the old one processes all existing messages.
## Explanation

When you need to make code changes that make a new Dataflow pipeline incompatible with the current version while ensuring no data loss, the recommended approach is:

**Option B is correct** because:

- Creating a new Pub/Sub subscription allows both pipelines to run independently
- Running pipelines in parallel ensures continuous data processing
- The old pipeline can continue processing existing messages without interruption
- Once the old pipeline processes all existing messages, it can be safely stopped
- This approach maintains data integrity and prevents data loss

**Why the other options are incorrect:**

- **A**: Dataflow cannot automatically handle state migration for incompatible pipeline versions
- **C**: Pausing and resuming doesn't work for incompatible code changes
- **D**: Stopping and restarting would cause data loss in streaming pipelines
- **E**: Templates help with deployment but don't solve the compatibility issue
- **F**: State export/import is complex and not recommended for incompatible changes

This approach is part of Google Cloud's recommended practice for zero-downtime pipeline updates with incompatible changes.
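The parallel-pipeline cutover described above can be sketched with `gcloud` commands. This is a minimal sketch: the subscription names, topic name, job ID, and region below are placeholders, and the exact launch command for the new pipeline depends on how it is packaged (e.g. a Beam runner invocation or a Flex Template):

```shell
# 1. Create a second subscription on the same topic so the new
#    pipeline receives its own copy of every newly published message.
gcloud pubsub subscriptions create new-pipeline-sub \
    --topic=my-topic

# 2. Launch the updated (incompatible) pipeline reading from
#    new-pipeline-sub while the old pipeline keeps running.
#    (Launch command omitted; it depends on the pipeline packaging.)

# 3. Drain the old job: it stops pulling new work but finishes
#    processing everything already in flight, so no data is lost.
gcloud dataflow jobs drain OLD_JOB_ID --region=us-central1

# 4. Once the drain completes, remove the old subscription.
gcloud pubsub subscriptions delete old-pipeline-sub
```

Note that while both pipelines run, each subscription receives every newly published message, so the downstream sink may see duplicates during the overlap window; an idempotent sink or deduplication step handles this.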
Author: LeetQuiz.
NO.37 You have a Google Cloud Dataflow streaming pipeline running with a Google Cloud Pub/Sub subscription as the source. You need to make an update to the code that will make the new Cloud Dataflow pipeline incompatible with the current version. You do not want to lose any data when making this update. What should you do?
A. Update the existing pipeline with the new code and let Dataflow handle the state migration automatically.
B. Create a new Pub/Sub subscription for the new pipeline and run both pipelines in parallel until the old one processes all existing messages.
C. Pause the existing pipeline, update the code, and then resume the pipeline.
D. Stop the existing pipeline, update the code, and then restart the pipeline.
E. Use Cloud Dataflow's template feature to create a new pipeline version and deploy it alongside the existing one.
F. Export the pipeline state to Cloud Storage, update the code, and then import the state back.