
Answer-first summary for fast verification
Answer: (C) Increase the maximum number of workers and reduce worker concurrency; (D) Increase the memory available to the Airflow workers.
If an Airflow worker pod is evicted, every task instance running on that pod is interrupted and later marked as failed by Airflow. Most worker pod evictions are caused by out-of-memory conditions on the workers. To resolve this: (C) Increase the maximum number of workers and reduce worker concurrency, so each worker runs fewer tasks at once and each task gets a larger share of the worker's memory. Additionally, (D) Increase the memory available to the Airflow workers, which directly addresses the high memory usage and prevents out-of-memory evictions.
Author: LeetQuiz Editorial Team
You have recently deployed multiple data processing jobs within your Cloud Composer 2 environment, which uses Apache Airflow for workflow management. However, upon examining the Apache Airflow monitoring dashboard, you notice that some tasks are failing. The dashboard indicates a rise in memory usage by the worker nodes, and there have been instances of worker pod evictions due to insufficient memory. To rectify these issues and ensure that your data processing jobs run smoothly, what actions should you take? (Choose two.)
A
Increase the directed acyclic graph (DAG) file parsing interval.
B
Increase the Cloud Composer 2 environment size from medium to large.
C
Increase the maximum number of workers and reduce worker concurrency.
D
Increase the memory available to the Airflow workers.
E
Increase the memory available to the Airflow triggerer.
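Both correct fixes (C and D) can be applied to a Cloud Composer 2 environment with `gcloud composer environments update`. The sketch below is illustrative, not definitive: the environment name `my-composer-env`, the location `us-central1`, and the numeric values are placeholder assumptions you should replace with your own; worker concurrency is an Airflow `[celery]` setting passed as a config override.

```shell
# (D) Give each Airflow worker more memory (value in GB), and
# (C) raise the worker autoscaling ceiling.
# Environment name, location, and sizes are hypothetical examples.
gcloud composer environments update my-composer-env \
    --location us-central1 \
    --worker-memory 8 \
    --max-workers 6

# (C) Reduce per-worker concurrency so each worker runs fewer
# simultaneous tasks, leaving more memory per task. This overrides
# the Airflow [celery] worker_concurrency setting.
gcloud composer environments update my-composer-env \
    --location us-central1 \
    --update-airflow-configs celery-worker_concurrency=6
```

Note that each `update` call triggers an environment update operation, so changes are best batched where the flags allow; monitor the worker memory graphs in the Composer monitoring dashboard afterwards to confirm evictions stop.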