You have recently deployed multiple data processing jobs in your Cloud Composer 2 environment, which uses Apache Airflow for workflow management. While examining the Apache Airflow monitoring dashboard, you notice that some tasks are failing. The dashboard shows increased memory usage on the worker nodes, and some worker pods have been evicted because of insufficient memory. To resolve these issues and ensure that your data processing jobs run smoothly, what actions should you take? (Choose two.)
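For context on the scenario (not the exam's expected answer choices), one common way to reduce worker memory pressure in an Airflow 2.x environment is to limit how many memory-heavy tasks run concurrently, using a DAG-level task cap and a task pool. The sketch below assumes Airflow 2.2+ (as shipped with Cloud Composer 2); the DAG id, callable, and pool name are hypothetical, and the `memory_heavy` pool would need to be created in the Airflow UI or CLI first.

```python
# Minimal sketch: capping concurrent memory-heavy tasks in an Airflow 2.x DAG.
# Assumes Airflow 2.2+ (max_active_tasks); names below are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def process_partition(partition: int) -> None:
    """Placeholder for a memory-intensive data processing step."""
    print(f"Processing partition {partition}")


with DAG(
    dag_id="data_processing_example",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    max_active_tasks=4,                 # cap tasks running at once for this DAG
) as dag:
    for i in range(16):
        PythonOperator(
            task_id=f"process_partition_{i}",
            python_callable=process_partition,
            op_kwargs={"partition": i},
            pool="memory_heavy",        # hypothetical pool that throttles heavy tasks
        )
```

Throttling concurrency only reduces peak memory per worker; if individual tasks still exceed what a worker provides, the worker resources themselves (CPU/memory per Airflow worker in the Composer 2 environment configuration) would also need to be increased.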