
You are deploying a microservices application that broadcasts livestreams on Google Kubernetes Engine (GKE). The application must scale automatically during unpredictable traffic spikes, remain highly available, and be resilient to hardware failures. How should you configure the deployment? (Select two.)
A. Distribute your workload evenly using a multi-zonal node pool.
B. Distribute your workload evenly using multiple zonal node pools.
C. Use the cluster autoscaler to resize the number of nodes in the node pool, and use a Horizontal Pod Autoscaler to scale the workload.
D. Create a Compute Engine managed instance group containing the cluster nodes, and configure autoscaling rules for the managed instance group.
E. Create alerting policies in Cloud Monitoring based on GKE CPU and memory utilization, and have an on-duty engineer run a scaling script when usage exceeds predefined thresholds.
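The automatic scaling described in option C can be sketched as a HorizontalPodAutoscaler manifest. This is a minimal illustration only: the Deployment name `livestream-gateway`, the replica bounds, and the 70% CPU target are hypothetical values, not taken from the question.

```yaml
# Illustrative HorizontalPodAutoscaler (autoscaling/v2 API).
# The cluster autoscaler handles node-level scaling separately;
# this object scales Pods of one microservice on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: livestream-gateway-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: livestream-gateway   # hypothetical Deployment name
  minReplicas: 3
  maxReplicas: 50
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

For the node side, a multi-zonal node pool (option A) with the cluster autoscaler enabled can be created with `gcloud container node-pools create` using the `--enable-autoscaling`, `--min-nodes`, `--max-nodes`, and `--node-locations` flags; spreading nodes across zones is what provides resilience to zonal hardware failures.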