You have containerized a legacy application that stores its configuration on an NFS share. You need to deploy this application to Google Kubernetes Engine (GKE) and ensure the application does not serve traffic until the configuration is successfully retrieved. What is the recommended approach?
Explanation:
The requirement is that the application must not serve traffic until its configuration has been retrieved from the NFS share. Option B is correct: creating a PersistentVolumeClaim (PVC) bound to an NFS-backed PersistentVolume (PV) lets the pod mount the NFS share as a volume. Kubernetes guarantees that the volume is mounted before the container starts, so the ENTRYPOINT script can read the configuration files from the mounted path before the application begins serving. This relies on Kubernetes' native volume handling rather than custom tooling.

The other options fall short: option A uses gsutil, which retrieves objects from Cloud Storage (GCS), not an NFS share; option C bakes a static copy of the configuration into the image, which is unsuitable for configuration that changes; and option D depends on node-level setup, which is less reliable and not Kubernetes-native.
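For illustration, a minimal sketch of what the manifests for this approach might look like. The NFS server address, export path, image name, and mount path below are all hypothetical placeholders, not values from the question:

```yaml
# Hypothetical NFS-backed PersistentVolume (server and path are placeholders).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadOnlyMany
  nfs:
    server: nfs-server.example.com    # assumption: your NFS server address
    path: /exports/app-config         # assumption: your exported config directory
---
# PVC that binds to a pre-provisioned PV like the one above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-config-pvc
spec:
  accessModes:
    - ReadOnlyMany
  storageClassName: ""                # empty string: bind to a statically provisioned PV
  resources:
    requests:
      storage: 1Gi
---
# Pod mounting the claim; the kubelet mounts the volume before the
# container's ENTRYPOINT runs, so the configuration is present at startup.
apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
    - name: app
      image: legacy-app:1.0           # hypothetical image name
      volumeMounts:
        - name: config
          mountPath: /etc/app/config  # assumption: where the app expects its config
          readOnly: true
  volumes:
    - name: config
      persistentVolumeClaim:
        claimName: app-config-pvc
```

Because the kubelet must successfully mount the NFS volume before it starts the container, a mount failure keeps the container from starting at all, which is what satisfies the requirement that the application never serve traffic without its configuration.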