
You have recently deployed a machine learning model to a Vertex AI endpoint and configured online serving with Vertex AI Feature Store. You have also set up a daily batch ingestion job to update the Feature Store with new data. However, you notice that during the batch ingestion jobs, CPU utilization on the Feature Store's online serving nodes is high, which increases feature retrieval latency. How can you improve online serving performance and reduce latency during these batch ingestion jobs?
A. Schedule an increase in the number of online serving nodes in your featurestore prior to the batch ingestion jobs.
B. Enable autoscaling of the online serving nodes in your featurestore.
C. Enable autoscaling for the prediction nodes of your DeployedModel in the Vertex AI endpoint.
D. Increase the worker_count in the ImportFeatureValues request of your batch ingestion job.
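
For context, here is a minimal sketch of how option A could be carried out with the Vertex AI GAPIC client: updating a featurestore's fixed online serving node count ahead of a scheduled ingestion job. The project, region, featurestore ID, and node count below are hypothetical placeholders, not values from the question.

```python
# Sketch: scale up a featurestore's online serving nodes before batch ingestion.
# "my-project", "us-central1", "my_featurestore", and the node count of 10 are
# all placeholder assumptions for illustration.
from google.cloud import aiplatform_v1
from google.protobuf import field_mask_pb2

client = aiplatform_v1.FeaturestoreServiceClient(
    client_options={"api_endpoint": "us-central1-aiplatform.googleapis.com"}
)

featurestore = aiplatform_v1.Featurestore(
    name="projects/my-project/locations/us-central1/featurestores/my_featurestore",
    online_serving_config=aiplatform_v1.Featurestore.OnlineServingConfig(
        fixed_node_count=10  # temporarily raised serving capacity
    ),
)

# The field mask restricts the update to the node count; other settings are untouched.
operation = client.update_featurestore(
    featurestore=featurestore,
    update_mask=field_mask_pb2.FieldMask(
        paths=["online_serving_config.fixed_node_count"]
    ),
)
operation.result()  # block until the scale-up completes before ingestion starts
```

Running this from a scheduler (e.g., Cloud Scheduler triggering a small job) shortly before the daily ImportFeatureValues job, and a mirror-image call to scale back down afterward, keeps serving capacity high only while the ingestion load is present.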