
Answer-first summary for fast verification
Answer: Inference Tables
The question asks which feature monitors incoming requests and outgoing responses for a model serving endpoint, and Inference Tables (C) is the Databricks feature built for exactly that purpose: it automatically logs all serving endpoint traffic, both requests and responses, to a Unity Catalog Delta table for monitoring and analysis. AutoML (A) automates model training, not endpoint monitoring. Vector Search (B) handles retrieval in RAG applications but does not monitor endpoint traffic. Feature Serving (D) serves precomputed features to models, not monitoring. The community discussion is split between B and C, but Inference Tables directly addresses the monitoring requirement per Databricks documentation.
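As a sketch of how this looks in practice, the request body below shows a serving endpoint configuration with inference tables enabled via the `auto_capture_config` block of the Databricks serving endpoints API. All names (endpoint, model, catalog, schema) are illustrative placeholders, not values from the question:

```python
import json

# Illustrative payload for POST /api/2.0/serving-endpoints.
# The auto_capture_config block is what turns on Inference Tables:
# requests and responses are logged to a Unity Catalog Delta table.
endpoint_config = {
    "name": "rag-chat-endpoint",  # placeholder endpoint name
    "config": {
        "served_entities": [
            {
                "entity_name": "main.models.rag_llm",  # placeholder model
                "entity_version": "1",
                "workload_size": "Small",
                "scale_to_zero_enabled": True,
            }
        ],
        "auto_capture_config": {
            "catalog_name": "main",          # placeholder catalog
            "schema_name": "monitoring",     # placeholder schema
            "table_name_prefix": "rag_endpoint",
            "enabled": True,
        },
    },
}

print(json.dumps(endpoint_config["config"]["auto_capture_config"], indent=2))
```

Once enabled, the captured traffic lands in a Delta table (here it would be `main.monitoring.rag_endpoint_payload`) that can be queried with standard SQL for monitoring and debugging.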
Author: LeetQuiz Editorial Team
A Generative AI Engineer is using a provisioned throughput model serving endpoint in a RAG application and needs to monitor the incoming requests and outgoing responses for the endpoint.
Which Databricks feature should they use?
A. AutoML
B. Vector Search
C. Inference Tables
D. Feature Serving