
Answer-first summary for fast verification
Answer: Distribute the workload across multiple SQL Serverless instances and Spark clusters based on the query complexity and data volume.
Option B is correct. Distributing the workload across multiple SQL Serverless instances and Spark clusters lets the system scale with both data volume and query complexity: serverless SQL handles lightweight, ad hoc queries cost-effectively, while Spark clusters take on heavy transformations and large scans. A single SQL Serverless instance (Option A) becomes a bottleneck for large-scale projects. Using only Spark clusters (Option C) is rarely the most cost-effective choice, since SQL Serverless can serve simpler queries more cheaply. A hybrid solution fronted by a load balancer (Option D) adds operational complexity that most projects do not need.
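To make the routing idea in Option B concrete, here is a minimal sketch of a complexity-based query router. All names and thresholds (`DATA_VOLUME_GB_CUTOFF`, `JOIN_COUNT_CUTOFF`, the `Query` class) are hypothetical illustrations, not part of any specific product API:

```python
from dataclasses import dataclass

# Hypothetical cutoffs for illustration only; a real system would tune
# these from observed query cost and table statistics.
DATA_VOLUME_GB_CUTOFF = 100.0
JOIN_COUNT_CUTOFF = 3

@dataclass
class Query:
    sql: str
    estimated_gb_scanned: float
    join_count: int

def choose_engine(q: Query) -> str:
    """Route heavy queries to Spark and lightweight ones to SQL Serverless."""
    if q.estimated_gb_scanned > DATA_VOLUME_GB_CUTOFF or q.join_count > JOIN_COUNT_CUTOFF:
        return "spark"
    return "sql-serverless"

# A small ad hoc query stays on SQL Serverless; a large multi-join
# query is sent to a Spark cluster.
small = Query("SELECT * FROM sales WHERE day = '2024-01-01'", 2.5, 0)
large = Query("SELECT /* many joins over raw events */ 1", 500.0, 5)
print(choose_engine(small))  # sql-serverless
print(choose_engine(large))  # spark
```

In practice the cost estimates would come from the engine's own query planner or from table metadata, but the decision itself stays this simple: cheap queries go to the serverless pool, expensive ones to Spark.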
Author: LeetQuiz Editorial Team
You are tasked with designing a data exploration layer for a large-scale data analytics project. The project requires the use of SQL Serverless and Spark clusters for query execution. How would you approach this task?
A. Create a single SQL Serverless instance and use it for all query executions.
B. Distribute the workload across multiple SQL Serverless instances and Spark clusters based on the query complexity and data volume.
C. Use only Spark clusters for all query executions, as they are more powerful than SQL Serverless.
D. Create a hybrid solution that combines SQL Serverless and Spark clusters, with a load balancer to distribute the workload.