
Answer-first summary for fast verification
Answer: Select Large semantic model storage format.
The question asks how to minimize deployment time when deploying a semantic model through the XMLA endpoint. According to Microsoft Learn documentation and the community consensus (88% agreement in the discussion), the correct choice is Option D, Large semantic model storage format. This format is specifically designed to improve the performance of XMLA write operations, which is exactly what an XMLA-endpoint deployment performs. Option A (Small semantic model storage format) is incorrect: its lower memory and storage limits can actually slow down large deployments. Option B (Users can edit data models) controls editing permissions in the Power BI service and is unrelated to deployment performance. Option C (Enable Cache for Shortcuts) applies to OneLake shortcuts, not to XMLA endpoint operations.
Author: LeetQuiz Editorial Team
You have a Fabric tenant containing a workspace named Workspace1. You plan to deploy a semantic model named Model1 using the XMLA endpoint. You need to optimize the deployment of Model1 to minimize the deployment time. What should you configure in Workspace1?
A
Select Small semantic model storage format.
B
Select Users can edit data models in the Power BI service.
C
Set Enable Cache for Shortcuts to On.
D
Select Large semantic model storage format.
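As a supplementary sketch: besides the workspace-level default, the large storage format can be applied to an individual semantic model through the Power BI REST API by setting `targetStorageMode` to `PremiumFiles` (the large-format backing store; `Abf` is the small default). The GUIDs below are placeholders, and the snippet only constructs the request rather than sending it, since a real call needs an Azure AD access token.

```python
import json

# Placeholder IDs -- replace with real workspace/dataset GUIDs.
GROUP_ID = "00000000-0000-0000-0000-000000000000"
DATASET_ID = "11111111-1111-1111-1111-111111111111"

# Power BI REST API: Datasets - Update Dataset In Group
# PATCH https://api.powerbi.com/v1.0/myorg/groups/{groupId}/datasets/{datasetId}
url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{GROUP_ID}/datasets/{DATASET_ID}"
)

# "PremiumFiles" selects the large semantic model storage format;
# "Abf" is the default small format.
payload = {"targetStorageMode": "PremiumFiles"}

print(url)
print(json.dumps(payload))
```

Sending this PATCH (with an `Authorization: Bearer <token>` header) would switch the deployed model to the large storage format; the workspace setting in Option D simply makes this the default for new models.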