
Answer-first summary for fast verification
Answer: Deploy the model by using an Amazon SageMaker endpoint.
## Detailed Explanation

The question specifies that the company wants to deploy a machine learning model for predictions **without managing servers or infrastructure**. This requirement points directly to a **fully managed service** that abstracts away the underlying infrastructure management.

### Analysis of Each Option:

**A: Deploy the model on an Amazon EC2 instance.**

- **Not suitable**: Amazon EC2 requires the user to provision, configure, manage, and scale virtual servers. This involves significant infrastructure management, including OS updates, security patches, and scaling decisions, which contradicts the requirement of "without managing servers or infrastructure."

**B: Deploy the model on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.**

- **Not suitable**: While EKS is a managed Kubernetes service, it still requires the user to manage compute capacity (EC2 worker nodes or Fargate profiles), container orchestration, scaling policies, and cluster configuration. This involves substantial infrastructure and operational overhead.

**C: Deploy the model by using Amazon CloudFront with an Amazon S3 integration.**

- **Not suitable**: CloudFront is a content delivery network (CDN) for distributing static or dynamic web content, and S3 is object storage. This combination is not designed for deploying and serving machine learning models for real-time predictions. It lacks the necessary compute infrastructure and model-serving capabilities.

**D: Deploy the model by using an Amazon SageMaker endpoint.**

- **Optimal choice**: Amazon SageMaker is a **fully managed service** specifically designed for the end-to-end machine learning lifecycle, including model deployment. A SageMaker endpoint:
  - **Eliminates server management**: SageMaker automatically provisions, scales, and manages the underlying compute instances (or serverless options) required to host the model.
  - **Provides managed infrastructure**: It handles load balancing, auto scaling based on traffic, health monitoring, and security updates without user intervention.
  - **Enables serverless predictions**: With options such as SageMaker Serverless Inference, it can deploy models without any infrastructure management, aligning perfectly with the requirement.
  - **Supports production-grade deployments**: It offers high availability, A/B testing, and monitoring tools out of the box.

### Conclusion:

Amazon SageMaker endpoints are purpose-built for deploying ML models in a fully managed environment, allowing the company to focus solely on making predictions without any infrastructure responsibilities. The other options either require significant management effort or are not designed for ML model serving.
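To make the SageMaker deployment path concrete, the sketch below builds the request payloads for the three control-plane calls that stand up a serverless endpoint (`create_model`, `create_endpoint_config`, `create_endpoint`). The payloads are plain dictionaries so their shape is visible without AWS credentials; in practice each would be passed to the matching boto3 `sagemaker` client call. All names, ARNs, and the memory/concurrency values are hypothetical placeholders.

```python
def build_model_request(model_name, image_uri, model_data_url, role_arn):
    """Parameters for sagemaker.create_model(): points SageMaker at the
    inference container image and the S3 location of the model artifacts."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {
            "Image": image_uri,              # inference container image URI
            "ModelDataUrl": model_data_url,  # S3 path to model.tar.gz
        },
        "ExecutionRoleArn": role_arn,
    }


def build_endpoint_config_request(config_name, model_name):
    """Parameters for sagemaker.create_endpoint_config() using Serverless
    Inference: the ServerlessConfig block replaces any instance type or
    count, so the caller never provisions or manages servers."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "ServerlessConfig": {
                    "MemorySizeInMB": 2048,  # memory per invocation (assumed)
                    "MaxConcurrency": 5,     # concurrent-invocation cap (assumed)
                },
            }
        ],
    }


def build_endpoint_request(endpoint_name, config_name):
    """Parameters for sagemaker.create_endpoint()."""
    return {"EndpointName": endpoint_name, "EndpointConfigName": config_name}
```

Once the endpoint reaches the `InService` state, the company would obtain sale-price predictions by calling `invoke_endpoint` on the `sagemaker-runtime` client with the feature payload; scaling, patching, and health monitoring remain SageMaker's responsibility throughout.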
Author: LeetQuiz Editorial Team
A company has created a machine learning model to forecast real estate sale prices and wants to deploy it for predictions without handling any servers or infrastructure.
Which AWS solution satisfies these requirements?
A
Deploy the model on an Amazon EC2 instance.
B
Deploy the model on an Amazon Elastic Kubernetes Service (Amazon EKS) cluster.
C
Deploy the model by using Amazon CloudFront with an Amazon S3 integration.
D
Deploy the model by using an Amazon SageMaker endpoint.