
Answer-first summary for fast verification
Answer: Deploy AWS IoT Greengrass on the local server. Deploy the ML model to the Greengrass server. Create a Greengrass component to take still images from the cameras and run inference. Configure the component to call the local API when a defect is detected.
Deploying AWS IoT Greengrass on the local server and deploying the ML model to that Greengrass server meets all of the manufacturing company's requirements:

1. **Local inference and feedback**: With the ML model deployed directly on the local Greengrass server, defect-detection inference runs on premises without relying on internet connectivity. This is crucial for providing real-time feedback to factory workers even if the internet goes down.
2. **Edge computing**: AWS IoT Greengrass is designed to run machine learning models at the edge, close to the devices, which minimizes latency and enables real-time analytics and decision-making.
3. **Integration with the local API**: A Greengrass component can capture still images from the IP cameras and run them through the deployed ML model. The component can also call the local Linux server's API directly with the detection results, giving workers immediate feedback.

Therefore, AWS IoT Greengrass is the most appropriate solution for performing ML inference on premises, ensuring the system operates independently of internet connectivity and meets the company's operational requirements.
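The component described above can be sketched as a small polling loop. This is a minimal illustration only: the camera capture and model-inference callables are injected so that the real libraries (e.g. OpenCV for RTSP capture, a SageMaker-compiled model runtime) stay pluggable, and the local API route, defect labels, and confidence threshold are all assumed placeholders, not values from the question.

```python
import json
import time
import urllib.request

DEFECT_THRESHOLD = 0.8  # assumed confidence cutoff, tune per model


def is_defect(scores, threshold=DEFECT_THRESHOLD):
    """Return True when any defect-labelled class score meets the cutoff.

    `scores` maps class labels to confidences; labels prefixed with
    "defect" are an assumption about the model's label scheme.
    """
    return any(
        label.startswith("defect") and score >= threshold
        for label, score in scores.items()
    )


def notify_local_api(scores, endpoint="http://localhost:8080/feedback"):
    """POST the detection result to the local Linux server's API.

    The endpoint URL and JSON payload shape are hypothetical.
    """
    body = json.dumps({"timestamp": time.time(), "scores": scores}).encode()
    req = urllib.request.Request(
        endpoint, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req, timeout=5)


def run_once(capture_frame, run_inference):
    """One poll cycle: grab a still image, score it, alert on a defect."""
    frame = capture_frame()          # e.g. read one frame from the IP camera
    scores = run_inference(frame)    # e.g. invoke the locally deployed model
    if is_defect(scores):
        notify_local_api(scores)
    return scores
```

In a real deployment this loop would be packaged as a Greengrass component (a recipe plus this script as the artifact) so that Greengrass manages its lifecycle and restarts on the local server, keeping everything functional with no internet connection.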
Author: LeetQuiz Editorial Team
A manufacturing company is building an inspection solution for its factory. The company has IP cameras at the end of each assembly line. The company has used Amazon SageMaker to train a machine learning (ML) model to identify common defects from still images. The company wants to provide local feedback to factory workers when a defect is detected. The company must be able to provide this feedback even if the factory’s internet connectivity is down. The company has a local Linux server that hosts an API that provides local feedback to the workers. How should the company deploy the ML model to meet these requirements?
A
Set up an Amazon Kinesis video stream from each IP camera to AWS. Use Amazon EC2 instances to take still images of the streams. Upload the images to an Amazon S3 bucket. Deploy a SageMaker endpoint with the ML model. Invoke an AWS Lambda function to call the inference endpoint when new images are uploaded. Configure the Lambda function to call the local API when a defect is detected.
B
Deploy AWS IoT Greengrass on the local server. Deploy the ML model to the Greengrass server. Create a Greengrass component to take still images from the cameras and run inference. Configure the component to call the local API when a defect is detected.
C
Order an AWS Snowball device. Deploy a SageMaker endpoint with the ML model and an Amazon EC2 instance on the Snowball device. Take still images from the cameras. Run inference from the EC2 instance. Configure the instance to call the local API when a defect is detected.
D
Deploy Amazon Monitron devices on each IP camera. Deploy an Amazon Monitron Gateway on premises. Deploy the ML model to the Amazon Monitron devices. Use Amazon Monitron health state alarms to call the local API from an AWS Lambda function when a defect is detected.