You are building a deep learning model for image recognition in the Azure Machine Learning service, using GPU-based compute for training. You must deploy the model to an endpoint that supports real-time, GPU-based inferencing.
Which compute type should you use when configuring the compute resources for model inferencing?