Azure Machine Learning Overview

  • Azure Machine Learning is a suite of services for building, training, and deploying machine learning models in Azure. Azure's data and analytics services provide solutions for storing and analyzing IoT data.
  • You can also build your system with Azure Functions to create a serverless application. Azure Functions enables you to run small pieces of code, or “functions,” in the cloud; these functions can then be combined into a more comprehensive application. You pay only for the time your code runs.
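  For illustration, a minimal sketch of an HTTP-triggered function using the Azure Functions Python programming model (the function name and query parameter are illustrative; the HTTP trigger binding is declared in an accompanying function.json):

      import azure.functions as func

      def main(req: func.HttpRequest) -> func.HttpResponse:
          # Runs per HTTP request; you are billed only for execution time.
          name = req.params.get("name", "world")  # hypothetical query parameter
          return func.HttpResponse(f"Hello, {name}!", status_code=200)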

Azure ML model management

  • Building the machine learning model is only the first step toward deploying it to production on edge devices. The workflow for deploying models to production is:

    Register the model

    • A registered model is a logical grouping for one or more files that make up your model.
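
    For example, a minimal registration sketch using the Azure ML Python SDK (v1); the workspace config, file path, and model name here are assumptions:

        from azureml.core import Workspace
        from azureml.core.model import Model

        # Connect to the workspace (assumes a config.json downloaded from the portal).
        ws = Workspace.from_config()

        # Register the serialized model file under a logical name;
        # re-registering the same name increments the version.
        model = Model.register(workspace=ws,
                               model_path="outputs/model.pkl",  # local file or folder (assumed)
                               model_name="my-model")           # hypothetical name
        print(model.name, model.version)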

    Prepare to deploy (specify assets, usage, and compute target)

    • The model is then packaged into a Docker image.
    • To deploy the model, you need the following items:
    • An entry script: This script accepts requests, scores them by using the model, and returns the results. The entry script is specific to your model.
      • The entry script receives data submitted to a deployed web service and passes it to the model. It then takes the response returned by the model and returns that to the client.
      • The script contains two functions that load and run the model (a minimal score.py sketch follows this list):
        • init(): Typically, this function loads the model into a global object. This function is run only once when the Docker container for your web service is started.
        • run(input_data): This function uses the model to predict a value based on the input data.
          • Inputs and outputs of the run typically use JSON for serialization and deserialization.
          • You can also work with raw binary data. You can transform the data before sending it to the model or before returning it to the client.
    • Dependencies, like helper scripts or Python/Conda packages required to run the entry script or model.
    • The deployment configuration for the compute target that hosts the deployed model. This configuration describes things like memory and CPU requirements needed to run the model.
      • These items are encapsulated into an inference configuration and a deployment configuration. The inference configuration references the entry script and other dependencies.
      • You define these configurations programmatically when you use the SDK to perform the deployment, and in JSON files when you use the CLI (the deployment sketch later in this section shows the SDK route).
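
    A minimal score.py entry-script sketch with the two functions described above (the model name and the expected input shape are assumptions):

        import json
        import joblib
        import numpy as np
        from azureml.core.model import Model

        def init():
            # Runs once when the web service container starts:
            # load the model into a global object.
            global model
            model_path = Model.get_model_path("my-model")  # name used at registration (assumed)
            model = joblib.load(model_path)

        def run(raw_data):
            # Runs per request: deserialize the JSON payload, score it,
            # and return a JSON-serializable result.
            data = np.array(json.loads(raw_data)["data"])
            predictions = model.predict(data)
            return predictions.tolist()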

    Deploy the model to the compute target

    • Deploy the model as a web service in the cloud or locally (a deployment sketch follows the compute-target list below).
    • Some additional steps may be needed during deployment, such as profiling to determine the ideal CPU and memory settings, or model conversion to optimize performance.
    • You can use the following compute targets/resources to host your web service deployment:
      • Local web service
      • Azure Machine Learning compute instance web service
      • Azure Kubernetes Service (AKS)
      • Azure Container Instances
        • For Azure Container Instances, the azureml.core.image.ContainerImage class is used to create an image configuration, which is then used to build a new Docker image.
        • To deploy the image you created, you first specify the deployment target, and then use the AciWebservice class to configure and deploy from that image.
      • Azure Machine Learning compute clusters
      • Azure Functions
      • Azure IoT Edge
      • Azure Data Box Edge
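
    The Azure Container Instances bullets above describe the older ContainerImage-based flow; below is a minimal SDK v1 sketch of the equivalent InferenceConfig/Model.deploy route to ACI (ws and model come from the registration sketch earlier; the Conda file and service name are assumptions):

        from azureml.core import Environment
        from azureml.core.model import InferenceConfig, Model
        from azureml.core.webservice import AciWebservice

        # Environment holding the entry script's dependencies (assumed Conda file).
        env = Environment.from_conda_specification(name="score-env", file_path="env.yml")

        # Inference configuration: the entry script plus its environment.
        inference_config = InferenceConfig(entry_script="score.py", environment=env)

        # Deployment configuration for the ACI target: CPU/memory requirements.
        deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

        # Package the registered model and deploy it as a web service on ACI.
        service = Model.deploy(ws, "my-aci-service", [model], inference_config, deployment_config)
        service.wait_for_deployment(show_output=True)
        print(service.scoring_uri)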

    Test the deployed model as a web service

    • For an IoT Edge deployment, you create a deployment.json file that defines the modules you want to deploy to the device and the routes between them.
    • You then push this deployment manifest to IoT Hub, which sends it to the IoT Edge device.
    • The IoT Edge agent then pulls the Docker images and runs them.
    • At this point, you should be able to monitor/test messages from your edge device to your IoT Hub.
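
    For a cloud web service, a minimal test sketch that posts a sample payload to the scoring endpoint (the payload shape must match what the entry script expects; service comes from the deployment sketch above):

        import json
        import requests

        scoring_uri = service.scoring_uri  # from the deployment sketch above
        payload = json.dumps({"data": [[1.0, 2.0, 3.0, 4.0]]})  # assumed input shape
        headers = {"Content-Type": "application/json"}

        response = requests.post(scoring_uri, data=payload, headers=headers)
        print(response.json())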