Gaston Verelst
Gaston Verelst is the owner of an IT consultancy company based in Belgium (the land of beer and chocolate!). He has worked on a wide variety of projects during his career so far. Starting with Clipper (the Summer '87 edition), he moved on to C and mainly C++ during the first 15 years of his career, gaining plenty of hands-on experience along the way. Gaston has worked on many large-scale projects for the biggest banks in Belgium, as well as in the automotive, printing, government, and NGO sectors. His latest and greatest project is all about extending an IoT gateway built in MS Azure. When he is not working or studying, Gaston can be found on the tatami in his dojo. He is the chief instructor of Ju-Jitsu club Zanshin near Antwerp and holds high degrees in several other martial arts as well.

In this article, we will combine two technologies. We want to create an Azure Function using F# — and in this function, we will use a trained ML.NET model to perform classification on the Iris dataset. ML.NET comes with a model builder that will analyze the input file and then generate C# code to create an optimized model. (This portion of the code is used only to create the model, and is generated code, so we will not transform it into F#.)

This article contains three sections:

  1. Creating a trained model to perform classifications on the Iris dataset. This will be a generated C# project that will create the model when we run it. The trained model will then be stored in a blob for use in the function.
  2. Setting up an Azure Function using F#. This is not as straightforward as it might seem (Microsoft has not yet provided a project template).
  3. Using the model for classification.

For this article, basic knowledge of C#, F#, Visual Studio, and Microsoft Azure is needed.

Project Setup

For the purposes of this article, it’s assumed that you already have an Azure account. If not, go to and follow the instructions. In addition, we will use Visual Studio 2019 as our development environment.

We will use the Iris dataset in this post because the dataset is well-known. It can be found at multiple locations, including GitHub. The CSV file is already included in the training project.

I am using the ML.NET Model Builder. At the time of this writing it is in preview, so expect that changes may occur in the applications as they approach general release.

The solution that we will create below is available on GitHub at:

Training the Model

Each machine-learning (ML) project starts with training. The result of training is a model that will be used later to perform classification. For more info, an example of a training (clustering) algorithm (k-means) can be found in the article "K-means Using F#" on

We will not write our own classifier. Instead, we’ll use ML.NET. We will feed the Iris dataset into it, and a trained model will result.

The version of the Iris dataset we will use contains 150 rows, with the following fields:

  • Sepal_length
  • Sepal_width
  • Petal_length
  • Petal_width
  • Species

Given the first four fields, we want to determine the species. This is a typical classification problem.
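For reference, here is what the first few rows of the file look like (the exact header names may differ slightly depending on which copy of the dataset you download; these measurement values are from the standard Iris dataset):

```text
sepal_length,sepal_width,petal_length,petal_width,species
5.1,3.5,1.4,0.2,setosa
4.9,3.0,1.4,0.2,setosa
7.0,3.2,4.7,1.4,versicolor
6.3,3.3,6.0,2.5,virginica
```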

We’ll start the training project by creating a blank solution. In this solution, we will create the training project and the Azure Function project.

  1. Open Visual Studio 2019, and go to File > New Project.
  2. Create a Blank Solution project, and name it "IrisClassification".
  3. Create a new project of the type Console App (.NET Core), using C# as a language. Name this project "IrisTraining".
  4. Right-click on the project and add a new folder called "TrainingData". Download the Iris dataset into this folder.
  5. Right-click on the project to add Machine Learning: Add > Machine Learning. This will take you to the ML.NET Model Builder. Here you will follow a wizard to create your ML.NET project.

Building the Machine Learning Model

We want to classify species of flowers, so the scenario we’ll choose is Issue Classification.

In the next step, select Iris.csv. A nice preview of the file will be shown. The important thing here is to select the column to predict. This is the Species column.

The next step is training the model. The only parameter you need to provide here is the maximum time you want to spend training. The higher the time, the better the accuracy. The time is 10 seconds by default, which will be sufficient for a dataset with only 150 rows. When your datasets are much larger, consider playing with this parameter for improved accuracy.

Next, click the "Start training" button. Progress data is shown that indicates the model’s accuracy. (On my computer, I have a 100% score after a couple of seconds.)

Let’s evaluate this (click "Evaluate"):

As you can see, the ML.NET Model Builder has tried several trainers, with multiple parameters to obtain the best model. Click Code to generate the code.

As a final step, confirm that you want to create the projects by clicking on the Add Projects button.

Run the IrisClassificationML.ConsoleApp project to generate the file. (You’ll find it in the IrisClassificationML.ConsoleApp\bin\Debug\netcoreapp2.1 folder.)

We have now created an ML.NET project that uses the Iris dataset. Running the project, we have generated the model file (MLModel.zip) that we will use in our Azure Function.

Next, in the solution, create a new solution folder called "Training" and drag all the projects into this folder.

Copying the Model File into a Blob

In order to use the model file, we have to store it somewhere in Microsoft Azure. We will keep everything in its own resource group, to separate this project from other projects that you might already have.

We will use the Azure Cloud Shell to perform these steps. Alternatively, you could use the portal user interface to create the resource group and the objects in it, or you can use ARM templates.

Open Cloud Shell

At the top, click the Cloud Shell icon. If this is the first time you’ve opened the cloud shell, a wizard will appear to set up the shell. You can choose the scripting language to use (PowerShell or Linux Bash). Then, Azure will create some storage for you. The next time you open the cloud shell, no additional setup will be required.

Create a resource group:

az group create --name d-irisclassification-we-rg --location westeurope

Create a storage account and a container:

az storage account create --resource-group d-irisclassification-we-rg --name trainingstg
az storage container create --account-name trainingstg --name trainingdata

Upload the model file. We’ll do this through the portal. This allows us to see if we have created what we expected.

In the portal, click on Resource Groups then click on d-irisclassification-we-rg. This will open the resource group that we just created. It currently only contains the storage account. Click on it to see the contents.

Click on Blobs to see the "trainingdata" container.

Clicking on the container will reveal that it is still empty.

Click on Upload. Select the generated "MLModel.Zip" file and upload it.
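If you prefer to stay in the Cloud Shell, the upload can also be scripted. This is a sketch that assumes MLModel.zip is in your current directory and that your account has access to the storage account:

```shell
az storage blob upload \
    --account-name trainingstg \
    --container-name trainingdata \
    --name MLModel.zip \
    --file MLModel.zip
```

Either way, the result is the same: the trained model ends up as a blob in the "trainingdata" container.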

Creating an Azure Function App

Creating a function app is simple, provided you want to use C# as the programming language, which is a perfectly good choice. For some problem domains, however, F# is a better fit.

If you search for a template to create an F# function app, you’ll be disappointed. You have two possibilities in this situation:

  • Create an F# class library, add the Microsoft.NET.Sdk.Functions package, and start writing the code. This is a good option, but it isn’t possible to publish your function from within Visual Studio 2019.
  • Create a C# function app, and convert it to an F# function app. This is initially a bit more work, but it will allow you to quickly publish your function.

In a slightly bigger project, the first option is perfect because you’ll most likely let the CI/CD pipeline take care of the publishing, rather than publishing directly from your machine.

We will use the second option, converting to an F# function.

Setting up the Project

To keep our solution organized, we will first create a new solution folder called "ClassificationFunction." All the projects concerning it will go under this folder.

In Solution Explorer, right-click on the solution and select Add > New Solution Folder.

Now, create the C# function project:

Right-click on the newly created solution folder. Click Add > New Project, then select Azure Functions.


Click Next.

Give the function project an appropriate name (such as "IrisClassificationFunctions"), then click Create.

On the next page, we can choose how the function will be triggered. For this case, we will choose http trigger. For ease of testing, we will configure the function to respond to a GET request. That will allow for testing the function using a browser.

On the right side, we can choose a storage account. Next, choose the storage account that you just created:

This will add an entry with the connection string to the local.settings.json file.

Leave the Authorization level on "Function" and click Create to generate the function project.

Now test the function: Right-click the function project and click Set as startup project, then press F5. This starts a console window, with the following important information:

Open your favorite browser and go to the URL shown in the console output. This will give you the equivalent of a "Hello world" function.
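You can also test from the command line. This sketch assumes the defaults: the local Functions host listens on port 7071, and the generated function is still named Function1:

```shell
curl "http://localhost:7071/api/Function1?name=World"
# should respond with: Hello, World
```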

Transforming the C# project into an F# project

Close Visual Studio so that there are no locks on files that will be edited directly.

Open the folder that contains the function project. Rename the file from IrisClassificationFunctions.csproj to IrisClassificationFunctions.fsproj.

Open this file in your favorite editor (for example, Notepad++). Make the following changes; the parts of the generated file that stay the same are elided with "...", and the Content elements keep their generated CopyToOutputDirectory children:

<Project Sdk="Microsoft.NET.Sdk">
  ...
  <ItemGroup>
    <Compile Include="Function1.fs" />
  </ItemGroup>
  <ItemGroup>
    <Content Include="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </Content>
    <Content Include="local.settings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </Content>
  </ItemGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.28" />
  </ItemGroup>
</Project>

In File Explorer, rename Function1.cs by changing it to "Function1.fs." We will modify this file in Visual Studio.

If you open the solution now, the function project will fail to load, because the solution file still points to the old .csproj file. Move one level up in the folder hierarchy and open the solution file (IrisClassification.sln) in the editor.

Find the reference to the project "IrisClassificationFunctions" and change the extension of the filename to ".fsproj."

Open the solution again in Visual Studio.

Now we’ll convert the C# code into F#:

namespace IrisClassificationFunctions

open Microsoft.Azure.WebJobs
open Microsoft.AspNetCore.Mvc
open Microsoft.Azure.WebJobs.Extensions.Http
open Microsoft.AspNetCore.Http
open Microsoft.Extensions.Logging

module Function1 =
    [<FunctionName("Function1")>]
    let Run ([<HttpTrigger(AuthorizationLevel.Function, [|"get"|])>] req: HttpRequest) (log: ILogger) =
        async {
            log.LogInformation("F# HTTP trigger function processed a request.")
            let name = req.Query.["name"]

            match name.Count with
            | 0 -> return (BadRequestObjectResult("Please pass a name on the query string or in the request body") :> ObjectResult)
            | _ -> return (OkObjectResult("Hello, " + name.ToString()) :> ObjectResult)
        }
        |> Async.StartAsTask

We use the same attributes as in the C# version to indicate the name of the function and the trigger. The function signature is the same as well.

We use a match expression to decide what to do when the collection of "name" values is empty. If we want to extend the function to allow exactly one "name" argument, that’s simple as well.
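For example, the match expression could be extended with one more case; this is a sketch, and the error messages are made up for illustration:

```fsharp
match name.Count with
| 0 -> return (BadRequestObjectResult("Please pass a name on the query string") :> ObjectResult)
| 1 -> return (OkObjectResult("Hello, " + name.[0]) :> ObjectResult)
| _ -> return (BadRequestObjectResult("Please pass only one name") :> ObjectResult)
```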

Thanks to the concise syntax of F#, only 21 lines of code are required.

We want to name the function "Classify":

  • Rename "Function1.fs" to "Classify.fs"
  • Rename module "Function1" to "Classify"
  • Make the FunctionName "Classify"

When we run the application again, we see this:

This makes more sense!

Using the F# Model

We now have:

  • A trained model for classifying Irises in blob storage
  • A simple working F# function.

The next steps are:

  • Load the model from blob storage
  • Create a PredictionEngine using the model
  • Get the four parameters from the request (sepal_length, sepal_width, petal_length, petal_width) and use them to predict the species
  • Return the predicted species.

When we created the project, we also indicated the storage location. This is stored in the file "local.settings.json":

    {
        "IsEncrypted": false,
        "Values": {
            "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=trainingstg;…",
            "FUNCTIONS_WORKER_RUNTIME": "dotnet"
        }
    }

Here is the straightforward function to read a value from the configuration. By default, the value will be read from the "Values" element in the settings file:

    let GetSetting name =
        let currentFolder = Directory.GetCurrentDirectory()
        let builder =
            ConfigurationBuilder()
                .SetBasePath(currentFolder)
                .AddJsonFile("local.settings.json", true, true)
                .AddEnvironmentVariables()
        let config = builder.Build()
        config.["Values:" + name]   // local.settings.json stores its values under "Values"

We also need to reference (open) the Microsoft.Extensions.Configuration namespace.

Now we’ll get the model from the blob. We first need to install the WindowsAzure.Storage package.

Open the Package Manager Console: Tools > NuGet Package Manager > Package Manager Console. Select the correct project next to the "Default project" label.

In the Package Manager Console run the following command:

install-package WindowsAzure.Storage

Add two open statements:

open Microsoft.WindowsAzure.Storage      // Namespace for CloudStorageAccount
open Microsoft.WindowsAzure.Storage.Blob // Namespace for Blob storage types

    let GetBlob folder name connection =
        let storageAccount = CloudStorageAccount.Parse(connection)
        let blobClient = storageAccount.CreateCloudBlobClient()
        let container = blobClient.GetContainerReference(folder)  // trainingdata
        let blobRef = container.GetBlockBlobReference(name)       // MLModel.zip
        let stream = new MemoryStream()
        (blobRef.DownloadToStreamAsync stream).Wait()
        stream.Seek(0L, SeekOrigin.Begin) |> ignore
        stream :> Stream

The function is straightforward. Don’t forget to reset the stream to the start.

Creating the Prediction Engine

Here we are in the ML.NET realm. So we need to install the ML.NET package. In the Package Manager Console, type the following command:

install-package microsoft.ML

We’ll get the following function definition:

open Microsoft.ML       // Namespace for ML.NET
open Microsoft.ML.Data  // Namespace for ColumnName attribute

    let CreateEngine (stream: Stream) =
        let mlContext = new MLContext()

        let mutable inputSchema = Unchecked.defaultof<DataViewSchema>

        let mlModel = mlContext.Model.Load(stream, &inputSchema)
        mlContext.Model.CreatePredictionEngine<IrisData, Prediction>(mlModel)

It gets a little bit more complicated from this point.

The MLContext.Model.Load method takes a stream and an out-parameter inputSchema (there is also an overload that takes a file path as its first parameter). So we need to specify in the signature of the function that the stream is of type Stream:

    let CreateEngine (stream: Stream) = ...

Then, in the Load method, we need to use a call by reference (hence the & prefix):

    let mlModel = mlContext.Model.Load(stream, &inputSchema)

We also have to specify the data type for inputSchema — and, unfortunately, inputSchema must be mutable:

    let mutable inputSchema = Unchecked.defaultof<DataViewSchema>

When we create the prediction engine, we need to specify the input and output types. We need to create two types like this:

    // types
    type IrisData() =
        [<ColumnName "sepal_length"; DefaultValue>]
        val mutable public SepalLength: float32
        [<ColumnName "sepal_width"; DefaultValue>]
        val mutable public SepalWidth: float32
        [<ColumnName "petal_length"; DefaultValue>]
        val mutable public PetalLength: float32
        [<ColumnName "petal_width"; DefaultValue>]
        val mutable public PetalWidth: float32
        [<ColumnName "species"; DefaultValue>]
        val mutable public Label: string

    type Prediction() =
        [<ColumnName "PredictedLabel"; DefaultValue>]
        val mutable public PredictedLabel: string

And now we can call CreatePredictionEngine:

mlContext.Model.CreatePredictionEngine<IrisData, Prediction>(mlModel)

Creating the engine could be written as:

    let connectionstring = GetSetting "AzureWebJobsStorage"
    let model = GetBlob "trainingdata" "MLModel.zip" connectionstring
    let engine = CreateEngine model

Or more functional:

    let engine = GetSetting "AzureWebJobsStorage"
                 |> GetBlob "trainingdata" "MLModel.zip"
                 |> CreateEngine

This binding comes before the definition of the Classify function, so it is only executed once. (Otherwise, our Classify function would become quite slow.) F# evaluates module-level bindings only when the module is first used, so the initialization runs the first time the value is needed.

Keep going — we’re almost there!

In the Classify function, we obtain an HttpRequest object called req. It is easy to get the parameters from the query string, as we have seen before:

let name = req.Query.["name"]

To predict the species from the parameters, we need to create an IrisData object. Let’s do that in a separate function:

    // No error checking here; in production code this is of course not acceptable.
    // Parameters that are "forgotten" or in a wrong format in the request will be 0.
    let GetIrisData req =
        let GetParmAsFloat (req: HttpRequest) name =
            let p = req.Query.[name]
            if p.Count = 0 then 0.0f
            else p.ToString() |> float32

        let input = IrisData()
        input.SepalLength <- GetParmAsFloat req "sepalLength"
        input.SepalWidth <- GetParmAsFloat req "sepalWidth"
        input.PetalLength <- GetParmAsFloat req "petalLength"
        input.PetalWidth <- GetParmAsFloat req "petalWidth"
        input

And now, the Classify function itself has become very simple:

    [<FunctionName("Classify")>]
    let Run ([<HttpTrigger(AuthorizationLevel.Function, [|"get"|])>] req: HttpRequest) (log: ILogger) =
        async {
            log.LogInformation("F# HTTP trigger Classify function processed a request.")

            let input = GetIrisData req
            let result = engine.Predict(input)

            return OkObjectResult(result)
        }
        |> Async.StartAsTask
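Running locally, we can now classify a flower by passing the four measurements on the query string. This example assumes the default local port 7071; the response is the JSON-serialized Prediction object, so it should look something like {"predictedLabel":"setosa"}:

```shell
curl "http://localhost:7071/api/Classify?sepalLength=5.1&sepalWidth=3.5&petalLength=1.4&petalWidth=0.2"
```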

Publishing the Function to Microsoft Azure

We now have a function that runs locally and does its job. It already uses data from a blob in Azure. We now want to be able to use this function from the cloud. This can be done by right-clicking on the IrisClassificationFunctions project and selecting "Publish".

In the wizard that appears, click Start, then pick a publish target. We have not yet created a function app in Azure, so we’ll let the wizard handle this.

Choose "Create New" and click on the "Publish" button.

On the next page, fill in the necessary parameters. Give the function a good name. Select the appropriate subscription. Select the resource group that we created before (d-irisclassification-we-rg), and select the storage account (trainingstg).

Now, click on the Create button. This may take a little while to execute.

Clicking on the Publish button will finish the work!

On the summary page, you will find the URL where your function is available:

Normally you will deploy using Azure DevOps (or any other CI/CD tool that you use).

We are using a setting called AzureWebJobsStorage to obtain the connection string for the blob storage. Instead of hard-coding the connection string and applying tricks to distinguish between DEV, TST, ACC, and PROD (and any other environment you may need), we will add this setting to the function app:

In the portal, go to your function and on the overview page click on Configuration.

In the list of settings, add the AzureWebJobsStorage setting, and set it to your storage connection string.
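This can also be done with the CLI. In this sketch, "irisclassificationfunctions" is a placeholder for the name you gave your function app, and <connection-string> must be replaced by the real value:

```shell
az functionapp config appsettings set \
    --resource-group d-irisclassification-we-rg \
    --name irisclassificationfunctions \
    --settings "AzureWebJobsStorage=<connection-string>"
```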

Now let’s test the function in the Azure portal. Go back to the Functions page, and click on the Classify function. This will open a JSON file with the function settings.

On the right side, you’ll find the test button. This will allow you to enter the parameters for the function and execute it.

To clean up the objects that we have created in MS Azure, we only need to remove the resource group.
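In the Cloud Shell, deleting the resource group (and everything in it) is a single command:

```shell
az group delete --name d-irisclassification-we-rg --yes
```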


The implementation of the function is about 90 lines long, which is a lot leaner than the C# equivalent. By splitting everything into small functions, we have created a comprehensible F# function app with some reusable parts.

The lazy evaluation feature of F# makes it easy to create the prediction engine only once. No DI, IoC, and so on are needed.

The function does not contain any error handling. If something goes wrong, a 500 (internal server error) response is returned. This can be improved.

Errors are not logged, which is a problem when you need to troubleshoot.

The connection string is plainly visible in the settings. If you want to store it more securely, you may consider using Azure Key Vault.
