Ivan Yakimov has worked as a software developer for more than 15 years, mainly on the .NET platform. He has written desktop and Web applications, created UIs and backends, and worked with databases. As a Web developer, Ivan is familiar with modern Node.js and JavaScript. He currently works for Confirmit.

One of the problems of microservices that we have to solve is the problem of development support. If you create a monolithic application, things are simple: you have one application and one machine. You deploy the application on the machine and work with it.

But if your system consists of many services, things become difficult. First of all, you need to deploy many applications, not just one. Then you have to configure all of them so they can interact with each other. This is much more complicated than running a single application.

But this is not the end. You see, the system I have described is not a real microservices application yet. One of the biggest benefits of microservice architecture is scalability. That means if one service in your system cannot handle the load, you should be able to spawn another instance of the same service to spread the load between all existing instances.

It is no longer enough to deploy only one instance of each service in your system. Well, you can live with only one instance, but some problems only show up when several instances are running. You really should run several instances of a service to see how it will behave in production.

And once you have deployed these instances, how do you configure everything so that the rest of your system works with all of them? You need some sort of load balancer here. Things become more and more complicated.

Here I’ll describe one possible approach to solve this problem. I’ll show how to deploy a microservices system on one machine using Docker Swarm.

Project Overview

In this article, we’ll build a simple system that contains two interacting services. These services will be created using ASP.NET Core and the C# language. I use .NET Core 3.1 on my machine. I use Visual Studio for coding, but you can use any IDE you want, such as Visual Studio Code.

We’ll deploy this system on a single machine using Docker containers. We will create multiple instances of each service and make them interact with each other.

In this article, I’ll use the Docker Desktop installation, which is available for both Windows and macOS.

Now, when our Docker installation is ready, let’s build a sample microservice system.

Demo Microservices System

Our system will contain two services. Both of them will be built with ASP.NET Core. The first service is a Web API; I call it “Backend”. To create it, execute this command in your command line:

> dotnet new webapi -o Backend

This command will create our backend service in the Backend folder. I created a new simple controller:

[ApiController]
[Route("[controller]")]
public class DataController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        return Content($"Time at {Environment.MachineName} " +
            $"is {DateTime.Now.ToLongTimeString()}");
    }
}

The Get method returns a string with the name of the machine hosting the service and the local time.

Also, for the sake of simplicity, I removed HTTPS support by removing the call to app.UseHttpsRedirection(); from the Startup.cs file and by modifying the applicationUrl key in the launchSettings.json file. For a production application, you should strongly consider keeping HTTPS enabled for security.
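For reference, after that change the relevant profile in launchSettings.json might look roughly like this (the profile name and port depend on your project, so treat this as a sketch rather than the exact generated file):

"Backend": {
  "commandName": "Project",
  "launchBrowser": true,
  "applicationUrl": "http://localhost:5000",
  "environmentVariables": {
    "ASPNETCORE_ENVIRONMENT": "Development"
  }
}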

Now let’s create a consumer for this service. I created an MVC project “Frontend”:

> dotnet new mvc -o Frontend

I slightly modified the Index method of the HomeController to call the backend service and show the result on the HTML page:

public async Task<IActionResult> Index()
{
    var url = "http://localhost:5000";

    // Note: in production code, reuse a single HttpClient instance
    // (or use IHttpClientFactory) instead of creating one per request.
    var client = new HttpClient();
    var response = await client.GetAsync($"{url}/data");
    var content = await response.Content.ReadAsStringAsync();

    return View("Index", content);
}

Here http://localhost:5000 is the base address of the Backend service.

The corresponding HTML template (Index.cshtml) looks like this:

@{
    ViewData["Title"] = "Home Page";
}
@model System.String

<div class="text-center">
    <h1 class="display-4">Welcome</h1>
    <p>My machine is @Environment.MachineName</p>
    <p>@Model</p>
</div>

As you can see, here I display the text from the Backend service and the name of the machine hosting the Frontend service.

I again removed support for HTTPS for simplicity’s sake.

Now we can start both Backend and Frontend services and they will interact with each other.

But the goal is to create a system where several instances of each service work together. Let’s continue.

Building Images of Services

We’ll create a Docker image for each of the services. Let’s start from the Backend service.

First of all, we’ll build and publish it:

> dotnet build -c Debug
> dotnet publish -c Debug

This created the folder Backend\bin\Debug\netcoreapp3.1\publish because I use .NET Core 3.1.

Now, in the Backend folder, I created a Dockerfile with the following content:

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
COPY ./bin/Debug/netcoreapp3.1/publish /app/
WORKDIR /app
EXPOSE 80
ENTRYPOINT ["dotnet", "Backend.dll"]

Actually, there are many ways to construct a Dockerfile for a service. We can even execute the entire build process inside the container (a so-called multi-stage build). But here we’ll use this simple approach.
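For comparison, a multi-stage Dockerfile that runs the publish step inside the container might look roughly like this (a sketch; it assumes the standard .NET Core 3.1 SDK image and the default project layout):

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim
WORKDIR /app
COPY --from=build /app/publish .
EXPOSE 80
ENTRYPOINT ["dotnet", "Backend.dll"]

With this approach you don’t need to run dotnet publish on the host at all; the trade-off is a slower image build.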

To build a Docker image with the name backend, run this in the Backend folder:

> docker build -t backend .

Now we can test our image:

> docker run -d -p 8080:80 backend

In a browser, open http://localhost:8080/data and you should see something like this:

Time at 9312ccd3c54a is 13:45:41
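If you prefer the command line, you can check the same endpoint with curl:

> curl http://localhost:8080/data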

Now let’s create an image for the Frontend service. Its Dockerfile is almost the same as Backend’s. But here we face another problem: the address of the Backend service in the container is not the same as it was on the developer machine. We need a way to change this address in the Frontend. To do this, I slightly changed my Index method:

public async Task<IActionResult> Index()
{
    var url = Environment.GetEnvironmentVariable("BACKEND_ADDRESS")
        ?? "http://localhost:5000";

    ...

I added the ability to take the required address from an environment variable with the name BACKEND_ADDRESS.

Now we can build an image for the Frontend service. After that, we have two images and we can run two containers.
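To sketch this step: build the image in the Frontend folder, then run a container with the environment variable set. As a crude development-time workaround, you can route through the host: on Docker Desktop, the special name host.docker.internal resolves to the host machine, so the frontend container can reach a backend published on the host’s port 8080 (this name is a Docker Desktop feature and may not exist in other environments):

> docker build -t frontend .
> docker run -d -p 8081:80 -e BACKEND_ADDRESS=http://host.docker.internal:8080 frontend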

But what is the base address of the Backend service that we should pass to our Frontend service? Here we face another problem. You see, by default, containers are isolated from each other and cannot communicate. How can we handle this?

Using Docker Compose

Docker has a very convenient tool that allows us to do exactly what we need here: docker-compose. It is a tool for defining and running multi-container Docker applications through a single configuration file, and it’s very simple to use.

First of all, you need to write a docker-compose.yaml file. Here is such a file for our system containing the Frontend and Backend services:

version: '3'
services:
  backend:
    image: backend
  frontend:
    image: frontend
    environment:
      BACKEND_ADDRESS: http://backend
    ports:
      - "8080:80"
    depends_on:
      - backend

This file says that our system consists of two services with the names backend and frontend. The image property specifies the corresponding Docker image for each service.

Notice that the frontend service has some additional properties. The depends_on property says that this service depends on the backend service, which means backend should be started first. Docker Compose will do this for us.

ports specifies port mappings. Here we want access to port 80 of the corresponding container through port 8080 on our local machine.

And finally, the environment property sets environment variables in the container. Do you remember our changes in the Index method of the Home controller? Here we set the BACKEND_ADDRESS environment variable to http://backend. Why do we use such an address? Docker Compose makes the containers of each service reachable from the containers of all other services described in the same file by service name. Here the name of the backend service is backend, so any other service can address it by that name.
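You can verify this name resolution from inside a running frontend container. For example, with the system up, the following command resolves the backend name through Docker’s embedded DNS (it assumes getent is available in the image, which is true for the Debian-based ASP.NET Core images):

> docker-compose exec frontend getent hosts backend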

Well, our preparation is finished. We can start the complete system containing two services with one command, run from the folder where our docker-compose.yaml lives:

> docker-compose up

Now we can go to our browser, open http://localhost:8080, and see the result.

As you see, everything works fine. If you want to stop the entire system, you can do it with another single command:

> docker-compose down

This is great, but it is not a microservices system yet. We need several instances of each service, and a plain docker-compose.yaml doesn’t give us that out of the box. Actually, you can manually describe several instances of each service in the docker-compose.yaml and somehow make them interact with each other. But there is a better way.
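For completeness: docker-compose up does accept a --scale option that starts several instances of a service. But a service that publishes a fixed host port (like our frontend with its 8080:80 mapping) cannot be scaled this way, since all the instances would fight for the same host port:

> docker-compose up -d --scale backend=3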

Using Docker Swarm

Docker has the ability to form a cluster of Docker host machines. These machines will work as one whole, interact with each other, create and destroy containers, and pass requests to services from one host to another. This is a rather complex system. If you want to know more, please, read the Swarm mode overview in the Docker documentation.

Here we’ll use just a fraction of its abilities. First of all, we need to create a cluster. You may think that a cluster should contain many machines, but a cluster with only one machine is perfectly fine. Let’s create such a cluster containing only your developer machine. This can be done with the command:

> docker swarm init

That’s it. Our cluster is ready. To make sure that everything is OK, you can type:

> docker info

In the output of this command you can find the line:

Swarm: active

It means that our cluster is ready for work.
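You can also list the nodes of the cluster. On our single-machine cluster you will see exactly one node, your own machine, marked as the manager and leader:

> docker node ls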

Now we can deploy our system into the cluster. To do it, we should prepare a docker-stack.yaml file. Here is its content:

version: '3'
services:
  backend:
    image: backend
    deploy:
      replicas: 2
  frontend:
    image: frontend
    environment:
      BACKEND_ADDRESS: http://backend
    ports:
      - "8083:80"
    depends_on:
      - backend
    deploy:
      replicas: 2

I hope you can see that it is very similar to our previous docker-compose.yaml. The only difference is the deploy sections:

deploy:
  replicas: 2

They mean that we want two instances of each of our services.

It is time to launch our system. Again, this is very simple to do:

> docker stack deploy -c .\docker-stack.yaml my_system

Here my_system is just a name for our system. It can be arbitrary.

If you want to see the state of the application, type:

> docker service ls

You’ll get something like:

ID             NAME                 MODE        REPLICAS  IMAGE            PORTS
rcvquu5tnbb3   my_system_backend    replicated  2/2       backend:latest
o5m530c87k2g   my_system_frontend   replicated  2/2       frontend:latest  *:8083->80/tcp

As you see, both our services are working and both have two working instances.
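To drill down to the individual containers (called tasks) of a service and see where each replica runs, use:

> docker service ps my_system_backend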

Let’s open a browser at http://localhost:8083. You’ll see the home page of our frontend.

Now hit refresh (F5) several times. You’ll see the names of the Frontend and Backend machines change.

Docker Swarm gives you load balancing for free! Every time you contact some service (internally or externally), Docker Swarm decides behind the scenes which instance of the service your request should go to.
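You can even change the number of replicas on the fly, without editing the YAML file; Docker Swarm will start or stop containers to match the new number:

> docker service scale my_system_backend=4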

And if you want to stop your entire system, you can use the command:

> docker stack rm my_system

Now everything is clean again.

You may want to stop the cluster itself, too, and return to normal Docker mode. To do that, use the command:

> docker swarm leave --force

That’s it.

Conclusion

In this article, I tried to explain how you can deploy a complex microservice application on a single developer machine. Certainly, this will not solve all your problems; there will still be many things to do and to learn. But I hope I have given you a good starting point. Thank you!

P.S. The source code of the project can be found on GitHub.
