Building a highly scalable, web-based application has never been easier. With consumption-based, serverless computing, you can build your application using a distributed, microservices architecture with little to no running cost until workloads increase as your user base expands.

Introduction

Before you start writing your application, you should consider the following general principles to get the most benefit:

  • Event-based workloads: A microservices architecture is a good choice if your application is primarily responding to different events.
  • Asynchronous: If your processing can be done asynchronously, microservices can give you significant leverage.

Your goal with microservices should be to accept an event or message from the user as quickly as possible, and to return data when processing completes. This allows your application to scale up and down behind the scenes without affecting the user. To show off this form of application architecture, let’s set up a simple application that receives a message via an HTTP REST-based API service and queues it, before a second service that monitors the queue picks up the message, processes it, and stores it in a database. To do this, we are going to use the following AWS services:

  • AWS Lambda for serverless compute
  • Amazon SQS (Standard queue) for message queuing
  • Amazon DynamoDB to persist messages in a database layer

Create a REST API with Lambda and API Gateway

Our first task will be to create a REST-based API using AWS Lambda that accepts input from the user. For this example, the API will accept a string with the user’s name as the input, passed as a query parameter. First, select the Lambda option in the Compute category under the Services menu in the AWS Management Console. Select the Author from Scratch option. For the moment, let’s keep the default settings. Name your function appropriately and choose your preferred runtime (in my case, I am using Node.js). Then, create a new execution role for the function with the basic Lambda permissions. Hit Create, and the function designer should display when complete.

Let’s add a little bit of code to the default to test our newly created function. In the function code editor, add the following code:

exports.handler = async (event) => {
    // Fall back to 'N/A' if no name was supplied in the event
    let name = event.name === undefined ? 'N/A' : event.name;
    const response = {
        statusCode: 200,
        name: name
    };
    return response;
};

Let’s quickly run through an explanation of this code (which is a fairly simple, standard Lambda Function). The function handler takes a JSON object (event). We extract the name property from this object and then construct a response to close out the function. The majority of the time, your Lambda Functions should follow the same pattern:

  1. Receive data/events.
  2. Process them in a small specific way.
  3. Send on to the next step and/or close the event.

Let’s test this function using a simple test payload. On the Function Designer screen, click the dropdown next to Test and select "Configure Test Events." Enter a simple JSON object with name as the key and a string as the value, for example {"name": "Jane"}. This replicates a payload similar to what API Gateway will send. Save and run the test, and you should see a successful execution result.

We now have a fairly simple Lambda Function for processing, but at the moment there is nothing to trigger it. Let’s add an API Gateway trigger using the Add Trigger button on the Lambda Function Designer. There are a number of potential triggers that can be hooked up to a function. For our use case, find and select API Gateway, create a new API, and leave the security open. This means the URL will be publicly available, but there are other options for securing your API (see https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-control-access-to-api.html).

Once the trigger has been created, we need to deploy the API endpoint. Select API Gateway from the Services menu and select the API you created through the trigger. Under the Actions menu, select Deploy API, use the default deployment stage, and provide a description. Because we are using the query parameters from the API as a passthrough and are expecting a specifically formatted response back, we need to update our function with the following code:

exports.handler = async (event) => {
    // queryStringParameters can be null if no parameters were supplied
    const params = event.queryStringParameters || {};
    const name = params.name === undefined ? 'N/A' : params.name;
    const responseBody = {
        name: name
    };
    const response = {
        statusCode: 200,
        body: JSON.stringify(responseBody),
        isBase64Encoded: false
    };
    return response;
};
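For reference, API Gateway’s proxy integration wraps the incoming request before invoking the function, so the query string now arrives in a nested queryStringParameters object rather than at the top level of the event. A trimmed-down sketch of such an event (most fields omitted, and with a path matching whatever route your trigger created) looks like this:

{
    "path": "/ExampleNameFunction",
    "httpMethod": "GET",
    "queryStringParameters": {
        "name": "Jane"
    }
}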

Add a Queue to Store Messages

Next, we are going to take the message we receive from our API and store it on Amazon’s Simple Queue Service (SQS). Find Simple Queue Service in the Services menu and create a new queue. There are two flavors of SQS: standard and first in/first out. (The new queue window clearly explains the difference.) Because we are eventually only storing the data in DynamoDB, we don’t require guaranteed ordering or processing, so just use a Standard queue and click Quick Create.

We now need to give the function access to the queue through the Identity and Access Management (IAM) page. Under Roles, open the Lambda role we created with the function, click Attach a Policy, and create a new policy. By restricting this policy to sending messages to our specific queue, we can control things a little bit more. Save the policy and attach it to the role.
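A policy scoped this way might look something like the following sketch; the Resource ARN is built from the region, account ID, and queue name, so substitute the values for your own queue:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sqs:SendMessage",
            "Resource": "arn:aws:sqs:ap-southeast-2:548836857282:ExampleNameQueue"
        }
    ]
}

Finally, let’s update the function code one last time to write the message to the queue: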

const AWS = require("aws-sdk");
AWS.config.update({ region: "ap-southeast-2" });
// Credentials are supplied automatically by the function's execution role
const sqs = new AWS.SQS();
const sqsURL = "https://sqs.ap-southeast-2.amazonaws.com/548836857282/ExampleNameQueue";

exports.handler = async (event) => {
    // queryStringParameters can be null if no parameters were supplied
    const params = event.queryStringParameters || {};
    const name = params.name === undefined ? 'N/A' : params.name;
    const queueMessage = {
        MessageBody: name,
        QueueUrl: sqsURL
    };
    // sendMessage returns the new message's identifiers on success;
    // a failed send throws, which Lambda surfaces as an error
    const result = await sqs.sendMessage(queueMessage).promise();
    return {
        statusCode: 200,
        body: JSON.stringify(result),
        isBase64Encoded: false
    };
};

Let’s test the API now using Postman. Find the API URL in the console under the API’s Stages section. Put this URL into Postman, add a name key under the Params section, and supply a value. The API should return the identifiers that SQS generated for the queued message.
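The exact values will differ, but the response body should look something like this sketch, with real identifiers in place of the ellipses; the MessageId is the value we will use as the table’s primary key in the next step:

{
    "MD5OfMessageBody": "…",
    "MessageId": "…"
}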

Process Messages from a Queue

Now that we have our first application that stores messages on a queue, let’s create a function that will pick the message up from the queue and create an entry in a DynamoDB table. Create a function as you just did, but this time, also include the Amazon SQS permissions policy template. Next, add a new trigger and link it to the queue we created in the previous step. This allows the function to fire when a queue message comes in.

Next, let’s create the DynamoDB table where we will be storing messages. Use the default settings and set the primary key to messageId, which SQS generates when the message is added to the queue. Finally, we need to create another access policy like we did for the function that writes to the queue; this one should have permission to write to the DynamoDB table. Update the function with code along the following lines to process the message and store it in the table (the table name below is a placeholder; use the name of the table you just created):

const AWS = require("aws-sdk");
AWS.config.update({ region: "ap-southeast-2" });
// DocumentClient lets us write plain JavaScript objects to DynamoDB
const dynamo = new AWS.DynamoDB.DocumentClient();
// Placeholder table name: substitute the name of the table you created
const tableName = "ExampleNameTable";

exports.handler = async (event) => {
    // An SQS trigger delivers one or more messages in event.Records
    for (const record of event.Records) {
        const item = {
            messageId: record.messageId, // the table's primary key
            name: record.body            // the message body we queued earlier
        };
        await dynamo.put({ TableName: tableName, Item: item }).promise();
    }
};
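For reference, the SQS trigger hands the function an event containing a Records array; a heavily trimmed sketch of a single record looks like this (identifiers replaced with ellipses):

{
    "Records": [
        {
            "messageId": "…",
            "body": "Jane",
            "eventSource": "aws:sqs"
        }
    ]
}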

If you look at the Items section of the DynamoDB table, you should now be able to see the saved message data.

Conclusion

By using a range of serverless services provided through AWS, we were able to create two simple microservices that communicate with each other through a message queuing service. This pattern can be used to create a wide variety of distributed applications, particularly those that require heavy processing. For example:

  • Submitting images to an API that queues them before an intensive machine learning model processes them and the results are emailed back to the user
  • Submitting documents to a queue so that a later function can check them and generate a plagiarism score against a large database of reference documents
  • Queuing multiple events from multiple devices so they can later be batch-processed by a worker function, independent of the receiving endpoint

Any application that requires a significant amount of data processing, or that needs to absorb large bursts of events, can benefit greatly from this style of architecture. The book Designing Data-Intensive Applications, by Martin Kleppmann, provides more information on designing scalable applications.