Comparing Amazon and Azure serverless services pt. 2: Deploying them with the Serverless framework

Introduction

In my previous blog post, I shared the experience of a project that used AWS Lambda functions as a backend and a React SPA uploaded to an S3 bucket and delivered with Amazon CloudFront. In that post, we went through the positive and negative impacts of this technology stack on development. One more requirement was known at the start of the project but left out of that post because it was not relevant there – and it becomes hugely important now: if we opted for the AWS Lambda route – and we did – the solution had to be deployable with the open-source Serverless Framework. The aim of this post is to show how to deploy to these cloud providers while adding configuration and changing code as little as possible.

Infrastructure as code

Serverless framework at a glance

Serverless was at its 2nd major version when we were building the project, and its slogan was that you could easily deploy your application with the framework to most cloud platforms – including AWS, Azure and GCP. This January, the 3rd major version was released, and the motto changed to cover AWS only: ”All-in-one development & monitoring of auto-scaling apps on AWS Lambda”. I can totally agree with this change: it is a great tool for AWS, but there are better options for the other cloud platforms – for example, for Azure. In this blog post, we are going to focus on the deployment part and leave out monitoring.

For deployment, the heart of the solution is the serverless.yml, where you list your functions, their resources and other properties of the solution – like the runtime, region and environment variables for the functions. You can use variables defined within the template file as well as variables loaded from an external config file. This way, you can reuse the same configuration with different values for different environments. The sample code does not use the latter option, but it is documented here.
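As a minimal sketch of how such per-environment variables might be wired up (the config file names, the `getItems` function and the `dbName` key are hypothetical):

```yaml
# serverless.yml (excerpt) – values resolved per stage from an external config file
provider:
  name: aws
  stage: ${opt:stage, 'dev'}   # stage passed on the CLI, defaults to 'dev'

custom:
  # loads e.g. config.dev.yml or config.prod.yml depending on the stage
  settings: ${file(./config.${self:provider.stage}.yml)}

functions:
  getItems:
    handler: src/handlers/items.get
    environment:
      DB_NAME: ${self:custom.settings.dbName}   # value differs per environment
```

Running `serverless deploy --stage prod` would then pick up `config.prod.yml` without touching the template itself.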

Deploying to AWS

Creating this serverless.yml is a really easy job if you know AWS CloudFormation. The syntax is mostly the same as that of a CloudFormation yml file, since Serverless uses CloudFormation under the hood to make the deployment. You also need the AWS CLI installed, and you need to add an account to your credentials file as described in this document. The profile property identifies which credentials in that file will be used for the deployment.
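A minimal sketch of the provider section with a named profile (the profile name, region and runtime below are placeholder values):

```yaml
# serverless.yml (excerpt) – 'profile' must match a named entry
# in your local AWS credentials file (~/.aws/credentials)
provider:
  name: aws
  runtime: nodejs14.x          # example runtime
  region: eu-west-1            # example region
  profile: my-deploy-profile   # hypothetical profile name
```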

AWS CloudFormation in a nutshell

In this particular solution, we can add and configure the functions (set the HTTP verb and point them to the correct relative URL), configure the VPC based on your AWS account's subnet IDs and security group (as mentioned in the previous post, the Lambda and the DocumentDB instance need to be in the same VPC), and add the resources – in this case, the Amazon DocumentDB instance with all its related resources, such as the subnet group, parameter group, database cluster and database instance. In the real application, we were also able to add all the other resources: a Cognito user pool for authentication (set up as an authorizer for the Lambda functions), S3 buckets for both the static resources and the frontend application, and a CloudFront distribution.
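The pieces above can be sketched in one serverless.yml excerpt – all IDs, names and the password below are placeholders, and the resources section is plain CloudFormation:

```yaml
# serverless.yml (excerpt) – HTTP-triggered function, VPC config and DocumentDB resources
provider:
  name: aws
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef0        # placeholder security group
    subnetIds:
      - subnet-0123456789abcdef0    # placeholder subnets, shared with DocumentDB
      - subnet-0123456789abcdef1

functions:
  getItems:
    handler: src/handlers/items.get
    events:
      - http:                        # Serverless creates the API Gateway for this
          path: items
          method: get

resources:
  Resources:
    DocDbSubnetGroup:
      Type: AWS::DocDB::DBSubnetGroup
      Properties:
        DBSubnetGroupDescription: Subnets for the DocumentDB cluster
        SubnetIds:
          - subnet-0123456789abcdef0
          - subnet-0123456789abcdef1
    DocDbCluster:
      Type: AWS::DocDB::DBCluster
      Properties:
        MasterUsername: docdbadmin
        MasterUserPassword: change-me          # use a secret in a real setup
        DBSubnetGroupName:
          Ref: DocDbSubnetGroup
    DocDbInstance:
      Type: AWS::DocDB::DBInstance
      Properties:
        DBClusterIdentifier:
          Ref: DocDbCluster
        DBInstanceClass: db.t3.medium          # example instance class
```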

The real power of Serverless, in this case, is that it creates a couple of useful additional resources for each AWS Lambda function – if you wanted to achieve this with plain CloudFormation, you would need to write a lot of code; see this example. Serverless creates an API Gateway for the HTTP-triggered functions (which is required to expose them to the Internet) and CloudWatch logging that can be used for debugging and troubleshooting.

The only drawback is that you first need to run the deployment, which creates the CloudFormation stack with all the resources in it, then grab the connection string of the DocumentDB instance from the AWS Console, set it as an environment variable, and deploy the solution again. To summarize, the first deployment is only semi-automatic because it contains a manual step. A more sophisticated solution is to store the database connection string in AWS Secrets Manager – you can find out how to do that with a CloudFormation template here – and, as a reminder, you can use that template in the resources section of the serverless.yml as is.
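One hedged option for avoiding the second deployment is a CloudFormation dynamic reference to Secrets Manager, resolved at deploy time – the secret name `docdb-connection` and its `uri` key below are hypothetical:

```yaml
# serverless.yml (excerpt) – environment variable resolved from Secrets Manager
functions:
  getItems:
    handler: src/handlers/items.get
    environment:
      # CloudFormation dynamic reference; no manual copy-paste from the Console
      DB_CONNECTION_STRING: '{{resolve:secretsmanager:docdb-connection:SecretString:uri}}'
```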

The complete serverless.yml configuration file can be found here.

Deploying to Azure

Deploying the solution to Azure was a bumpy ride. First, you need to realize that the Azure provider is only available for Serverless as a plugin NPM package, which foreshadows the framework's limited functionality for Azure. Just like with AWS, you need the Azure CLI installed, and you need to log in with the az login command in your command-line session before you can deploy.
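The plugin is wired in through the plugins section; a minimal sketch (the region and runtime are example values – check the plugin's documentation for the supported ones):

```yaml
# serverless.yml (excerpt) – the Azure provider ships as the
# serverless-azure-functions NPM plugin
service: demo-api

provider:
  name: azure
  region: West Europe          # example region
  runtime: nodejs12            # example runtime, see plugin docs

plugins:
  - serverless-azure-functions
```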

The functions section is similar to the one for AWS, though it naturally contains provider-specific properties. After the deployment, the only additional resource created is an Application Insights instance, which is nice for monitoring and troubleshooting. No Azure API Management instance is created, as it is not required to trigger an HTTP Azure Function.
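A sketch of an HTTP-triggered function with the plugin's provider-specific properties (the handler path is hypothetical, and the exact event shape may differ between plugin versions):

```yaml
# serverless.yml (excerpt) – HTTP-triggered Azure Function
functions:
  getItems:
    handler: src/handlers/items.get
    events:
      - http: true
        methods:
          - GET
        authLevel: anonymous   # exposed directly, no API Management needed
```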

Unfortunately, the serverless.yml cannot contain a resources section for Azure, so you cannot create the dependencies of your application with Serverless alone. There is an ARM template property, but it does not extend the declared functions – it replaces them – so you have to choose whether the deployment includes your dependencies or your functions. Of course, you can describe your functions in the ARM template alongside their dependencies, but then Serverless does not really help much, because the whole infrastructure lives in the ARM template.
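For illustration, the ARM template hook looks roughly like this – treat the shape as a sketch based on the plugin's armTemplate option, with a hypothetical template file and parameter:

```yaml
# serverless.yml (excerpt) – points the deployment at a custom ARM template;
# note that this REPLACES the declared functions rather than extending them
provider:
  name: azure
  armTemplate:
    file: arm-template.json    # hypothetical template describing the dependencies
    parameters:
      location:
        type: string
        defaultValue: West Europe
```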

Once I had created the CosmosDB instance and set its connection string as an environment variable, the demo application started working as expected.
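Passing that connection string through could look like the sketch below, where the variable name is hypothetical and the value is read from the local shell environment at deploy time:

```yaml
# serverless.yml (excerpt) – provider-level environment variables become
# app settings on the deployed Function App
provider:
  name: azure
  environment:
    COSMOSDB_CONNECTION_STRING: ${env:COSMOSDB_CONNECTION_STRING}
```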

Conclusion

While Azure provides a more seamless solution from a development perspective, if you need to use the Serverless Framework for deployment, AWS is the absolute winner – and that is no coincidence.

The complete working solution can be found in the same repository as in the other blog post but in a different branch - see it here.