Why you should set up your project's CI/CD pipeline ASAP

In IT, the demand for agile application development and faster delivery increases day by day. A decade ago, a day of downtime might not have been a catastrophic event; in today’s world, where everything is connected to everything else, the hours or even minutes until a bugfix reaches production can cost thousands of dollars.

But that is not all: as users demand systems that are integrated with other systems, complexity increases and they have a harder time explaining their requirements. Delivering features faster is also a way of simply asking them, „Is this what you want?”.

Finding and resolving bugs early in the development process, and limiting the work wasted on misunderstood requirements, is crucial to keeping development costs down.

(Source: Jones, Capers. Applied Software Measurement: Global Analysis of Productivity and Quality.)

The need is therefore driven more by the business side than by technology. This post is a case study of a project we needed to transform to CI/CD, even though at first we thought it close to impossible.

Most of the hardships arose from an architecture designed to communicate with two legacy frontends (one of them a Silverlight client) and two newer frontend clients (one of them a business-grade progressive web application).

If you would like to know more about progressive web applications, check out this link:

How PWAs help in Field Service Management.


 Our initial system architecture looked like the following:


And the release process was the following:

This was the process for getting features to the staging site. To deploy the same features to production, we needed to repeat these steps once more. The release process took somewhere between two days and a week.

So when the client requested a demo, the only thing we could show was the test system. The release process needed to be reformed, so we started investigating the causes of the slow releases.

These fell into two main categories. First, we did tasks by hand that we could have automated, and we underestimated how much time they cost us (time-consuming tasks). Second, we tried to accomplish too much in one sprint (too-large changes), so we spent a lot of time keeping things compatible with the previous system and merging our existing code into develop.

Handling legacy

To avoid breaking the legacy system, we decided to rewrite some functionality from the older backend services into the Backend API, which let us release with more confidence. The API releases were mostly issue-free, since the code there was independent of the legacy code.

We also implemented automated tests for the new API and configured continuous integration for the develop branch in Azure DevOps. We decided to let go of the fear of messing up the test database: instead of maintaining local copies of it, we used the test database itself for local development on feature branches.

Code quality on develop decreased, so we started using pull requests for code review. This eliminated errors like „I forgot to push that” and „Have you pulled the latest changes?”

This caused the test site to be inconsistent during the steps marked in red: the test database was already migrated, but the code was not yet deployed. It was not optimal, but it caused fewer problems than we expected, since the changes we made to the database were mainly incremental and could be handled by Entity Framework.

Migration and seed scripts

When we started the project, the agreement was that we generate the database migration scripts with EF and share them in a Slack channel. In addition, we also had seed scripts, which filled newly created tables, such as categories or groups for existing objects.

We introduced a convention to use EF for migrations and to put the seed scripts under the .\MigrationScripts folder next to Migrations, so they are in one place, have a history, and are easily accessible.

Database updates were also a bit of a pain. The workflow was:

  1. Switch branch
  2. Pull changes
  3. Open Visual Studio
  4. Select startup project
  5. Select active project
  6. Run EntityFramework\Update-Database -script
  7. Check if the script is ok
  8. Run script on db

This might not seem like many steps, but try doing it once a week and you will start looking for a solution that does it in one click. And there is one.

Steps 1-2 can be done with git command-line commands, and steps 3-6 can be skipped by using Migrate.exe, parameterized in a batch script that can be run from anywhere. So by running this batch script, we go straight to step 7. We are now working on chaining the seed scripts onto this as well.
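
To give an idea, here is a hypothetical sketch of such a one-click helper in TypeScript (our actual tool is a parameterized batch script around Migrate.exe, and the branch, assembly and config file names below are placeholders, not the real ones):

    // update-db.ts - hypothetical sketch of the one-click database update helper.
    // The real tool is a batch script; all names and paths here are placeholders.
    import { execSync } from 'child_process';

    const branch = process.argv[2] ?? 'develop';
    const run = (cmd: string): void => {
      console.log(`> ${cmd}`);
      execSync(cmd, { stdio: 'inherit' });
    };

    // Steps 1-2: switch branch and pull the latest changes.
    run(`git checkout ${branch}`);
    run('git pull');

    // Steps 3-6: run the Entity Framework migration step with Migrate.exe instead
    // of opening Visual Studio and running Update-Database by hand.
    run('migrate.exe MyApp.Data.dll /startUpConfigurationFile=..\\MyApp.Web\\Web.config /verbose');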

Embracing automation

There were a couple of annoying mistakes that occurred repeatedly, like forgetting to increase the version in the frontend’s version file or forgetting to include a .js file in index.html. Since there were four index.html files (dev, test, staging, prod), these errors usually only came to light on the test site.

If anything, we learned from this that you should bring the dev and test environments as close to production as possible without compromising production data. So we set up bundling with gulp for the dev and test environments, with the help of Node.js and the „fs” library (File system | Node.js v16.1.0 Documentation (nodejs.org)). If you are starting a new project, use webpack instead; it will save you a lot of time.
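
As an illustration, here is a minimal sketch of the kind of fs-based build script that a gulp task can wrap; the file names and the <!-- scripts --> placeholder are made up for the example and are not our real layout:

    // bundle.ts - minimal sketch of environment-aware bundling with plain Node fs.
    // File names and the "<!-- scripts -->" placeholder are illustrative only.
    import { promises as fs } from 'fs';

    const env = process.argv[2] ?? 'dev';             // dev | test | staging | prod
    const scripts = ['src/app.js', 'src/orders.js'];  // the single list of script files

    async function build(): Promise<void> {
      // Concatenate the scripts so index.html only ever references one file.
      const parts = await Promise.all(scripts.map((f) => fs.readFile(f, 'utf8')));
      await fs.mkdir(`dist/${env}`, { recursive: true });
      await fs.writeFile(`dist/${env}/bundle.js`, parts.join('\n'));

      // Generate index.html from one template instead of keeping four copies in sync.
      const template = await fs.readFile('src/index.template.html', 'utf8');
      await fs.writeFile(`dist/${env}/index.html`,
        template.replace('<!-- scripts -->', '<script src="bundle.js"></script>'));
    }

    build().catch((err) => { console.error(err); process.exit(1); });

With a single list of script files feeding every environment, the „I forgot to add the script tag” class of errors disappears.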

We also changed the versioning scheme from Major.Minor.Build to Year.Month.Day.HHMM, but we only increase the version in the Azure DevOps build, to avoid merge conflicts on feature branches. This guarantees that the version is always up to date and unique, and that we never forget to increment it. In addition, a single look tells us when that version was released.
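
One way to do the stamping (a minimal sketch of the idea, not necessarily our exact setup) is a tiny build step that writes the build number into the frontend’s version file. Azure DevOps exposes the build number to scripts as the BUILD_BUILDNUMBER environment variable, and the pipeline’s run name can be formatted to follow the Year.Month.Day.HHMM pattern; the version.json file name below is just an example:

    // stamp-version.ts - sketch of a build step that writes the Azure DevOps build
    // number into the frontend's version file. version.json is a made-up name.
    import { promises as fs } from 'fs';

    async function stamp(): Promise<void> {
      // Build.BuildNumber is available to scripts as BUILD_BUILDNUMBER.
      const version = process.env.BUILD_BUILDNUMBER ?? '0.0.0.0000'; // local fallback
      await fs.writeFile('src/version.json', JSON.stringify({ version }, null, 2));
      console.log(`Stamped version ${version}`);
    }

    stamp().catch((err) => { console.error(err); process.exit(1); });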

Azure Pipelines (CI/CD)

With the help of Azure DevOps, we created continuous integration (CI) builds that run automatically when a branch is pushed to the repository. Each environment has its own branch as a trigger.

In addition to this, the test and staging builds have matching release pipelines, which deploy the artifacts to the corresponding app services hosted in the cloud as soon as they are available. The production environment has automatic deployment disabled for security reasons.

Integration with pull requests

Linking code to work items is also easy with Azure DevOps. You can either add hashtags to commit messages to link them to work items (and even change their states), as described here:

Close work items using commit messages - Azure Repos

Or you can go all the way to requirement-driven development and create the branch directly from Azure DevOps:

Drive Git development from user story and requirements - Azure Boards | Microsoft Docs

When doing a staging release before a demo, we need to integrate the develop branches into their staging equivalents. Why not use pull requests in Azure DevOps, with develop as the source branch and staging as the target? This links the associated user stories to the pull request, the build artifact, and the release, so there are no more questions about what is out on the site. It also avoids the problem of something not being pushed or pulled while merging.

So the steps for releasing a feature to staging are:

  1. Create branch
  2. Develop feature
  3. Push changes
  4. Create a pull request (PR)
  5. Approve and complete the PR
  6. Run the command-line db migration tool
  7. Feature is live on the site

Neat, isn’t it? 

Automated testing

One of the problems we still needed to solve was testing. The new Web API did not contain much logic, and we did not want to test EF itself, but rather that the appropriate data is returned for a given request. When you are using a REST API, most of the problems only come up in production.

Thus we created a test client that runs with the build and calls the most critical endpoints of the system, creating orders and logs in the test database. We run it as a nightly build so that the tests run against the deployed version matching the codebase (if the tests ran with each integration build, a new test might call an endpoint that has not yet been released to the app service) and so that it does not take build resources away from development during the day.
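
The sketch below shows the idea in TypeScript (the endpoint paths and payload are hypothetical, and the built-in fetch requires Node 18 or newer); a small .NET test project works just as well:

    // smoke-test.ts - sketch of a nightly client that exercises the most critical
    // endpoints against the test environment. Endpoints and payloads are made up.
    const baseUrl = process.env.API_BASE_URL ?? 'https://myapp-test.azurewebsites.net';

    async function expectOk(method: 'GET' | 'POST', path: string, body?: unknown): Promise<void> {
      const res = await fetch(`${baseUrl}${path}`, {
        method,
        headers: body ? { 'Content-Type': 'application/json' } : undefined,
        body: body ? JSON.stringify(body) : undefined,
      });
      if (!res.ok) {
        throw new Error(`${method} ${path} failed with status ${res.status}`);
      }
      console.log(`${method} ${path} -> ${res.status}`);
    }

    async function run(): Promise<void> {
      await expectOk('GET', '/api/orders');
      await expectOk('POST', '/api/orders', { customerId: 1, items: [] }); // creates a test order
      await expectOk('GET', '/api/logs');
    }

    run().catch((err) => { console.error(err); process.exit(1); });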

Conclusion

With these changes, we reduced our release time to about 1-2 hours. As a result, we could spend more time on development and less time on correcting the integration mistakes we used to make. It also allowed our users to give feedback early in the development process.

Since we added the new features through the Web API, we did not break the existing legacy functionality exposed through the WCF endpoints. Where possible, we rewrote the parts of the system that were the most painful to maintain.

We increased traceability by linking commits to work items, pull requests, builds, and releases.

In close communication with our clients (and with their approval), we sometimes took risks on what was ready to be deployed and provided high availability for hotfixes. This happened at times when we could not fully trace back the legacy functionality.

As the project grows, we see that there is still a long journey ahead of us. Still, we intend to keep this continuous-improvement mentality, since the changes we made to integration and deployment have paid off a thousandfold since the start. Most importantly, our clients are satisfied with the software we develop.