In the modern world, we all like automating repeatable tasks; one of them is the continuous delivery of Sitecore solutions. This post gives you a generic checklist of what to consider when building your Sitecore deployment pipeline for CM and CD servers, decoupled from any specific tool like Azure DevOps, TeamCity, Jenkins, or Octopus. (Custom xConnect implementations and Solr server configurations are out of scope for this article.) Let’s agree on the following requirements:
We have 3 environments: DEV, UAT, and PROD.
When we are talking about deployment of a Sitecore solution, it means the deployment of:
My suggestion is to handle the two resources differently. The Sitecore base package only changes when the infrastructure changes or during a Sitecore upgrade, which happens infrequently. On the other hand, I suggest using parameters for configurations that differ on each staging environment, e.g., connection strings. By using parameters, you can reuse the same Sitecore package across staging environments. I also suggest using a similarly scaled setup for each staging environment to stay as close to PROD as possible. For example, if PROD is a scaled environment with multiple content delivery servers, use a similar setup on DEV and UAT, just with fewer resources and fewer content delivery servers.
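As a sketch of the parameter idea, the step below renders one shared connection-string template into an environment-specific file. The template name, the `#{...}` token format, and the variable names are illustrative assumptions, not something Sitecore prescribes:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative template; in a real setup this is part of the deployment package.
cat > ConnectionStrings.config.template <<'EOF'
<connectionStrings>
  <add name="master"
       connectionString="Data Source=#{SqlHost};User ID=#{SqlUser};Password=#{SqlPassword};Database=Sitecore.Master" />
</connectionStrings>
EOF

# In a real pipeline these values come from the tool's variable/secret store.
: "${SQL_HOST:=localhost}"
: "${SQL_USER:=sa}"
: "${SQL_PASSWORD:=changeme}"

# Replace the tokens to produce the file for the current staging environment.
sed -e "s/#{SqlHost}/${SQL_HOST}/g" \
    -e "s/#{SqlUser}/${SQL_USER}/g" \
    -e "s/#{SqlPassword}/${SQL_PASSWORD}/g" \
    ConnectionStrings.config.template > ConnectionStrings.config

echo "Rendered ConnectionStrings.config for host ${SQL_HOST}"
```

The same package is then promoted unchanged from DEV through UAT to PROD; only the variable values differ per stage.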
This package usually consists of the following:
The first three types of resources in the list above are static from a deployment perspective, meaning they should be deployed exactly as they were built.
Sitecore configuration patches can be static – the same values on each environment – or dynamic – using different values based on the environment. Static configuration patches are simply part of version control, e.g., patches for custom processors and link providers. Dynamic configuration patches contain values changed by the deployment and can differ per server, role, or environment, e.g., site configurations. There are multiple ways to achieve this:
In both cases, you can decide where to store these variables of the different stages. You have the following options:
The developer access levels I mentioned above are typical limitations for these options, but the actual access level depends only on the configuration of your solution.
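Whichever store you pick, the pipeline only needs to load the right set of values for the current stage. Below is a minimal sketch using version-controlled key=value files; the file names and keys are assumptions for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

ENVIRONMENT="${1:-dev}"   # dev | uat | prod

# Illustrative per-stage variable files; real ones live next to the pipeline
# definition (or are replaced by the release tool's variable groups or a key vault).
mkdir -p variables
printf 'SITE_HOSTNAME=dev.example.local\n' > variables/dev.env
printf 'SITE_HOSTNAME=uat.example.local\n' > variables/uat.env
printf 'SITE_HOSTNAME=www.example.com\n'   > variables/prod.env

# Load the values for the selected stage and use them in later deployment steps.
# shellcheck source=/dev/null
source "variables/${ENVIRONMENT}.env"
echo "Deploying site configuration for host: ${SITE_HOSTNAME}"
```

Secrets such as passwords should still come from the release tool's secret store or a vault rather than from files in version control.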
This package also includes serialized items that need to be synced to the Sitecore databases (core or master) when a deployment happens. The most popular available tools to serialize items are Sitecore Content Serialization (from Sitecore 10), Unicorn, and TDS. We separate them into 3 groups:
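With Sitecore Content Serialization, the item sync can be a single pipeline step. A hedged sketch: it assumes the Sitecore CLI is installed on the agent and that a serialization module named `Project.Website` exists in `sitecore.json` (both are assumptions); on agents without the CLI it stays a dry run:

```shell
#!/usr/bin/env bash
set -euo pipefail

SYNC_CMD="dotnet sitecore ser push --include Project.Website"

# Only run the real sync when the CLI and a serialization config are present.
if [ -f sitecore.json ] && command -v dotnet >/dev/null 2>&1; then
  $SYNC_CMD
else
  # Dry run; tee keeps a record of the skipped step in the build log.
  echo "would run: ${SYNC_CMD}" | tee ser-push.log
fi
```

The other tools have equivalents for this step: Unicorn exposes a sync endpoint you can call from the pipeline, and TDS produces update packages that you install on the target instance.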
This is an important topic to discuss because I have never seen a Sitecore project without any Sitecore Support patches. These are bug fixes provided by the Sitecore Support team, which can consist of the following files and items:
You have two ways to include these changes in your release pipeline.
One option is to include them in your release pipeline, separately from the custom implementation. The advantage of this approach is that the DevOps team has complete control over the Sitecore Support patches. The disadvantage, however, is that the patches are not transparent to the Sitecore developer team: developers don’t have the same patches installed locally, or a separate custom deployment script is needed to deploy the patches locally.
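The pipeline-managed option can be as simple as a copy step that lays the patch files over the webroot after the main package is deployed. The folder names and the patch reference number below are made up for illustration:

```shell
#!/usr/bin/env bash
set -euo pipefail

PATCH_SOURCE="support-patches"   # checked in next to the pipeline definition
WEBROOT="${WEBROOT:-./webroot}"  # target Sitecore webroot

# Illustrative patch payload; a real Sitecore Support patch ships its own
# DLLs and config files (the 123456 reference number is hypothetical).
mkdir -p "${PATCH_SOURCE}/bin" "${PATCH_SOURCE}/App_Config/Include/zSupport"
touch "${PATCH_SOURCE}/bin/Sitecore.Support.123456.dll"
touch "${PATCH_SOURCE}/App_Config/Include/zSupport/Sitecore.Support.123456.config"

# Overlay the patch files onto the deployed webroot.
mkdir -p "${WEBROOT}"
cp -R "${PATCH_SOURCE}/." "${WEBROOT}/"
echo "Deployed Sitecore Support patches to ${WEBROOT}"
```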
My suggestion is to handle these in a separate project, whether you use Helix principles or not. My preferred way is to include these files and serialized items in your custom Sitecore Visual Studio solution and let MSBuild publish them. This way, the Sitecore Support patches are transparent to the developers, and they can easily deploy the same patches locally without any custom publish script.
In some cases, it makes sense to mix the two approaches, and the team can decide where to put a Sitecore Support patch based on what it contains. For example, infrastructure-related patches can go into the release pipeline, while functionality-related patches belong in the custom implementation solution.
As the content databases (master and web) can grow big, especially in a PROD environment, it is also crucial to handle database backups. Massive databases need a lot of free disk space on the database server for backups, but this is rarely an issue because disk space is relatively cheap. Whatever you decide for the other environments, you most definitely need backups on PROD so you can roll back to a stable state of the database at any time in case of database corruption.
The question is, when to do the backups?
Regular nightly backups are recommended if content changes are frequent – hundreds of item changes per day by editors or automated synchronization. But do you need backups as part of the release pipeline? I say no, because creating backups of big content databases with multiple sites can take hours. Creating backups would therefore become the bottleneck of your release pipeline and would stretch the PROD deployment from minutes to hours.
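Kept outside the pipeline, the nightly backup is just an ordinary scheduled job. An illustrative crontab entry for the database server – the server name, database name, and backup path are assumptions, and on Windows the same T-SQL would typically run from a SQL Server Agent job instead:

```shell
# 02:00 nightly full backup of the master database (illustrative values)
0 2 * * * sqlcmd -S sql.internal -Q "BACKUP DATABASE [Sitecore.Master] TO DISK = N'/backups/Sitecore.Master.bak' WITH COMPRESSION, INIT"
```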
Then how to handle rollback?
It’s all about timing your PROD deployment. If you have automated nightly database backups, schedule your PROD deployment for the morning rather than the afternoon to lose less content in case of a rollback. As a rollback should happen infrequently, handle it manually when it occurs instead of overcomplicating the deployment of your Sitecore solution.
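If you want a guard in addition to timing, a small gate at the start of the PROD stage can refuse to deploy when the latest backup is stale. The backup directory and the 12-hour window below are assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

BACKUP_DIR="${BACKUP_DIR:-./backups}"
MAX_AGE_HOURS=12

# Illustrative backup file; in reality the nightly job on the DB server writes it.
mkdir -p "$BACKUP_DIR"
touch "${BACKUP_DIR}/Sitecore.Master.bak"

# Find the newest backup and compute its age in seconds (GNU stat, BSD fallback).
latest="$(ls -t "${BACKUP_DIR}"/*.bak | head -n 1)"
mtime="$(stat -c %Y "$latest" 2>/dev/null || stat -f %m "$latest")"
age_seconds=$(( $(date +%s) - mtime ))

if (( age_seconds > MAX_AGE_HOURS * 3600 )); then
  echo "Latest backup ${latest} is older than ${MAX_AGE_HOURS}h -- aborting PROD deploy" >&2
  exit 1
fi
echo "Backup ${latest} is fresh enough, proceeding with deployment"
```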
Fortunately, Sitecore solution deployments can be standardized because the deployment processes are similar across all Sitecore projects. Hopefully, this guideline helps you design your automated deployment pipeline and integrate it into whichever tool you choose.
Are you interested in how to do a Zero-Downtime deployment using Azure Kubernetes Service?
Do you need help to automate your Sitecore release pipeline? Contact us!