In this world of continuous and automated everything, it seems we’ve “solved” most of the problems of deploying services. However, that only gets us part of the way. Most applications aren’t stateless all the way through, and we need to deploy updates to our databases as well.
This process would never have fit into one post, so instead we’ll split it into parts:
- Where We Are, and Where We Want to Be (this post) in which we discuss what pipeline we already had in place for the application before beginning this work, and how to piggy-back on that for the database updates.
- Packaging the Database Updates in which we script the packaging of migration scripts in a way that the build and deployment tools can handle.
- Deploying the Database Updates in which we script the actual update of the database, including a pre-update backup to be restored in case of a failure.
- Adding Automatic Rollbacks of Application Services in which we teach Octopus Deploy how to roll back to the last known successful release in case of a failed deploy. We’ll also add a retention policy for database backups.
- Summary of the series in which we examine the deployment process after applying all these changes.
Before going into details on how to set up the database deployment scheme, let me briefly describe the CI/CD pipeline we already had.
The Starting Point
This is what we had in place for the application code:
When a developer checks in code to GitHub, TeamCity builds the solution, runs all the tests and creates a NuGet package, which it pushes to Octopus Deploy. From there, we can deploy the packages to all the servers. (Currently, we have VM instances in Azure as our test environment, as well as a couple of machines on-prem for QA/staging and production.)
Looking closer at the deployment process (the last arrow in the image above), it looked something like this:
However, the pipeline above does not handle database updates at all – those are all handled manually! Somewhat simplified, this is what we’d do:
- Connect with RDP to the db machine, and figure out the starting point by looking at the list of applied migrations.
- Locally, cd into the directory of our repository project, where we had all the EF Core migrations.
- Generate an idempotent migration script with dotnet ef migrations script --idempotent <last-migration-applied-on-server>
- Connect with RDP to the machine again, and execute the script manually against that database. (Of course, we’d first take appropriate backups etc.)
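The manual steps above boil down to a couple of commands. A sketch of the local part (the project path is a hypothetical placeholder, and the elided migration name must come from the server’s __EFMigrationsHistory table):

```
# Locally, from the project that contains the EF Core migrations:
cd src/MyApp.Data                     # hypothetical project path

# Generate an idempotent SQL script starting after the last migration
# already applied on the server:
dotnet ef migrations script --idempotent <last-migration-applied-on-server> --output update.sql

# Then copy update.sql to the db server over RDP and run it against the
# database (after taking a backup, of course).
```

Because the script is generated with --idempotent, it checks the __EFMigrationsHistory table before each migration and skips any that are already applied, so accidentally running it twice is safe.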
Naturally, this was pretty error-prone. Not so much in that the scripts would fail – they almost never did. But since most deployments didn’t include a database change, it was difficult to keep track of which ones did. We often discovered that we had forgotten to deploy db updates only when something in the application broke.
We Can Do Better!
In this series, we’ll piggy-back on the pipeline above, and set up something that looks like this:
The initial steps are the same, so I’ve left them out of the figure, but there is a new flow for the database deployment.
- If there are no pending migrations, we skip the entire db flow. This can save a lot of time in the deployment process.
- Before doing anything, we back up the current state of the database.
- Next, we apply each pending migration, in order and in a transaction (so the entire migration either fails or succeeds).
- If any migration script fails, we roll back the database to the backed-up state, and tell Octopus to re-deploy the last successfully deployed version of the application. This means that if something goes wrong, no manual intervention is needed to keep the environment up (with the old configuration).
- If everything went well, we apply a retention policy to the database backups, to prevent them from taking up too much disk space if we deploy often.
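As a taste of the scripting involved, the retention step at the end can be as simple as deleting all but the N most recent backup files. Here is a minimal shell sketch (the actual series uses PowerShell on the servers; the `prune_backups` name, the .bak extension, and the directory layout are assumptions for illustration):

```shell
# prune_backups DIR KEEP
# Keeps the KEEP most recent *.bak files in DIR and deletes the rest.
# Assumes backup file names contain no spaces or newlines.
prune_backups() {
  local dir="$1" keep="$2"
  # List backups newest-first, skip the first $keep, remove the remainder.
  ls -1t "$dir"/*.bak 2>/dev/null | tail -n +"$((keep + 1))" | xargs -r rm --
}
```

For example, `prune_backups /var/backups/mydb 5` keeps the five most recent backups of that database and removes the older ones; running it after every successful deploy keeps disk usage bounded no matter how often we deploy.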
In the following parts of this series, we’ll explore how to set this up, using Octopus Deploy, the dotnet ef CLI and some PowerShell scripting to fit all the pieces together.