Azure pipelines are great, but one frustration is how to persist variables across phases.
For two tasks running in the same agent phase you can pass variables in a number of ways. If it is a Windows host you can do it through PowerShell or the OS; otherwise you can drop the value into a property file. Both approaches work because all tasks within a phase run on the same host.
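For the same-phase case, the simplest mechanism is Azure's documented `task.setvariable` logging command, which any script task can emit on stdout. A minimal sketch (the variable name and value here are illustrative):

```python
def set_variable_command(name, value):
    # Azure's documented "logging command" syntax: the agent scans a
    # task's stdout for lines of this form and registers the variable
    # for later tasks in the same phase (readable as an env var, e.g.
    # MYVAR for a variable named myVar).
    return f"##vso[task.setvariable variable={name}]{value}"

# Printing the command is what actually sets the variable on the agent.
print(set_variable_command("myVar", "hello-from-task-one"))
```

This only helps within a single phase, which is exactly the limitation the rest of this post works around.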
If you have two tasks running in different phases you have a much bigger problem, because each phase could run on a different agent or, in the case of a server phase, not run on an accessible host at all.
Even if you could lock a release to an agent, and even if you trusted Azure to always honour that, frequently you wouldn't want to. What if you have four agents in total and three are paused at a "manual intervention"? Three of your four agents would be out of use, doing nothing but waiting.
Azure DevOps solution
Azure does have a solution to this:
The Zoyinc solution
At a high level my proposed solution is simple, it’s the detail where it gets tricky.
Using Azure API calls you can update certain details of a release. Note that I am talking here about a "Release", not a "Release Pipeline Definition": a "Release" is an instance of a definition. For example, you can update global environment variables and change the details of a stage; you just can't change the details of the currently running stage.
My solution is to persist variables as “Pipeline variables” which have a scope of “Release” – global if you like. These can be updated and even added at any time using API calls. As I mentioned earlier “the devil is in the detail”.
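As a sketch of the API round trip (the organisation, project, release id and PAT are placeholders, and `api-version=5.0` is what the Releases - Update Release call accepted at the time of writing):

```python
def persist_release_variable(release, name, value):
    # Add or update a release-scoped ("Pipeline") variable in the JSON
    # body returned by GET .../release/releases/{id}.
    release.setdefault("variables", {})[name] = {"value": value}
    return release

def update_release(org, project, release_id, pat, name, value):
    # "requests" is imported lazily here because the hosted agents in
    # this demo only install it at run time.
    import requests
    url = (f"https://vsrm.dev.azure.com/{org}/{project}/_apis/"
           f"release/releases/{release_id}?api-version=5.0")
    auth = ("", pat)  # the PAT goes in the password slot of basic auth
    # 1. Fetch the current release instance (not the definition).
    release = requests.get(url, auth=auth).json()
    # 2. Add or update the release-scoped variable in the body.
    persist_release_variable(release, name, value)
    # 3. PUT the whole body back; the new value is then visible to
    #    every later phase and stage of this release.
    requests.put(url, json=release, auth=auth).raise_for_status()
```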
The use cases that come up for persisting variables involve passing values between disparate tasks running in different phases, reading a user's comments in a "Manual Intervention", or skipping certain phases/tasks based on complex conditions. If you put your value into a pipeline variable it is visible to all tasks and phases, and can be used in the "Control Options" of tasks and phases.
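For example, a custom condition in a phase's "Control Options" could gate on one of these pipeline variables (the variable name here is purely illustrative):

```
and(succeeded(), ne(variables['GLOBALVAR_DEV_skipDeploy'], 'true'))
```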
The fiddly bits
A really important consideration is that a release could be deploying to multiple stages concurrently. Because we use global variables (scope = 'Release'), we need different variables for each stage.
To put all the pieces together and ensure I have caught all the fiddly bits I have created a working demo release pipeline “Persisting Azure Pipeline Variables”.
For simplicity, and to make this demo portable, it is designed to work with the hosted agents ("Hosted Windows 2019 with VS2019"). These don't come with the Python module I need, "requests", so it has to be installed on every run, which has a noticeable impact on deploy timings.
Best practice would be for the scripts to come from Git, but to make the demo simpler and more transparent I have put them in as secure files.
This post is not a how-to on Azure DevOps pipelines, so I am not going to explain every detail of how pipelines work; you will need to look elsewhere if you are new to Azure.
These are the files that will allow you to set up an instance of this demo in your own Azure project.
Health check Python script
Process code deployment approval comments Python script
Export of the demo release definition
Persisting Azure Pipeline Variables 20190402.json (zip)
Perform Health Checks
As mentioned earlier, all our variables are stage specific and are named "GLOBALVAR_<Stage>_<var name>". So, for example, the variable to hold the manual intervention comment is:
It's a bit long but it's functional.
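Building that name in a script is straightforward. This helper is a sketch (the function name is mine, not part of the demo); inside a running release the current stage is available via the predefined `Release.EnvironmentName` variable, which the agent exposes to scripts as the `RELEASE_ENVIRONMENTNAME` environment variable:

```python
import os

def stage_variable_name(var_name, stage=None):
    # Build the stage-scoped name GLOBALVAR_<Stage>_<var name>.
    # If no stage is passed, read the current stage from the
    # predefined RELEASE_ENVIRONMENTNAME environment variable.
    stage = stage or os.environ["RELEASE_ENVIRONMENTNAME"]
    return f"GLOBALVAR_{stage}_{var_name}"

print(stage_variable_name("approvalComment", stage="DEV"))
# -> GLOBALVAR_DEV_approvalComment
```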
Using the process I have outlined, we need variable names that match the stage name. It would be very easy to get this wrong and end up with a totally dysfunctional pipeline. So I have written a "Health Check" script that runs right at the beginning. Its primary purpose is to check that any global variables used in any of the following:
- Last Chance Code Deployment Approval Task
- Any custom condition variable expression
- Manual intervention instructions
align with the actual stage the item belongs to. If there are any mistakes the build fails and gives you all the details needed to fix the problem. This health check runs on every deploy and checks all stages, not just the current one, so it will test PROD before you get to it.
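The core of such a check can be sketched in a few lines. This is a simplified illustration (the function name and the example condition are mine, not lifted from the demo script): extract the stage embedded in each GLOBALVAR reference and compare it to the stage the item actually lives in.

```python
import re

# Matches GLOBALVAR_<Stage>_<var name>; assumes stage names contain
# no underscores, as in the demo's naming convention.
VAR_REF = re.compile(r"GLOBALVAR_([A-Za-z0-9]+)_[A-Za-z0-9_]+")

def misaligned_refs(text, stage):
    # Return the stage name of every GLOBALVAR_<Stage>_<name> reference
    # in `text` that does not match the stage the item belongs to.
    # An empty list means the item is healthy.
    return [s for s in VAR_REF.findall(text) if s != stage]

# A condition on a DEV task that mistakenly references a UAT variable:
cond = "ne(variables['GLOBALVAR_UAT_skipDeploy'], 'true')"
print(misaligned_refs(cond, "DEV"))   # -> ['UAT']
```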
The following describes how to take the files on this post and create a pipeline.
Import the pipeline json file
Import the pipeline file and you will notice some things you need to do:
On each stage, for each of the "Run on agent" phases ("Perform Health Checks", "Process Code Deployment Approval Phase" and "Process Last Chance Code Deployment Approval Phase"), you need to set the "Agent pool" to "Hosted Windows 2019 with VS2019".
Now upload both Python scripts in the “Secure Files” library:
Run the pipeline
Now that you have finished configuring the pipeline you can go ahead and do some deploys: