Choose wisely between Pipelines, Stages, Jobs and Tasks

When modeling pipelines with Go, always remember that, subject to availability of agents:
  1. Multiple instances of a pipeline can run simultaneously. This is common if we have a pipeline that takes half an hour to complete and there are multiple commits to the pipeline's material during that interval.
  2. Stages of a pipeline-instance will execute in sequence.
  3. Jobs within a stage will be executed in parallel. If you have steps that must run in sequence, model them as tasks within a single job, not as multiple jobs. Alternatively, you can distribute those tasks over multiple stages, each with a single job. Why? For one, it gives better visibility on the dashboard and pipeline activity page: stage progression is depicted visually, whereas task progression within a job appears only as an overall progress bar (because agents only report back to the server on job completion, not after each task). Perhaps more importantly, it gives you finer-grained re-run ability. It isn't possible to re-run individual tasks, only jobs, so modeling a sequence of activities as a number of single-job stages lets us pick and choose what to re-run, resulting in faster feedback at a micro level.
  4. Note, however, that refactoring a multi-task job into multiple jobs means the tasks may not all run on the same agent. A job is the unit of agent activity - so if you care about agent affinity (e.g. not fetching materials on multiple agents), a single job is the way to go.
On the other hand, do make full use of parallelizability. Try to refactor pipelines with many stages into multiple pipelines. Try to partition independent compilation or testing activity into multiple jobs. Go is quite powerful and flexible - make sure you put it to work.
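To make the guidelines above concrete, here is a minimal sketch of what such a model might look like in Go's XML configuration. The pipeline, material URL, and make targets are invented for illustration: the first two stages each hold a single job so they can be re-run independently, while the test stage holds two jobs that Go will run in parallel on separate agents.

```xml
<pipeline name="example-build">
  <materials>
    <git url="https://example.com/repo.git" />
  </materials>
  <!-- Stages run in sequence; single-job stages give stage-level
       visibility and independent re-runs for each step. -->
  <stage name="compile">
    <jobs>
      <job name="compile">
        <tasks>
          <exec command="make" args="compile" />
        </tasks>
      </job>
    </jobs>
  </stage>
  <stage name="package">
    <jobs>
      <job name="package">
        <tasks>
          <exec command="make" args="package" />
        </tasks>
      </job>
    </jobs>
  </stage>
  <!-- Jobs within a stage run in parallel, potentially on
       different agents - good for independent test suites. -->
  <stage name="test">
    <jobs>
      <job name="unit-tests">
        <tasks>
          <exec command="make" args="unit-tests" />
        </tasks>
      </job>
      <job name="integration-tests">
        <tasks>
          <exec command="make" args="integration-tests" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
```

If compile and package had to share workspace state on one agent, you would instead fold them into a single job as tasks, trading re-run granularity for agent affinity.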


  1. One thing I've noticed with stages is that there seems to be a pause of about 10s between stages. When you have tasks that only take a few seconds, it's annoying to wait 10s just for the stage to begin executing.

    Other than that, good post. I like to see people take as much care with their pipeline model as with their domain model.

  2. Hmm, I checked with a dev and it seems that idle agents ping the server for work at an interval of 5 secs. This may explain the behaviour you see. So yes, if your job finishes well under a minute, there isn't much to gain in terms of fine-grained re-run ability by breaking it up.

  3. This is my scenario: a variable number of servers in a given environment (an auto-scaling array of servers). I can easily get the list of active servers using some API. How would you suggest modeling such a pipeline?

    1. SLL,

      Are you using Go to orchestrate a deployment to your servers?
