How I Finally Solved Bitbucket's CICD After 2 Years Of Trying

Sep 22, 2024

Introduction

Yes, this is a success story. No, it was not easy.

One of my clients asked me to implement a company-wide CICD process in Bitbucket, as they rely heavily on Atlassian. You know, Jira, Confluence, and the rest ;) Coming from my previous experience with Gitlab, I jumped straight in. I expected a better experience; after all, Atlassian is used a lot in big companies. Surely they figured out all the meanders of CICD processes, right? RIGHT?!

To my surprise, that was not the case at all. I struggled at almost every step: parametrizing jobs with env variables, separating environments, restricting branches. It was a nightmare. How are we supposed to separate Dev and Prod environments without weird workarounds and hacks?

Bitbucket Deployments and OpenID Connect integration

To the rescue (or so I thought) came Bitbucket Deployments and OpenID Connect. Deployments looked really promising at first glance:

  1. Branch restrictions
  2. Separation of env variables
  3. Allowing only admins to deploy to a specific environment (Premium-only though)
  4. Each of the environments has a separate OIDC context (perfect for auth restrictions)

That felt like a jackpot. I started coding with enthusiasm again. To this day I think the OIDC integration is great; I hate storing credentials in 3rd party services. Deployments, on the other hand, left me strongly disappointed. Let me show you a couple of tickets from Bitbucket’s own Jira:

Multi-step Deployments: https://jira.atlassian.com/browse/BCLOUD-18261

Support manual triggers for jobs in stages: https://jira.atlassian.com/browse/BCLOUD-22223

As an engineer working a lot with Clouds like AWS or GCP, I can’t imagine not using IaC tools like terraform. Also, my experience has been: never let your production pipeline just run A-Z automatically. I hate fully automated deployments. I want control. And it looks like I’m not the only one. Yet the multi-step deployments request was first opened in 2019, five years ago as of this writing, and the functionality was never delivered. Over 1000 upvotes means at least 1000 unhappy developers trying to use a tool that just can’t get the work done. Solving one problem only to find another blocking you.

As for me, I had to be honest with my client. The verdict was:

I can’t fulfill your requirements for usability and security with Bitbucket pipelines.

Fast forward to 2024 and Dynamic Pipelines

I couldn’t put my finger on why Atlassian fails to fix what I would think are the simplest of problems. Then, a few months back, I saw this:

https://bitbucket.org/blog/introducing-dynamic-pipelines-a-new-standard-in-ci-cd-flexibility

Could that be it? I decided to give it another try. One graphic sold the whole idea to me immediately:

If I set it up right, I can enforce whatever strategy I want on a whole workspace. No matter what any project team does, I can enforce the final strategy just before the pipeline is executed. This is exactly what I would expect from an enterprise standard.

There’s not much documentation released online so far, and not many tutorials, so I had to go in blind. But my idea was simple: SOLVE DEPLOYMENTS. This is not a promotional post for Dynamic Pipelines, but I wouldn’t be fair if I didn’t mention what this functionality can actually do. It’s a Premium feature after all, so we expect great things from it ;) In a nutshell, it can modify the pipeline’s YAML (actually JSON) definition before it gets processed. Pretty neat, right? So what can you do with this? Among the most useful:

  • adding security scan checks at the beginning
  • preventing steps - if something must not be executed, you can raise an error when you detect it
  • enforcing runs-on for any step - this was crucial for my setup
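To make the idea concrete, here is a minimal sketch of the pattern: a function that receives the pipeline definition as JSON and returns a modified one. The function name and the exact JSON shape are my own illustrative assumptions, not the exact Forge/Bitbucket schema.

```javascript
// Hypothetical sketch: transform a Bitbucket pipeline definition (JSON)
// before it runs. Field names here are illustrative assumptions.
function transformPipeline(pipeline) {
    const steps = pipeline.steps ?? [];

    // Prevent steps: fail fast if a step tries to run a blocked command.
    for (const { step } of steps) {
        if ((step.script ?? []).some((cmd) => cmd.includes('terraform destroy'))) {
            throw new Error(`Step "${step.name}" is not allowed to run terraform destroy`);
        }
    }

    // Add a security scan check at the very beginning.
    const scanStep = {
        step: {
            name: 'Security scan (injected)',
            script: ['./run-security-scan.sh'],
        },
    };
    return { ...pipeline, steps: [scanStep, ...steps] };
}
```

The same two moves (inspect, then rewrite or reject) cover most of the enforcement use-cases listed above.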

I was pretty sure that the overall schema for bitbucket-pipelines.yml would be enough for most use-cases. What is not possible with Deployments (manual triggers, parallel jobs, etc.) is fully supported for normal pipelines. So what I needed was simply to provide secure environments. Of course, it was not completely straightforward: I first had to learn how to implement Bitbucket Dynamic Pipelines using Forge Apps.

Solution

In this article, I will not give you a full code recipe, but I will tell you what steps I took to get there.

Here’s the recipe for Bitbucket with AWS integration, assuming you have environment separation (DEV and PROD accounts).

AWS/Bitbucket preparations:

  1. Create separate runners (EC2 instances if you use AWS, a Kubernetes cluster, up to you). What is important: you need to be able to pin down the CIDR of those runners; you’ll need it later. Assuming you have separately configured VPCs, you can use the VPCs’ CIDRs in the next steps.
  2. Create a strategy for the branches-to-environments relation. An example: anything on master (or main), hotfix/, or release/ can run on the PROD environment; anything else can’t.
  3. Create IAM roles for Bitbucket to assume in both environments. Here’s the critical part: restrict the Trust Policy of each role to its corresponding VPC (where the runner resides) or directly to the CIDR of your runners.
  4. This way you prevent unauthorised access from anywhere but your dedicated runners. For example, if your VPC CIDR is 10.0.0.0/24, then your role’s Trust Policy would look somewhat like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::your-account-id:oidc-provider/api.bitbucket.org/2.0/workspaces/your-workspace/pipelines-config/identity/oidc"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "10.0.0.0/24"
                }
            }
        }
    ]
}
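The branch strategy from step 2 can be sketched as a tiny helper for later use in the Dynamic Pipeline. The function name and the exact branch prefixes are just the example convention above; adjust them to your own.

```javascript
// Map a branch name to a target environment, per the example strategy:
// master/main, hotfix/*, release/* may deploy to PROD; everything else to DEV.
function environmentForBranch(branch) {
    const prodBranches = ['master', 'main'];
    const prodPrefixes = ['hotfix/', 'release/'];
    if (prodBranches.includes(branch) || prodPrefixes.some((p) => branch.startsWith(p))) {
        return 'PROD';
    }
    return 'DEV';
}
```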

Dynamic Pipelines configuration:

  1. In your Dynamic Pipelines setup, detect which branch you are on. Keep that context.
  2. Whenever a job defines oidc: true, add the OIDC configuration for the corresponding account. It’s pretty verbose (and ugly), so I recommend adding this in Dynamic Pipelines, for the sake of all teams’ sanity:
        // envPrefix, region and roleArn come from your per-environment config.
        // Note the backticks: these values are interpolated in JS, while
        // $BITBUCKET_STEP_OIDC_TOKEN is left for the shell to expand at runtime.
        const oidcSetupSteps = [
            `echo "Setting up ${envPrefix} OIDC credentials"`,
            `export AWS_REGION=${region}`,
            `export AWS_DEFAULT_REGION=${region}`,
            `export AWS_ROLE_ARN=${roleArn}`,
            'export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token',
            'echo $BITBUCKET_STEP_OIDC_TOKEN > $AWS_WEB_IDENTITY_TOKEN_FILE'
        ];
    
  3. Validate runs-on - this is critical. Make sure you only allow the proper labels for your runners. We want dev jobs to land on the dev runner, and prod jobs on the prod runner.
  4. If you need to run DEV jobs on protected branches (like master), you need one more tweak: figure out a strategy to differentiate those. Unfortunately, Bitbucket does not provide any custom labels/tags, so you need to get creative. But even something as simple and verbose as adding “PROD” at the beginning of the step name should do the trick :)
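Putting the OIDC injection and the runs-on validation together, the core transformation could look roughly like this. This is a sketch under my own assumptions about the step’s JSON shape; the ARNs, regions and runner labels are placeholders for your own values, not real ones.

```javascript
// Hypothetical per-environment settings; ARNs, regions and runner labels
// are placeholders.
const ENVIRONMENTS = {
    DEV:  { roleArn: 'arn:aws:iam::111111111111:role/bitbucket-dev',  region: 'eu-west-1', runnerLabels: ['self.hosted', 'dev'] },
    PROD: { roleArn: 'arn:aws:iam::222222222222:role/bitbucket-prod', region: 'eu-west-1', runnerLabels: ['self.hosted', 'prod'] },
};

function transformStep(step, envName) {
    const env = ENVIRONMENTS[envName];

    // Validate runs-on: reject labels that don't belong to this environment.
    const runsOn = step['runs-on'] ?? [];
    if (!runsOn.every((label) => env.runnerLabels.includes(label))) {
        throw new Error(`Step "${step.name}" uses runner labels not allowed in ${envName}`);
    }
    // Enforce the full label set so the job lands on the right runner.
    step['runs-on'] = env.runnerLabels;

    // Whenever a step requests oidc: true, prepend the AWS credential setup.
    if (step.oidc === true) {
        const oidcSetupSteps = [
            `echo "Setting up ${envName} OIDC credentials"`,
            `export AWS_REGION=${env.region}`,
            `export AWS_DEFAULT_REGION=${env.region}`,
            `export AWS_ROLE_ARN=${env.roleArn}`,
            'export AWS_WEB_IDENTITY_TOKEN_FILE=$(pwd)/web-identity-token',
            'echo $BITBUCKET_STEP_OIDC_TOKEN > $AWS_WEB_IDENTITY_TOKEN_FILE',
        ];
        step.script = [...oidcSetupSteps, ...(step.script ?? [])];
    }
    return step;
}
```

Running every step through a function like this, with the environment derived from the branch, is what makes the runner and credential routing impossible to bypass from a repository’s own pipeline file.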

And that’s it. You’ve dispatched all of the jobs automatically to your secure self-hosted runners, and you can stop worrying about newbie developers destroying your production environment.

If you work with Bitbucket and AWS (or possibly another Cloud Provider) and need help with the CICD process in your company, you can find a Calendly link on the main page. Let’s talk!

And as always - thanks for hanging in there till the end.

Cheers,

Jed