Akatama's Slice of the Internet

From Manual Deploys to Reproducible Infrastructure: Modernizing the Deployment Workflow for My Blog

By Jimmy Lindsey

Feb. 25, 2026 | Categories: website, devops, ci-cd, development

When I first launched this site, deploying updates was a mostly manual process. It worked, but it depended on me. I built container images locally, ran OpenTofu from my desktop, and carefully stepped through changes to avoid breaking anything. For a small personal project, that was fine. Over time, though, the friction became obvious. Security updates required extra effort, and infrastructure changes felt riskier than they should have. My desktop was effectively part of the production deployment pipeline. The system functioned, but it wasn’t resilient.

This round of updates wasn’t about adding new features. I wanted deployments to be predictable, reproducible, and secure, without relying on long-lived credentials or manual steps. That meant adding automated testing, introducing CI/CD, tightening IAM permissions, and reworking parts of my OpenTofu configuration to remove local assumptions. The result isn’t flashy and the site looks mostly the same. Yet, the way I update my website now is fundamentally different.

Here's how that evolution unfolded.

Dependency Updates

When I first deployed my website, I used Django 5.2.1. Several security releases have shipped since then, and at the time of writing, Django 5.2.11 is the latest version. Because I was already making changes, this was a good opportunity to pick up those security updates rather than carry forward known vulnerabilities. In fact, one of the main reasons I want deploying new versions of my website to be easier is so I can keep dependencies current with minimal effort. After all, security drift accumulates silently.
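As a small aside on why comparing versions matters, the helper below is purely illustrative (it is not part of my project): dotted version strings have to be compared numerically, because naive string comparison gets patch releases past 9 wrong.

```python
# Hypothetical helper: compare dotted version strings numerically,
# so "5.2.11" correctly ranks above "5.2.9" (string comparison would not).
def parse_version(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, minimum: str) -> bool:
    return parse_version(installed) >= parse_version(minimum)

print(meets_minimum("5.2.11", "5.2.9"))   # numeric compare: True
print("5.2.11" >= "5.2.9")                # naive string compare: False
```

This is exactly the kind of silent pitfall that makes leaning on a real dependency manager (uv, in my case) worthwhile.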

Aesthetic Updates

I have kept the look of my website purposefully simple to keep the focus on the content, and I think readability is probably its most important quality. The old narrow middle column looked nice, but it made reading code miserable: a good portion of each line ran off the right edge, and once you scrolled right, you could no longer see the left-hand side, so you had to scroll back and forth just to read a single snippet. The fix is what you see below, where a wider middle column lets most code be read in a single glance.

body {
  color: var(--text);
  background-color: var(--bg);
  font-size: 1.15rem;
  line-height: 1.5;
  display: grid;
  /* grid-template-columns controls the width of columns.
     In this case, there are three of them, with 70% being the middle
  */
  grid-template-columns: 1fr 70% 1fr;
  margin: 0;
}

/* Center CSS class for centering elements, including the image I have on About Me */
.center {
  display: block;
  margin-left: auto;
  margin-right: auto;
  max-width: 37rem;
}

Configuration Changes and Tests

When I first created my website, I wrote some simple tests for the Django views and models. However, they didn't work very well locally, because my machine has no direct access to the production PostgreSQL database. The fix was to always run the tests against an SQLite database instead: SQLite was what I did all of my original development testing on, and it is trivial to set up on GitHub Actions.
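Part of what makes SQLite so convenient for CI is that it needs no server process at all; a database can live entirely in memory. A minimal standalone sketch (the table here is made up and unrelated to my actual Django models):

```python
import sqlite3

# An in-memory SQLite database: no server, no credentials, nothing to provision.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO post (title) VALUES (?)", ("Hello, world",))
rows = conn.execute("SELECT title FROM post").fetchall()
print(rows)  # [('Hello, world',)]
conn.close()
```

That "zero setup" property is the whole appeal: the CI runner can create and throw away a database on every run.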

Luckily, pytest-django gives you the capability to specify the location of the settings module for Django, and Django allows you to make multiple settings files that override specific information. So I put this in pytest.ini:

[pytest]
DJANGO_SETTINGS_MODULE = website.settings_test
python_files = tests.py test_*.py *_tests.py
filterwarnings = ignore::DeprecationWarning

Then put this in website/settings_test.py:

from .settings import *

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
    }
}

Now, any time I type uv run pytest, pytest creates a fresh SQLite database and runs the tests against it. These regression tests make sure that future updates to my website won't break existing functionality, which is a small but crucial part of being able to build, push, and then deploy the containerized version of my website.

GitHub Actions Workflow: Testing

Now that I had working tests and the changes I wanted to my website, the next step was to build a GitHub Actions Workflow job that tested my Django application with said tests.

I migrated my local .env values into GitHub Actions secrets (e.g., SECRET_KEY, database credentials, allowed hosts), which are injected into the workflow at runtime. Strictly speaking, only SECRET_KEY is needed at this step, since Django will not run without a secret key when DEBUG = False, but I knew I would need the rest eventually. With that done, I could reference a secret in the workflow like so: ${{ secrets.SECRET_KEY }}. You can see this in the run-test job below:

name: Test-Build-Plan-Deploy
on:
  push

jobs:
  run-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - name: Set up Python 3.14
        uses: actions/setup-python@v4
        with:
          # read the interpreter version from .python-version, which exists thanks to uv!
          python-version-file: ".python-version"
      - name: Install the latest version of uv
        uses: astral-sh/setup-uv@v7
      - name: Install dependencies
        run: uv sync
      - name: Run tests
        env:
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
        run: uv run pytest

The env: block injects my secret key as an environment variable for that step, which my Django application reads at startup. The test run takes about 15 seconds, and now every push on every branch verifies that I haven't broken the models or views of my application.
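For context, the settings code that consumes these variables looks roughly like the following. This is a sketch of the common pattern, not my exact settings.py:

```python
import os

# Read configuration from the environment: the same values the workflow injects.
SECRET_KEY = os.environ.get("SECRET_KEY", "")
DEBUG = os.environ.get("DEBUG", "False") == "True"
ALLOWED_HOSTS = [
    h for h in os.environ.get("DJANGO_ALLOWED_HOSTS", "").split(",") if h
]
```

Because the application only ever reads the environment, it doesn't care whether the values came from a local .env file, GitHub Actions secrets, or the Elastic Beanstalk configuration.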

Securing GitHub Actions with AWS IAM

The most difficult part of this entire setup wasn’t Docker, OpenTofu or even GitHub Actions. Instead, it was IAM itself.

The goal was straightforward: allow GitHub Actions to build and push a container image to ECR and apply infrastructure changes. I wanted to do this without embedding long-lived AWS credentials and without giving the workflow excessive permissions.

Using OIDC Instead of Static Credentials

Rather than storing AWS access keys in GitHub secrets, I configured [GitHub's OpenID Connect (OIDC) integration with AWS](https://docs.github.com/en/actions/how-tos/secure-your-work/security-harden-deployments/oidc-in-aws). This allows a workflow to assume an IAM role dynamically at runtime.

In practice, that meant:

- creating an OIDC identity provider in IAM for token.actions.githubusercontent.com,
- creating an IAM role whose trust policy only allows tokens from my repository to assume it, and
- granting the workflow the id-token: write permission so it can request a token.

The important part isn't the click-by-click setup, but the outcome. GitHub Actions can request short-lived credentials securely, and no long-lived AWS keys exist in my repository.

Granting ECR Permissions (and Only What's Necessary)

Once the role could be assumed, the next challenge was permissions. To build and push Docker images, the workflow needs:

- ecr:GetAuthorizationToken to obtain a registry login, and
- the image-push actions on the repository itself: ecr:BatchCheckLayerAvailability, ecr:InitiateLayerUpload, ecr:UploadLayerPart, ecr:CompleteLayerUpload, and ecr:PutImage.
This sounds simple, but IAM has a way of teaching you through failure. The first few policies I wrote were too restrictive. Only after several failed workflow runs did I arrive at the minimal set required to authenticate and push images successfully.
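To make "teaching through failure" concrete, here is a toy allow-list evaluator. It is entirely illustrative (IAM's real evaluation logic handles resources, conditions, and explicit denies), but it captures why a policy that forgets even one required action fails the push:

```python
# Toy allow-list evaluation: IAM denies by default, so a policy missing
# even one required action blocks the whole docker push.
# This action set is the commonly cited minimum for pushing to ECR.
REQUIRED_FOR_PUSH = {
    "ecr:GetAuthorizationToken",
    "ecr:BatchCheckLayerAvailability",
    "ecr:InitiateLayerUpload",
    "ecr:UploadLayerPart",
    "ecr:CompleteLayerUpload",
    "ecr:PutImage",
}

def can_push(allowed_actions: set[str]) -> bool:
    return REQUIRED_FOR_PUSH <= allowed_actions

too_restrictive = {"ecr:GetAuthorizationToken", "ecr:PutImage"}
print(can_push(too_restrictive))          # False: missing the layer-upload actions
print(can_push(set(REQUIRED_FOR_PUSH)))   # True
```

My early policies looked a lot like too_restrictive above: they covered the obvious actions and missed the layer-upload ones, which is exactly what the failed workflow runs were telling me.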

Expanding IAM Permissions for OpenTofu

The complexity of my role's IAM permissions increased once I tried to use OpenTofu to deploy from my workflow. After all, the workflow wasn't just pushing images, it was:

- reading and writing remote state in S3,
- uploading the rendered Dockerrun.aws.json deployment artifact, and
- planning and applying changes across EC2, RDS, and Elastic Beanstalk.
One of the more frustrating discoveries was that AWS managed policies (e.g., AmazonEC2FullAccess) are broad and difficult to scope tightly to specific resources. If attached, they grant more power than I was comfortable with, so this became a balancing act. If the permissions are too restrictive, then deployments will fail. If they are too permissive, then there is unnecessary risk.

In the end, I settled on a role that has the minimum required permissions across S3, ECR, EC2, RDS and Elastic Beanstalk to support both image pushes and infrastructure updates. It's not perfect, and it will need updates if I make any future changes to the infrastructure, but it is far more secure than using static credentials or blanket full-access policies.

Why This Matters

This IAM work is invisible when everything succeeds, but it's foundational. With OIDC and scoped roles in place:

- no long-lived AWS credentials exist anywhere in my repository,
- the credentials the workflow does receive expire on their own, and
- even a compromised workflow run can only touch the resources the role allows.
It was easily the most tedious part of the process, but also the most important from a security perspective.

GitHub Actions Workflow: Docker Build and Push to ECR

Now that I have a role with the correct permissions, I am able to use them to build my container image, tag it, and then push it to my private ECR. Adding onto my workflow from before:

  build-container:
    needs: run-test
    runs-on: ubuntu-latest

    permissions:
      # required for allowing the workflow to request and use an OIDC token for an action or step
      # specifically, we need the below set for aws-actions/configure-aws-credentials@v4
      id-token: write
      contents: read # required for actions/checkout

    steps:
      - uses: actions/checkout@v5
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/WebsiteDeploy
          aws-region: us-west-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build docker image
        env:
          REPOSITORY: blog_ecr
        run: |
          # try --provenance=false so I don't see weird 0MB docker images on AWS ECR
          # also note that I decided to do Docker build as its own step for future reasons
          docker build --provenance=false -t $REPOSITORY .
      - name: Tag and push docker image to Amazon ECR
        env:
          REPOSITORY: blog_ecr
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker tag $REPOSITORY:latest $REGISTRY/$REPOSITORY:$IMAGE_TAG
          docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG

As mentioned in the job itself, aws-actions/configure-aws-credentials@v4 is the action that authenticates with AWS by assuming my role via OIDC. Note that the full Amazon Resource Name (ARN) for my role includes my AWS_ACCOUNT_ID, so I added that to my repo's secrets. Next, I log in to my private ECR with aws-actions/amazon-ecr-login@v2. I currently have only one ECR repository in my whole AWS account; if you have more, you will probably need to specify which one you want to authenticate with. You can find more information here on how to do that.

After that, everything should look mostly familiar to anyone who knows Docker, with the exception of --provenance=false in the docker build command. Before I started using it, I was seeing extraneous 0MB images in my private ECR; you can see an example in the image below. This flag disables the provenance attestation that BuildKit would otherwise attach, which is what shows up as those 0MB entries. I also want to note that I am tagging my images with the git commit SHA, which will be important in the next section.

Weird 0 MB Docker images in AWS ECR
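The tag-and-push step is really just string assembly. A tiny sketch of the fully qualified image reference it produces (the registry value below is a placeholder, not my real account):

```python
# Build the fully qualified image URI the workflow pushes, mirroring:
#   docker tag $REPOSITORY:latest $REGISTRY/$REPOSITORY:$IMAGE_TAG
def image_uri(registry: str, repository: str, sha: str) -> str:
    return f"{registry}/{repository}:{sha}"

uri = image_uri(
    "123456789012.dkr.ecr.us-west-1.amazonaws.com",  # placeholder registry host
    "blog_ecr",
    "a1b2c3d",
)
print(uri)  # 123456789012.dkr.ecr.us-west-1.amazonaws.com/blog_ecr:a1b2c3d
```

Because the tag is the commit SHA, every push yields a distinct, immutable reference, which is what makes the rollback story in the OpenTofu section work.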

OpenTofu: Making Infrastructure Reproducible

Up until this point, I could test my application and build and push a container image automatically, but deploying infrastructure changes still required running OpenTofu from my local machine. That meant my desktop was part of the deployment process, which is exactly what I was trying to eliminate. I wanted future updates to be reproducible and low-friction, which meant that infrastructure needed to be runnable from CI just like everything else. If you want to see the specific changes I made, you can review the first, second, third or fourth PR I merged, or browse my current OpenTofu configuration.

The changes I made fall into four main categories.

Remote State with S3

Originally, my OpenTofu state lived locally. That works for a single developer, but it ties infrastructure changes to one machine. If I wanted my workflow to plan and apply changes, it needed access to the same state file.

I configured an S3 backend with bucket versioning enabled. That gave me a single source of truth for infrastructure state, the ability to run tofu plan from either my desktop or GitHub Actions, and versioned state in case something goes wrong. I also enabled locking with use_lockfile=true to prevent concurrent modifications. Even though I am the only person working on this project, building the system this way prevents subtle race conditions later. This was the first step toward making infrastructure truly portable.
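For reference, an S3 backend with lockfile-based locking looks roughly like this; the bucket name and state key below are placeholders, not my real values:

```hcl
terraform {
  backend "s3" {
    bucket       = "example-tofu-state"     # placeholder bucket name
    key          = "blog/terraform.tfstate" # placeholder state key
    region       = "us-west-1"
    use_lockfile = true                     # S3-native state locking
  }
}
```

With versioning enabled on the bucket, every state write is recoverable, and the lockfile keeps a CI run and a desktop run from applying at the same time.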

Using the templatefile Function Instead of the template_file Data Source

The template_file data source (part of the now-deprecated template provider) has long been superseded, so I wanted to move to its replacement: the built-in templatefile function. Doing so also meant I no longer needed the local_file resource.

resource "aws_s3_object" "ebs_deployment" {
  bucket = aws_s3_bucket.ebs.id
  key    = "Dockerrun.aws.json"
  content = templatefile("${path.module}/Dockerrun.aws.json.tftpl", {
    image_name = aws_ecr_repository.blog.repository_url
    tag_name   = var.image_tag
    db_port    = var.db_port
  })
}

The above renders the template directly and uploads the result to S3, removing any local file generation from the process. As I mentioned before, image_tag is the git commit SHA. I did this because it will make every image I create have a unique tag, which will make it much easier if I need to revert changes in the future.
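For context, the template file itself is just JSON with interpolation slots. Here is a sketch of what a single-container Dockerrun.aws.json v1 template can look like; the port is a placeholder, and my real template also consumes db_port, which I omit here:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "${image_name}:${tag_name}",
    "Update": "true"
  },
  "Ports": [
    { "ContainerPort": "8000" }
  ]
}
```

templatefile substitutes the ${...} slots with the values passed in the second argument, producing the final JSON that gets uploaded to S3.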

Passing Secrets Explicitly from CI

Previously, I injected environment variables into my EC2 instance by reading from a local .env file. Since I had already migrated my secrets into repository secrets, the cleaner solution was to pass them explicitly into tofu plan via -var flags. This makes the dependency on those variables visible and versioned alongside the workflow, and it makes the deployment process easier to reason about. As a nice bonus, while writing the first draft of this post I realized a few of those environment variables were unnecessary, which let me trim some complexity from the deployment.

Versioning Application Deployments

One of the more important changes was how I handled application versions in Elastic Beanstalk. Earlier, all of my deployments simply used the default latest tag for my image, which would make rollback difficult, or in some cases, impossible. As I mentioned above, I now tag my Docker images with the git commit SHA. Now, every deployment references a unique, immutable image. I also made sure to keep at least one previous Elastic Beanstalk application version available for rollback. Because the image tag is commit-based, I know that the referenced image will still exist in ECR.

Why This Work Matters

None of these changes were flashy. They didn't add features or change how the site looks. Yet they changed everything about how I will update my website going forward. I don't have hours to spend maintaining my website every time there's a Django security update or I want to tweak a layout; at best, I might have an evening. The goal was to make updates predictable and fast, and this was the final step in removing my local machine from the deployment chain.

By making state remote, using non-deprecated features, passing secrets directly, and running OpenTofu from CI, I turned infrastructure into something I can reason about instead of something I have to babysit. The time I invested here will save me lots of time later, and that means maintenance of this website won't be an afterthought like it has been since I first deployed it.

GitHub Actions Workflow: OpenTofu Plan and Apply

Here is the last job for my workflow:

  plan-deploy:
    needs: build-container
    runs-on: ubuntu-latest

    defaults:
      run:
        working-directory: ./opentofu

    permissions:
      # required for allowing the workflow to request and use an OIDC token for an action or step
      # specifically, we need the below set for aws-actions/configure-aws-credentials@v4
      id-token: write
      contents: read # required for actions/checkout

    steps:
      - uses: actions/checkout@v5
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/WebsiteDeploy
          aws-region: us-west-1
      - name: Setup OpenTofu
        uses: opentofu/setup-opentofu@v1
        with:
          tofu_wrapper: false
      - name: OpenTofu fmt
        run: tofu fmt -check
      - name: OpenTofu init
        run: tofu init
      - name: OpenTofu Validate
        run: tofu validate -no-color
      - name: OpenTofu Plan
        env:
          DJANGO_ALLOWED_HOSTS: ${{ secrets.DJANGO_ALLOWED_HOSTS }}
          DJANGO_LOGLEVEL: ${{ secrets.DJANGO_LOGLEVEL }}
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
          IMAGE_TAG: ${{ github.sha }}
          RDS_DB_NAME: ${{ secrets.RDS_DB_NAME }}
          RDS_PORT: ${{ secrets.RDS_PORT }}
          RDS_USERNAME: ${{ secrets.RDS_USERNAME }}
          RDS_PASSWORD: ${{ secrets.RDS_PASSWORD }}
        run: |
          set -eux
          tofu plan \
            -var "django_allowed_hosts=$DJANGO_ALLOWED_HOSTS" \
            -var "django_loglevel=$DJANGO_LOGLEVEL" \
            -var "secret_key=$SECRET_KEY" \
            -var "image_tag=$IMAGE_TAG" \
            -var "db_name=$RDS_DB_NAME" \
            -var "db_port=$RDS_PORT" \
            -var "db_username=$RDS_USERNAME" \
            -var "db_password=$RDS_PASSWORD" \
            -out blog.plan
      - name: OpenTofu Apply
        if: ${{ github.ref == 'refs/heads/main' }}
        run: tofu apply "blog.plan"

I set the default working directory to ./opentofu for the run commands. I then set up OpenTofu with opentofu/setup-opentofu@v1, initialize my configuration, and validate it. The tofu plan step passes in all of the Django secrets I defined in my repository via -var flags. After the plan is generated, the apply step only runs if the push came from the main branch; otherwise the plan is produced but never applied.

As for the flow of this part of the workflow: I wanted updating my website to be essentially one button press, and while it's not fully one-click, it's close. In addition to whatever changes an update actually needs, I also have to bump ebs_app_version_number, either in this workflow or manually in the OpenTofu configuration. On my development branch, I can inspect this job's plan output to see which resources will be updated or created; that's my verification that the changes on the AWS side match my expectations. Once I'm happy with them, I open a pull request and merge it. Clicking the merge button is the single-button deploy I wanted, and that does work!

The Full GitHub Actions Workflow

name: Test-Build-Plan-Deploy
on:
  push

jobs:
  run-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - name: Set up Python 3.14
        uses: actions/setup-python@v4
        with:
          # read the interpreter version from .python-version
          python-version-file: ".python-version"
      - name: Install the latest version of uv
        uses: astral-sh/setup-uv@v7
      - name: Install dependencies
        run: uv sync
      - name: Run tests
        env:
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
        run: uv run pytest

  build-container:
    needs: run-test
    runs-on: ubuntu-latest

    permissions:
      # required for allowing the workflow to request and use an OIDC token for an action or step
      # specifically, we need the below set for aws-actions/configure-aws-credentials@v4
      id-token: write
      contents: read # required for actions/checkout

    steps:
      - uses: actions/checkout@v5
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/WebsiteDeploy
          aws-region: us-west-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build docker image
        env:
          REPOSITORY: blog_ecr
        run: |
          # try --provenance=false so I don't see weird 0MB docker images on AWS ECR
          # also note that I decided to do Docker build as its own step for future reasons
          docker build --provenance=false -t $REPOSITORY .
      - name: Tag and push docker image to Amazon ECR
        env:
          REPOSITORY: blog_ecr
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker tag $REPOSITORY:latest $REGISTRY/$REPOSITORY:$IMAGE_TAG
          docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG

  plan-deploy:
    needs: build-container
    runs-on: ubuntu-latest

    defaults:
      run:
        working-directory: ./opentofu

    permissions:
      # required for allowing the workflow to request and use an OIDC token for an action or step
      # specifically, we need the below set for aws-actions/configure-aws-credentials@v4
      id-token: write
      contents: read # required for actions/checkout

    steps:
      - uses: actions/checkout@v5
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/WebsiteDeploy
          aws-region: us-west-1
      - name: Setup OpenTofu
        uses: opentofu/setup-opentofu@v1
        with:
          tofu_wrapper: false
      - name: OpenTofu fmt
        run: tofu fmt -check
      - name: OpenTofu init
        run: tofu init
      - name: OpenTofu Validate
        run: tofu validate -no-color
      - name: OpenTofu Plan
        env:
          DJANGO_ALLOWED_HOSTS: ${{ secrets.DJANGO_ALLOWED_HOSTS }}
          DJANGO_LOGLEVEL: ${{ secrets.DJANGO_LOGLEVEL }}
          SECRET_KEY: ${{ secrets.SECRET_KEY }}
          IMAGE_TAG: ${{ github.sha }}
          RDS_DB_NAME: ${{ secrets.RDS_DB_NAME }}
          RDS_PORT: ${{ secrets.RDS_PORT }}
          RDS_USERNAME: ${{ secrets.RDS_USERNAME }}
          RDS_PASSWORD: ${{ secrets.RDS_PASSWORD }}
        run: |
          set -eux
          tofu plan \
            -var "django_allowed_hosts=$DJANGO_ALLOWED_HOSTS" \
            -var "django_loglevel=$DJANGO_LOGLEVEL" \
            -var "secret_key=$SECRET_KEY" \
            -var "image_tag=$IMAGE_TAG" \
            -var "db_name=$RDS_DB_NAME" \
            -var "db_port=$RDS_PORT" \
            -var "db_username=$RDS_USERNAME" \
            -var "db_password=$RDS_PASSWORD" \
            -out blog.plan
      - name: OpenTofu Apply
        if: ${{ github.ref == 'refs/heads/main' }}
        run: tofu apply "blog.plan"

Conclusion

None of these changes added new features, but operationally, everything changed.

Deployments are now triggered automatically and gated by tests. Docker images are immutable and tagged with commit SHAs. GitHub Actions assumes short-lived AWS credentials instead of relying on stored access keys. Infrastructure state lives remotely and can be applied from anywhere. Yet, the biggest shift wasn’t the tooling itself. It was moving from a workflow that depended on manual coordination to one that is declarative and reproducible. My desktop is no longer part of the deployment process, secrets are explicit, permissions are scoped and rollbacks are straightforward.

I don’t have hours to spend maintaining this project every week. The goal was to make updates boring: predictable, safe, and low-friction. Investing the time to tighten IAM policies, clean up OpenTofu assumptions, and automate deployments pays off every time I push a change. The site may look the same, but the system behind it is significantly more mature.