Akatama's Slice of the Internet

How I Created and Deployed My Blog

By Jimmy Lindsey

July 21, 2025 | Categories: website, aws, devops

Hello! If you are reading this, then you are on my personal website and blog. This website was created as a side project for myself. Naturally, if I just wanted a personal website and blog, there were several more efficient ways I could've gone about it. However, I had several goals for this project, and honestly, it was a lot of fun to work on! Additionally, it was cool to work on a project with uv and ruff. I definitely will be using those two tools in the future.

Goals

  1. Create a website and blog myself in Django.
  2. Create a (Docker) container image of my website.
  3. Deploy the website on AWS and keep it running.

If you have similar goals or want to learn similar things, then I am absolutely okay with you following this project yourself. You can see the GitHub for my project here. However, I do want to note that this blog post is not an introduction to Django. I will include references throughout the post, and I will point out unique or specific details for how I did things. If you haven't used Django before, I started by following this guide from RealPython.

Design

This is just meant to be a simple website, with 3 main pages: blog, about me, and resume. Some blogs have comment sections, but I didn't want to bother with that because I would need to handle user accounts myself or use SSO with Google / Facebook / LinkedIn / etc. accounts. Since I will be posting these blogs on social media, I figured that if anyone wanted to comment on them, they could comment on them there.

Blog

The most complicated part of my website.

The blog itself has three pages. First, a blog index, which contains a paginated list of all my blog posts, starting with the most recent. Second, a category page: posts can have multiple categories, and each category is clickable so users can find similar blog posts. Finally, the blog detail page. You can see what the views look like down below:

# views.py for the blog app
from django.core.paginator import Paginator
from django.shortcuts import render

from .models import Post


def blog_index(request):
    posts = Post.objects.all().order_by("-created_on")

    paginator = Paginator(posts, 10)  # show 10 blog posts per page
    page_number = request.GET.get("page")
    page_obj = paginator.get_page(page_number)

    context = {
        "posts": page_obj,
    }
    return render(request, "blog/index.html", context)


def blog_category(request, category):
    posts = Post.objects.filter(categories__name__contains=category).order_by(
        "-created_on"
    )

    paginator = Paginator(posts, 10)  # show 10 blog posts per page
    page_number = request.GET.get("page")
    page_obj = paginator.get_page(page_number)

    context = {
        "category": category,
        "posts": page_obj,
    }
    return render(request, "blog/category.html", context)


def blog_detail(request, pk):
    post = Post.objects.get(pk=pk)
    context = {"post": post}

    return render(request, "blog/detail.html", context)

There are two models for the blog, a Post and a Category. In the Post model, note the many-to-many field for categories, the intro field (used on the blog index), and the body. The body is a MarkdownxField, which comes from the django-markdownx plugin. I will talk about my use of Markdown in a later section.

# models.py for the blog app
from django.db import models
from markdownx.models import MarkdownxField
from markdownx.utils import markdownify


class Category(models.Model):
    name = models.CharField(max_length=30)

    class Meta:
        verbose_name_plural = "categories"

    def __str__(self):
        return self.name

class Post(models.Model):
    title = models.CharField(max_length=255)
    intro = models.CharField(max_length=200, default="")
    body = MarkdownxField()
    created_on = models.DateTimeField(auto_now_add=True)
    last_modified = models.DateTimeField(auto_now=True)
    categories = models.ManyToManyField("Category", related_name="posts")

    def __str__(self):
        return self.title

    @property
    def formatted_markdown(self):
        return markdownify(self.body)
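
To make the relationship between these models concrete, here is a rough sketch of how they behave in a Django shell session (uv run manage.py shell). The titles and category names here are made up for illustration:

# Create a post and a category, then link them through the many-to-many field
post = Post.objects.create(
    title="Hello World",
    intro="A short intro shown on the blog index.",
    body="# Heading\n\nSome **markdown** content.",
)
devops = Category.objects.create(name="devops")
post.categories.add(devops)

# The same lookup blog_category() uses to find posts for a category
print(Post.objects.filter(categories__name__contains="devops"))

# formatted_markdown renders the MarkdownX body to HTML via markdownify
print(post.formatted_markdown)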

Resume and About Me

The Resume and About Me pages are very similar, to the point where I don't think it is worth showing both of them. If you are really curious, you can see the Resume models.py and views.py here. Basically, both have two main qualities: the main content is a MarkdownxField, as in the blog, and only one instance of each can be created on the Admin page. For the latter, I overrode the save method, which raises a ValidationError if there is an attempt to create a second Resume or About Me instance.

# views.py
def about_view(request):
    about_me = AboutMe.objects.get(pk=1)
    context = {
        "about_me": about_me,
    }
    return render(request, "about_me/about.html", context)


# models.py
class AboutMe(models.Model):
    title = models.CharField(max_length=255)
    body = MarkdownxField()

    class Meta:
        verbose_name_plural = "About me"

    def save(self, *args, **kwargs):
        """
        Create only one About Me instance
        """
        if not self.pk and AboutMe.objects.exists():
            raise ValidationError(
                "There can only be one About Me, you cannot add another"
            )
        return super(AboutMe, self).save(*args, **kwargs)

    @property
    def formatted_markdown(self):
        return markdownify(self.body)

MarkdownX

A bit about markdown. There are several markdown plugins for Django. I picked MarkdownX because it also allows me to see the rendered content as I am editing it on the Admin page. It does require some configuration, especially if you want some markdown extensions to be available. Here is what this configuration looks like in settings.py:

MARKDOWNX_MARKDOWN_EXTENSIONS = [
    "markdown.extensions.extra",
    "markdown.extensions.codehilite",
    "markdown.extensions.nl2br",
    "markdown.extensions.toc"
]

You can find all of the available extensions in the python-markdown documentation. The ones in the configuration above are what I picked for now: extra (tables, fenced code blocks, and other common additions), codehilite (syntax highlighting for code blocks), nl2br (newlines render as line breaks), and toc (table of contents support).
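
If you want to see what these extensions do outside of Django, here is a minimal sketch using the python-markdown library directly; this is roughly what MarkdownX does when it renders a post body, though the sample text is made up for illustration:

import markdown

text = (
    "# My Post\n\n"
    "[TOC]\n\n"
    "Some text with a line break\nright here.\n\n"
    "    print('indented code gets highlighted by codehilite')\n"
)

html = markdown.markdown(
    text,
    extensions=[
        "markdown.extensions.extra",       # tables, fenced code, footnotes, ...
        "markdown.extensions.codehilite",  # Pygments-based syntax highlighting
        "markdown.extensions.nl2br",       # single newlines become <br> tags
        "markdown.extensions.toc",         # [TOC] becomes a table of contents
    ],
)
print(html)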

The codehilite extension requires further configuration. Follow the instructions below to generate the CSS file you need:

uv add pygments  # Pygments is the package that provides the pygmentize command
# if you want to see what themes are available
pygmentize -L style
# note below that I picked the github-dark theme
pygmentize -S github-dark -f html -a .codehilite > styles.css

Now copy this file to your static files directory and run uv run manage.py collectstatic. Finally, you need to add this CSS file to base.html under the head element.

Markdown itself is nice because I can use it to generate HTML, and since all of my website is made up of content I am going to be writing and changing, this allows me to update my website without having to deploy a new version. Beware if you are making a website that accepts user-generated content, though, as you need to make sure you properly sanitize user input.
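
I do not accept user-generated Markdown anywhere on this site, so I have not needed this myself, but if you do, a common approach is to run the rendered HTML through an allow-list sanitizer before marking it safe. Here is a rough sketch using the bleach library; the allowed tags and attributes are just example values, not something from my project:

import bleach
from markdownx.utils import markdownify

ALLOWED_TAGS = ["p", "br", "strong", "em", "a", "ul", "ol", "li", "pre", "code", "h1", "h2", "h3"]
ALLOWED_ATTRIBUTES = {"a": ["href", "title"]}


def render_untrusted_markdown(raw_markdown: str) -> str:
    """Render user-supplied Markdown, then strip anything not on the allow list."""
    html = markdownify(raw_markdown)
    return bleach.clean(html, tags=ALLOWED_TAGS, attributes=ALLOWED_ATTRIBUTES, strip=True)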

Look

Next is the look of my website. Django has a very powerful templating system that allows you to dynamically create the HTML of the page based on the objects that are being passed from the view. For example, if I have 10 blog post objects that are being passed to the template, I can use a loop to iterate through all of them and then display them as I want on my website.

If you want complete examples of how this all looks, I would recommend looking at my repo. For this post, I will just show some small, but interesting, snippets.

Here, index.html iterates through all of the blog posts passed from the blog_index view function. This also includes looping through all the categories each blog post has:

    {% block posts %}
        {% for post in posts %}
          <div class="blog-item">
            <h4><a class="post-link" href="{% url 'blog_detail' post.pk %}">{{ post.title }}</a></h4>
            <small><p>
              {% load tz %}
                {% localtime on %}
                  {{ post.created_on.date|date }} | 
                  {% if post.created_on|date != post.last_modified|date %}
                  Last Updated: {{ post.last_modified|date }} | 
                  {% endif %}
                {% endlocaltime %}
                Categories:
                {% for category in post.categories.all %}
                    <a href="{% url 'blog_category' category.name %}">
                    {{ category.name }}</a>{% if not forloop.last %},{% endif %}
                {% endfor %}
            </p></small>
            <p class="meta"><i>{{ post.intro }}</i></p>
          </div>
        {% endfor %}
    {% endblock posts %}

category.html extends index.html, and all it really does is put the category at the top as the page title.

{% extends "blog/index.html" %}

{% block page_title %}
<h4>Category: {{ category }}</h4>
{% endblock page_title %}

Last for the templating, the actual blog post page does some timezone work to make sure we only put a "Last Updated:" marker on the post if it was updated on a later day than it was posted. Also, note that since Django escapes output by default to keep websites safe, I have to tell it that the HTML generated from MarkdownX is safe.

{% block page_content %}
    <small>
      {% load tz %}
      {% localtime on %}
        {{ post.created_on.date|date }} | 
        {% if post.created_on|date != post.last_modified|date %}
          Last Updated: {{ post.last_modified.date|date }} | 
        {% endif %}
      {% endlocaltime %}
      Categories:
        {% for category in post.categories.all %}
            <a href="{% url 'blog_category' category.name %}">
                {{ category.name }}</a>{% if not forloop.last %},{% endif %}
        {% endfor %}
    </small>
    <p>{{ post.formatted_markdown|safe }}</p>
{% endblock page_content %}

Finally, I want to talk about the CSS I am using. I am using a total of 5 CSS files. The first is Simple.css, which provides the main look of my website; the second is my own CSS file, which overrides some of its colors:

:root {
  color-scheme: dark;
  --bg: #212121;
  --accent-bg: #2b2b2b;
  --text: #dcdcdc;
  --text-light: #ababab;
  --accent: #ad82ed;
  --accent-hover: #ad82ed;
  --accent-text: var(--bg);
  --code: #f06292;
  --preformatted: #ccc;
  --disabled: #111;
}

The third is the Pygments stylesheet I mentioned above in the MarkdownX section. The last two CSS files are both related to Font Awesome. I am using an older version of Font Awesome so I can just keep it in my website's static files. Right now, I am only using it for some links to my socials, such as LinkedIn, Discord, and GitHub.

Static Files

I actually had some trouble serving the static files when it came time to use my website without manage.py runserver. It seemed to work fine if I had nginx running on the server itself; however, it just didn't seem to work when nginx was running in a container. Around that same time, I became very busy with my life and work, which I am sure didn't help. In the end, I tried out WhiteNoise, and it worked instantly.

WhiteNoise is a Python library that serves your static files directly from your application. It works with various WSGI frameworks, and there is a nice integration for Django. Another benefit WhiteNoise provides is that it will send the appropriate caching headers if you are using a CDN. I am not sure this advantage is really that big a deal for a small website like mine, but it is nice that if I decide to use a CDN for fun in the future, I am already set up.

Here is what I did to enable the use of WhiteNoise:

  1. uv add whitenoise
  2. Add "whitenoise.middleware.WhiteNoiseMiddleware", below "django.middleware.security.SecurityMiddleware" in the Middleware section of settings.py.

That is all that is required. I did do a little more to enable compression support (which will also help in the future with regard to caching). This is also added to settings.py:

STORAGES = {
    "staticfiles": {
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
    },
}

As a warning, I have read that exposing your website through only Gunicorn running WhiteNoise can have some security implications. In my case, while I am using Gunicorn and WhiteNoise, all traffic coming to my website goes through a separate load balancer that sends everything to nginx on my VM. Something to keep in mind if you have a different setup, though!

Testing

I am not going to go through my testing strategy too much, as there aren't that many tests. If you are really curious, you can see all of them. I used pytest with pytest-django for my tests. I know Django has its own very capable testing framework, but I am more used to pytest, so I decided to stick with that.

Pretty much everything I pointed out above is something I wrote a test for. So for the About Me and Resume objects, I have a test that verifies you cannot create a second instance. I also wrote tests to verify that pagination happens appropriately (e.g. I created 11 blog posts and made sure that only 10 are shown on the first page, and that the remaining post appears on the second page).
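
To give you an idea of the shape of these tests, here is a rough pytest-django sketch. The module paths and the blog_index URL name are my guesses at how the project is laid out, so check the repo for the real names:

import pytest
from django.core.exceptions import ValidationError
from django.urls import reverse

from about_me.models import AboutMe
from blog.models import Post


@pytest.mark.django_db
def test_only_one_about_me_instance():
    AboutMe.objects.create(title="About", body="Hello")
    # The overridden save() should refuse to create a second instance
    with pytest.raises(ValidationError):
        AboutMe(title="Second", body="Should fail").save()


@pytest.mark.django_db
def test_blog_index_paginates_at_ten(client):
    for i in range(11):
        Post.objects.create(title=f"Post {i}", intro="intro", body="body")

    # First page shows exactly 10 posts, the 11th spills onto page 2
    first_page = client.get(reverse("blog_index"))
    assert len(first_page.context["posts"]) == 10

    second_page = client.get(reverse("blog_index"), {"page": 2})
    assert len(second_page.context["posts"]) == 1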

Thoughts about Django

Django is a very powerful web application framework. I found it very enjoyable to use, and I do not think it got in my way at any time during this project. In the future, it would be cool to see any limitations that arise with a more complicated project. Also, I have to give a shout out to Django's documentation, as it is outstanding. The documentation is so good, it is enough for me to recommend Django for almost any web application you would want to build.

Containerization

Now that I had a tested and working example of my website, I decided it was time to containerize my application. I have created a number of containers in the past for both projects and challenges on labs.iximiuz.com, but unfortunately it is not something I do for my job. The main goal here was to make a container that will work well into the future, whereas when you are creating one for a challenge or a one-and-done project, you don't really care once you have confirmed it works.

Below you will find the Dockerfile for my application.

FROM python:3.13-slim AS base

RUN mkdir /app

WORKDIR /app

# Environment variables that allow Python to run better in Docker

ENV PYTHONUNBUFFERED=1 \
  PYTHONDONTWRITEBYTECODE=1

FROM base AS build

RUN apt-get update && apt-get install -y --no-install-recommends \
  build-essential \
  && rm -rf /var/lib/apt/lists/*

# Download uv. Note the version, 0.7.9 is what I used when developing

COPY --from=ghcr.io/astral-sh/uv:0.7.9 /uv /bin/uv

# UV_COMPILE_BYTECODE=1 is an important startup time optimization
ENV UV_COMPILE_BYTECODE=1 UV_LINK_MODE=copy

WORKDIR /app

# These next few lines install all the necessary dependencies
COPY uv.lock pyproject.toml ./
RUN --mount=type=cache,target=/root/.cache/uv \
  uv sync --frozen --no-install-project --no-dev

COPY . .
RUN --mount=type=cache,target=/root/.cache/uv \
  uv sync --frozen --no-dev

FROM base AS runtime

# Create a new user that is not root
RUN addgroup --gid 1001 --system nonroot && \
  adduser --no-create-home --shell /bin/false \
  --disabled-password --uid 1001 --system --group nonroot

RUN chown nonroot:nonroot /app

USER nonroot:nonroot

WORKDIR /app

ENV VIRTUAL_ENV=/app/.venv \
  PATH="/app/.venv/bin:$PATH"

COPY --from=build --chown=nonroot:nonroot /app ./

EXPOSE 8000

CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "3", "website.wsgi"]

First, a base stage that the other two stages rely on. This base stage mostly just tweaks Python to run better in Docker with environment variables.

During the build stage, we download uv and then install all of our dependencies.

During the runtime stage, we create a nonroot user. We make sure that the /app directory is owned by this user. The last three steps are very important. We copy the dependencies and our application code from the build stage to our runtime stage in the /app directory. We expose a port (in this case 8000). Finally, we run Gunicorn with 3 workers bound to the same port we exposed.

If you are not as familiar with containers, I definitely recommend starting off with a basic single-stage Dockerfile. You will not get the security or size benefits of a multi-stage build, but it is a bit easier to debug. An example of a single-stage Dockerfile that works for Django can be found on Docker's website. If you want to learn more about multi-stage Dockerfiles, this is a great tutorial. Finally, to see some of the optimizations made here with respect to Python and uv, you can see this article.

AWS

Now that I have my application created and containerized, the next step is to deploy it onto AWS. The steps below are laid out as if I did this all in one fell swoop, but in reality I did them one at a time, or at least as close to one at a time as I could. I always find this the best way to learn, since if I run into any problems I can isolate them to a few possible causes. It also means that each of these pieces was adjusted multiple times as I got further along with the others.

As an example of what I mean: first I created the RDS database, then I made sure I could connect to it from my local machine. I had to change my settings.py file to be able to connect to it, and for this first attempt I just hardcoded all the values. Of course, doing it this way is a security hazard, and it can also make things harder to change once the app is already deployed. However, I never planned to keep it this way; I just wanted to test my connection to the database in the simplest possible way so I could make adjustments if things didn't work. In the end, I used environment variables for a lot of the Django configuration, and that is what I recommend you do for the final state of that file.

I created an AWS account for all of this, which means I got access to their free tier. I can talk about the costs of keeping my website running in a future post. The reason I picked AWS over Azure is that I actually have quite a bit of Azure experience, and I thought this project was a good time to get my feet wet with AWS. It can be good to use what you know in a project, because things work out how you expect. Certainly, if I had gone with Azure, this project would've been completed a lot sooner, but I also would've learned less.

With each step, I will include an OpenTofu configuration I created that relates to it. You can see the full configuration at this link.

Database

I am using an Amazon RDS instance with PostgreSQL as the engine.

resource "aws_db_subnet_group" "db_group" {
  name       = "${var.naming_prefix}_db_subnet_group"
  subnet_ids = [aws_subnet.db1.id, aws_subnet.db2.id]
  tags       = local.common_tags
}

resource "aws_db_instance" "blog_db" {
  allocated_storage      = var.db_storage
  db_name                = var.db_name
  engine                 = "postgres"
  engine_version         = "17.4"
  instance_class         = var.db_instance_class
  username               = var.db_username
  password               = var.db_password
  db_subnet_group_name   = resource.aws_db_subnet_group.db_group.name
  port                   = var.db_port
  vpc_security_group_ids = [aws_security_group.db_security.id]
  publicly_accessible    = false
  tags                   = local.common_tags
}

If you are familiar with Terraform or OpenTofu, this configuration will look pretty normal to you. I do want to note that for Python to work properly with PostgreSQL, I installed psycopg3, which is published on PyPI simply as psycopg. Most tutorials online recommend psycopg2, and it is still the most widely used, but the developers themselves recommend psycopg3 for new projects. It has worked well for me.
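
If you want to do the same connectivity check I described earlier, a minimal standalone script along these lines works; this is a rough sketch rather than exactly what I ran, and the RDS_* environment variables match the names my settings.py reads later on:

import os

import psycopg  # psycopg 3, published on PyPI as "psycopg"

# Connection details come from the same environment variables Django will use
conn = psycopg.connect(
    host=os.environ["RDS_HOSTNAME"],
    port=os.environ["RDS_PORT"],
    dbname=os.environ["RDS_DB_NAME"],
    user=os.environ["RDS_USERNAME"],
    password=os.environ["RDS_PASSWORD"],
)

with conn:
    # If this prints a PostgreSQL version string, networking and credentials are fine
    print(conn.execute("SELECT version()").fetchone()[0])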

Elastic Container Registry (ECR)

You can definitely have Elastic Beanstalk pull your container image from a registry that isn't hosted on AWS, but I don't want people to have easy access to my built container image anyway. As such, I decided to create an Elastic Container Registry repository, which is also part of the AWS free tier.

resource "aws_ecr_repository" "blog" {
  name                 = var.ecr_name
  image_tag_mutability = "MUTABLE"
  image_scanning_configuration {
    scan_on_push = true
  }
  encryption_configuration {
    encryption_type = "AES256"
  }
  tags = local.common_tags
}

Once you have ECR set up, all you need to do is run the following commands. Make sure you have the AWS CLI already set up.

# the docker login username for ECR is literally "AWS"
aws ecr get-login-password --region us-west-1 | sudo docker login --username AWS --password-stdin <ecr_url>
# build your container, this name is just an example
sudo docker build -t blog_image .
sudo docker tag blog_image:latest <ecr_url>/blog_image:latest
sudo docker push <ecr_url>/blog_image:latest

Elastic Beanstalk

There are a lot of possible ways you could deploy an application like this on AWS. The most obvious would be to just use an EC2 instance (as that's what my Elastic Beanstalk Environment does anyway). I picked Elastic Beanstalk because it handles a lot of the logging as well as managing the classic Elastic Load Balancer. I did decide to manage the database myself, but it can even manage that for you.

This is by far the most complicated OpenTofu configuration for this project. I must thank Sebastian Estenssoro for his Medium post. I did make some changes, especially with regard to the settings for my application. The spirit of what he did is still the same, though.

resource "aws_s3_bucket" "ebs" {
  bucket = var.ebs_bucket_name
  tags   = local.common_tags
}

data "template_file" "ebs_config" {
  template = file("${path.module}/Dockerrun.aws.json.tpl")
  vars = {
    image_name = aws_ecr_repository.blog.repository_url
  }
}

resource "local_file" "ebs_config" {
  content  = data.template_file.ebs_config.rendered
  filename = "${path.module}/Dockerrun.aws.json"
}

resource "aws_s3_object" "ebs_deployment" {
  depends_on = [local_file.ebs_config]
  bucket     = aws_s3_bucket.ebs.id
  key        = "Dockerrun.aws.json"
  source     = "${path.module}/Dockerrun.aws.json"
  lifecycle {
    replace_triggered_by = [local_file.ebs_config]
  }
}

This creates a file named Dockerrun.aws.json from the Dockerrun.aws.json.tpl template using the template and local providers, then uploads that file to S3. My understanding is that these providers are considered obsolete, but they work so well that I didn't see any reason to change. If you are not aware, Dockerrun.aws.json is the file Elastic Beanstalk needs in order to take my uploaded Docker image and run it on an EC2 instance.

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "${image_name}",
    "Update": "true"
  },
  "Entrypoint": "gunicorn",
  "Command": "--bind 0.0.0.0:8000 --workers 3 website.wsgi"
}

This is what the Dockerrun.aws.json.tpl file looks like. The image_name is provided from the ECR configuration, as you can see above.

resource "aws_iam_role" "ebs" {
  name = var.ebs_name

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "web_tier" {
  role       = aws_iam_role.ebs.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}

resource "aws_iam_role_policy_attachment" "worker_tier" {
  role       = aws_iam_role.ebs.name
  policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier"
}

resource "aws_iam_policy" "ecr" {
  name        = "${var.naming_prefix}-ECRFullAccessPolicy"
  description = "Provides full access to Amazon Elastic Container Registry (ECR)"

  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Action = [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:GetDownloadUrlForLayer",
          "ecr:GetRepositoryPolicy",
          "ecr:DescribeRepositories",
          "ecr:ListImages",
          "ecr:BatchGetImage"
        ],
        Effect   = "Allow",
        Resource = aws_ecr_repository.blog.arn,
      },
      {
        Action = [
          "ecr:GetAuthorizationToken"
        ]
        Effect   = "Allow"
        Resource = "*"
      }
    ],
  })
}

resource "aws_iam_role_policy_attachment" "ecr" {
  role       = aws_iam_role.ebs.name
  policy_arn = aws_iam_policy.ecr.arn
}

resource "aws_iam_instance_profile" "beanstalk_iam_instance_profile" {
  name = "blog-beanstalk-iam-instance-profile"
  role = aws_iam_role.ebs.name
}

This sets up the permissions Elastic Beanstalk needs to be able to pull the Docker image from ECR.

locals {
  common_tags = {
    project = "blog"
  }
  env = {
    for tuple in regexall("(.*)=(.*)", file("../.env")) : tuple[0] => sensitive(tuple[1])
  }
}

The important bit here is the env local. Since I was using an .env file for so much of my local development and testing, I thought it would be nice if OpenTofu could take that .env file and turn it into an OpenTofu map. I found out how to do this on StackOverflow. Do note the sensitive function, which keeps OpenTofu from logging important information like my database username and password.

resource "aws_elastic_beanstalk_application" "app" {
  name = var.ebs_name
  tags = local.common_tags
}

resource "aws_elastic_beanstalk_application_version" "app_version" {
  name        = "${var.ebs_name}-version"
  application = aws_elastic_beanstalk_application.app.name
  bucket      = aws_s3_bucket.ebs.id
  key         = aws_s3_object.ebs_deployment.id
  tags        = local.common_tags
}

data "aws_acm_certificate" "domain" {
  domain      = var.domain_name
  types       = ["AMAZON_ISSUED"]
  most_recent = true
}

resource "aws_elastic_beanstalk_environment" "env" {
  name                   = "ebs-blog-env"
  application            = aws_elastic_beanstalk_application.app.name
  version_label          = aws_elastic_beanstalk_application_version.app_version.name
  solution_stack_name    = "64bit Amazon Linux 2023 v4.5.2 running Docker"
  tier                   = "WebServer"
  wait_for_ready_timeout = "10m"
  tags                   = local.common_tags

  setting {
    name      = "IamInstanceProfile"
    namespace = "aws:autoscaling:launchconfiguration"
    value     = aws_iam_instance_profile.beanstalk_iam_instance_profile.arn
  }

  setting {
    name      = "EnvironmentType"
    namespace = "aws:elasticbeanstalk:environment"
    value     = "LoadBalanced"
  }

  setting {
    name      = "MaxSize"
    namespace = "aws:autoscaling:asg"
    value     = 1
  }

  setting {
    name      = "SecurityGroups"
    namespace = "aws:autoscaling:launchconfiguration"
    value     = join("", [aws_security_group.blog_security.id])
  }

  setting {
    name      = "InstanceTypes"
    namespace = "aws:ec2:instances"
    value     = var.blog_instance_class
  }

  setting {
    name      = "VPCId"
    namespace = "aws:ec2:vpc"
    value     = aws_vpc.blog.id
  }

  setting {
    name      = "Subnets"
    namespace = "aws:ec2:vpc"
    value     = join("", [aws_subnet.blog.id])
  }

  setting {
    name      = "ListenerProtocol"
    namespace = "aws:elb:listener:443"
    value     = "HTTPS"
  }

  setting {
    name      = "InstancePort"
    namespace = "aws:elb:listener:443"
    value     = "80"
  }

  setting {
    name      = "SSLCertificateId"
    namespace = "aws:elb:listener:443"
    value     = data.aws_acm_certificate.domain.arn
  }

  setting {
    name      = "ListenerEnabled"
    namespace = "aws:elb:listener"
    value     = false
  }

  dynamic "setting" {
    for_each = local.env
    content {
      namespace = "aws:elasticbeanstalk:application:environment"
      name      = setting.key
      value     = setting.value
    }
  }
}

This is the part of the OpenTofu configuration that actually creates all the Elastic Beanstalk resources. The first two resources create the application and the application version. The data block grabs my AWS-issued SSL certificate.

Finally, we have the Elastic Beanstalk environment, which is what actually runs the application. You will notice I have a lot of settings: they attach the IAM instance profile created above, make the environment load balanced but cap the autoscaling group at a single instance, attach my security group, set the instance type, place the environment in my VPC and subnet, configure the HTTPS listener on port 443 with my ACM certificate (forwarding to port 80 on the instance) while disabling the default listener, and finally pass every entry from my .env file through as an application environment variable.

Networking and Route 53

Since I decided to manage my database myself, I had to make sure that the networking between the database and the EC2 instance created by Elastic Beanstalk was properly set up.

resource "aws_vpc" "blog" {
  cidr_block           = var.vpc_cidr_block
  tags                 = local.common_tags
  enable_dns_hostnames = true
}

resource "aws_subnet" "blog" {
  vpc_id                  = aws_vpc.blog.id
  tags                    = local.common_tags
  map_public_ip_on_launch = true
  cidr_block              = var.blog_cidr_block
  availability_zone       = "us-west-1a"
}

resource "aws_subnet" "db1" {
  vpc_id                  = aws_vpc.blog.id
  tags                    = local.common_tags
  map_public_ip_on_launch = true
  cidr_block              = var.db1_cidr_block
}

resource "aws_subnet" "db2" {
  vpc_id                  = aws_vpc.blog.id
  tags                    = local.common_tags
  map_public_ip_on_launch = true
  cidr_block              = var.db2_cidr_block
  availability_zone       = "us-west-1b"
}

resource "aws_security_group" "blog_security" {
  name        = "${var.naming_prefix}_security"
  description = "Security group for my blog"
  vpc_id      = aws_vpc.blog.id
  tags        = local.common_tags
}

resource "aws_vpc_security_group_ingress_rule" "allow_tls" {
  security_group_id = aws_security_group.blog_security.id
  ip_protocol       = "tcp"
  from_port         = 443
  to_port           = 443
  cidr_ipv4         = "0.0.0.0/0"
}

resource "aws_vpc_security_group_ingress_rule" "allow_http" {
  security_group_id = aws_security_group.blog_security.id
  ip_protocol       = "tcp"
  from_port         = 80
  to_port           = 80
  cidr_ipv4         = "0.0.0.0/0"
}

resource "aws_vpc_security_group_ingress_rule" "db_access" {
  security_group_id            = aws_security_group.blog_security.id
  ip_protocol                  = "tcp"
  from_port                    = var.db_port
  to_port                      = var.db_port
  referenced_security_group_id = aws_security_group.db_security.id
}

resource "aws_vpc_security_group_egress_rule" "db_access" {
  security_group_id            = aws_security_group.blog_security.id
  ip_protocol                  = "tcp"
  from_port                    = var.db_port
  to_port                      = var.db_port
  referenced_security_group_id = aws_security_group.db_security.id
}

resource "aws_security_group" "db_security" {
  name        = "${var.naming_prefix}_db_security"
  description = "Security group for database"
  vpc_id      = aws_vpc.blog.id
  tags        = local.common_tags
}

resource "aws_vpc_security_group_ingress_rule" "allow_db_connection" {
  security_group_id            = aws_security_group.db_security.id
  from_port                    = var.db_port
  to_port                      = var.db_port
  ip_protocol                  = "tcp"
  referenced_security_group_id = aws_security_group.blog_security.id
}

resource "aws_internet_gateway" "gateway" {
  vpc_id = aws_vpc.blog.id
  tags   = local.common_tags
}

There is an ingress rule and an egress rule on the blog's security group for the database port, using the database's security group ID to reference where that traffic is coming from or going to, and a matching ingress rule on the database's security group that references the blog's. I also needed to make sure that outside traffic can reach my application, so there are inbound rules for HTTP and HTTPS as well.

Now that I have a working application running on a server, the next step is to set up my pre-purchased domain name. I used Route 53 to do this, and I decided this was one of the few resources I was not going to make part of my OpenTofu configuration. However, I will make sure you understand what steps I took to set it up. First, create a Hosted Zone, which will provide you with the URLs of four name servers. Then apply these to your domain name through your registrar as a "Custom DNS". This can take some time to propagate, but you can do the next steps in the meantime.

Now that you have a hosted zone, it is time to add a record.

  1. Add a record name; in my case, I added www to akatama.dev.
  2. Under Record type, select A
  3. Click the slider that says Alias so that it is active.
  4. Under Route traffic select "Alias to Elastic Beanstalk environment" and then your region.
  5. Select the URL you want to alias to. If there are multiple Elastic Beanstalk environments, make sure to select the correct one.
  6. Under Routing Policy, I selected Simple routing.
  7. Keep the default value of Yes for Evaluate target health.

Setting up an SSL Certificate

Like my Route 53 setup, it just didn't make sense to write an OpenTofu configuration for my SSL certificate. This is because I decided it was easier to request the certificate from AWS, and it does take some time for them to grant it to you. I used AWS Certificate Manager, and I waited around a day after setting up my domain name in the previous section. If you can connect to your website via HTTP with that domain name, then you are good to go.

  1. Click Request a certificate
  2. On the Certificate type page, click Next. The default of a public certificate is what I wanted.
  3. Under Fully qualified domain name, enter the domain name. For me, this was "www.akatama.dev".
  4. Keep the default of Disable export.
  5. Under Validation method, I went with DNS validation. Either one is fine, however.
  6. Pick the Key algorithm.
  7. Click Request

After this, it is time to wait. I think it took around 20 minutes for my certificate to be approved.

Final Django Changes

Now that I have everything set up on AWS, there are some final changes to make to my Django application. All of these changes are in the settings.py file. With Django, you can run uv run manage.py check --deploy and it will give you suggestions on what settings you need to add or change in order to deploy properly. You can read more about this in Django's deployment checklist documentation. You can see my full settings.py in my repo.

SECRET_KEY = os.environ.get("SECRET_KEY")
DEBUG = False
ALLOWED_HOSTS = os.environ.get("DJANGO_ALLOWED_HOSTS", "localhost").split(",")
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.getenv("RDS_DB_NAME"),
        "USER": os.getenv("RDS_USERNAME"),
        "PASSWORD": os.getenv("RDS_PASSWORD"),
        "HOST": os.getenv("RDS_HOSTNAME"),
        "PORT": os.getenv("RDS_PORT"),
    },
    "local": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
    },
}

First, here are the environment variables I used. Make sure you do not hard-code your SECRET_KEY. Also, you will note that I have two databases: one is a test database that I refer to as local, while the default database connects to the RDS instance created above.

CSRF_COOKIE_SECURE = True
SESSION_COOKIE_SECURE = True
CSRF_TRUSTED_ORIGINS = ["https://www.akatama.dev"]

These are the last configurations. CSRF_COOKIE_SECURE makes it so the CSRF cookie won't be transmitted over HTTP. Since you can only access my website over HTTPS, this probably isn't technically needed, but it isn't hurting anything and I feel more comfortable with it set. SESSION_COOKIE_SECURE is the same, except it stops the session cookie from being transmitted over HTTP. Finally, CSRF_TRUSTED_ORIGINS means any unsafe request must have an Origin header, or a Referer header, that matches one of the listed origins. The big one I am missing is SECURE_SSL_REDIRECT. This is because my certificate is not actually on the server my application is hosted on; SSL termination is handled by the load balancer instead, so enabling the redirect resulted in an endless loop. Removing that setting fixed the problem, but if I figure out how to keep it on in the future, I will let you know!
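
For what it is worth, the usual way to keep SECURE_SSL_REDIRECT on behind a TLS-terminating load balancer is to tell Django which forwarded header marks a request as already secure. I have not verified this in my own setup yet, and it is only safe if the proxy reliably sets X-Forwarded-Proto and strips any client-supplied value, but the sketch would look something like this in settings.py:

# Trust the load balancer's X-Forwarded-Proto header when deciding if a request is HTTPS.
# Only safe if the proxy always sets this header and strips it from incoming client requests.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
SECURE_SSL_REDIRECT = True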

Future Work

There are a few things I would like to add to this website in the future:

Thank you for reading!