Working Alongside an AI Pair Programmer
By Jimmy Lindsey
Dec. 17, 2025 | Categories: AI, LLM, development, thoughts
Introduction
This post is a detailed write-up of a personal project where I intentionally used an AI pair-programming tool, Aider, across the full lifecycle of a backend system: building APIs with FastAPI, containerizing the application, provisioning infrastructure with OpenTofu, deploying to Azure, and finally creating an MCP server that consumes the APIs.
This is a practical, experience-driven account of what worked, what didn’t, and where I still had to step in and make changes. I include the exact prompts I gave Aider and describe the kinds of mistakes it made, along with how I fixed or worked around them.
This is not a tutorial, a quickstart, or an argument that AI tools can replace engineering fundamentals. The overarching goal of this project was to evaluate how useful LLMs actually are for different technical tasks, while getting more hands-on experience using them. To use these tools effectively, you need to understand their limitations.
Goals
- Create an API with FastAPI that interacts with a database.
- Containerize the FastAPI application.
- Deploy the API + database onto Azure with OpenTofu.
- Create an MCP server that interfaces with Cursor.
- Use LLMs as much as possible for development.
Aider: The AI Pair Programmer
The AI tool I used for this project was Aider with GPT-5.1. Aider is well regarded among LLM aficionados and often compared favorably with GitHub Copilot and Cursor. I have not used Cursor very much, but if you have used GitHub Copilot (including its newly released CLI), you will feel right at home.
Aider itself doesn't come with any LLMs; instead, you configure one yourself, which includes providing an API key. For this project, I set up an OpenAI account and put my API key in a .env file for Aider to read whenever I started it. The command for that is aider --model gpt-5.1.
Aider is described as an AI pair programmer, and I think that fits the bill pretty well. The more descriptive you are, the more closely Aider will follow what you have in mind. You can also give it more general or vague prompts, but expect that it may not do what you want in that case. After using Aider to create the OpenTofu configuration for goal #3, I asked it this:
This looks great. Take a look at all the files in the opentofu directory and see if there are any resources left behind that we don't need anymore. Keep in mind that I want the database to have public network access enabled.
Aider then asked for access to all the remaining files in my /opentofu folder. It read the resources in each file, described what they were doing, and concluded whether each one still needed to exist or could be cleaned up. I can see this kind of prompt being useful when refactoring, to clean up unused functions or classes, and it is a good example of how Aider can act like a pair programmer.
From here on, I will call out other specific situations where Aider was useful, including the prompts I gave it. Most of the time, Aider was making code changes and printing the diff to my terminal, and I am not going to reproduce all of that here. Instead, I will just explain what kinds of changes Aider made.
Database and Initial OpenTofu configuration
When I first started, my first goal was to find some data I could insert into a database. After some research, I found a sample database for a book store called Gravity here. It also comes with a blog post that describes the database schema and how to set up a fresh database with the provided .sql scripts. The original repo contains the .sql scripts for many different databases, but I’ve included the PostgreSQL scripts I used in my repo for reproducibility.
What the database setup breaks down to is this:
# Create the database
psql -h <your_db_hostname> -U <your_db_username> -f 01_1_postgres_create_database.sql
# Create the tables
psql -h <your_db_hostname> -U <your_db_username> -d gravity_books -f 01_2_postgres_create.sql
...
# Populate orders
psql -h <your_db_hostname> -U <your_db_username> -d gravity_books -f 13_postgres_populate_orderhistory.sql
I was still a bit nervous about using Aider for the initial setup of my OpenTofu configuration, so I wrote the first versions of variables.tofu, network.tofu, database.tofu, dataSources.tofu, and providers.tofu myself. I really wanted to make sure my database was working before the next step, which was creating the APIs.
I created two PostgreSQL functions: search_author and search_books. Both take a name as a required argument and an optional publish_by_date, which defaults to 1900-01-01.
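As a quick sanity check, calling one of these functions from Python with psycopg looks roughly like this. The connection string is a placeholder, and I'm assuming you pass a full author name:

# Quick sanity check of search_author from Python (psycopg 3).
# The connection string below is a placeholder; fill in your own details.
import psycopg

with psycopg.connect("dbname=gravity_books host=localhost user=postgres") as conn:
    rows = conn.execute(
        "SELECT * FROM search_author(%s, %s)",
        ("Jane Austen", "1900-01-01"),
    ).fetchall()
    print(rows)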
APIs
I initially built a simple version myself to make sure I understood FastAPI, then reverted to a clean state and finally started using Aider.
in booksapi/main.py, I want you to make the following changes. We need to use FastAPI to create two APIs, first is get_books_by_author, found at /author/{author_name}, which will have an optional argument for publish_by_date. This api will use the Postgresql function search_author, which accepts those two arguments. Next is get_books_by_title, found at books/{book_name}, which will have an optional argument for publish_by_date. This api will use the Postgresql function search_books, which accepts those two arguments. In addition, you should try to parse the strings that the user inputs to these API so things like ` are properly escaped. Finally, we will need to set up an async database connection with AsyncConnectionPool from psycopg_pool. The environment variables for the DB connection are DB_NAME, DB_USER, DB_PASSWORD, DB_HOST and DB_PORT. If you think that the database connection information should go in its own file, then suggest that.
Aider then made the changes, but there were still some problems. It put everything in main.py, which I had done in my simple version, but I knew it would be better to refactor out the code that touched the database into its own file: db.py. Also, it had abstracted the database function calls too much in my opinion. So I asked it to fix those:
Let's extract out the database stuff into db.py. Also please refactor the Postgresql function call. Currently you have abstracted it too much, and it is hard to read. Some abstraction is okay, but in the future we may have future database calls that do not follow this same pattern. Also make sure that we don't pass `publish_by_date` if it is None. Both Postgresql functions have a default value for that argument if it is not passed in, but it won't be happy accepting None.
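For context, here is roughly the shape the code eventually settled into. This is a sketch with illustrative names (fetch_books_by_author is mine, not Aider's), using the simple keyword-style DSN that psycopg accepts:

# db.py -- sketch of the explicit call style I asked for (illustrative names,
# not Aider's actual diff)
import os
from psycopg_pool import AsyncConnectionPool

# A plain keyword/value DSN built from the environment variables is all psycopg needs
DSN = (
    f"dbname={os.environ['DB_NAME']} user={os.environ['DB_USER']} "
    f"password={os.environ['DB_PASSWORD']} host={os.environ['DB_HOST']} "
    f"port={os.environ['DB_PORT']}"
)

# open=False defers opening the pool; it gets opened at app startup (see the
# lifespan sketch further down)
pool = AsyncConnectionPool(DSN, open=False)

async def fetch_books_by_author(name: str, publish_by_date: str | None = None):
    # Only pass publish_by_date when it was provided, so the Postgres default applies
    if publish_by_date is None:
        query, params = "SELECT * FROM search_author(%s)", (name,)
    else:
        query, params = "SELECT * FROM search_author(%s, %s)", (name, publish_by_date)
    async with pool.connection() as conn:
        cur = await conn.execute(query, params)
        return await cur.fetchall()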
At this point, I tried running the app with uv run fastapi dev main.py and I got the error ImportError: attempted relative import with no known parent package. Aider had a problem properly importing db.py in main.py, but that was an easy fix.
Now the app was running, but the database connection failed because the way Aider initially constructed the DSN (Data Source Name) was incorrect. After some back and forth (it really wanted to make the DSN construction more complicated than it needed to be), the app was working. However, there were still a few small problems.
First, the code Aider wrote for calling the database functions was a bit too literal. For example, if all you remembered was that the author's first name was "Jane" and you searched for just that, the search would fail, because no author is named only "Jane". The solution was to prepend and append a % to the string after the user's input was parsed, which allows partial matching.
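In code, the fix is tiny; a hypothetical helper shows the idea (the function name is mine, not from the actual diff):

def to_search_pattern(user_input: str) -> str:
    # Wrap the sanitized input in LIKE wildcards so "Jane" matches "Jane Austen"
    return f"%{user_input}%"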
The next problem was a runtime warning when I ran FastAPI:
I see this warning when I run fastapi: RuntimeWarning: opening the async pool AsyncConnectionPool in the constructor is deprecated and will not be supported anymore in a future release. Please use `await pool.open()`, or use the pool as context manager using: `async with AsyncConnectionPool(...) as pool:`
This was a fix Aider handled nicely. I have some experience working with Python, so I could've fixed the import issue faster than Aider did, but I am not familiar with FastAPI at all. Sure, I could've done research and figured it out, but just giving Aider the error message was enough for it to fix the problem.
Lastly, there was an error from my linter:
I see this lint error in booksapi/main.py: The method "on_event" in class "FastAPI" is deprecated, use lifespan event handlers instead. Can you try to fix that?
Just like the last fix, this left me impressed with Aider. I only spent around 30 minutes and 20 cents in OpenAI usage, and I had a working API application!
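Both fixes come down to the same change: a FastAPI lifespan handler that opens and closes the pool explicitly. The sketch below shows the general shape, not Aider's exact diff:

# main.py -- sketch of the lifespan handler that replaces the deprecated
# on_event hooks and the open-in-constructor pool behavior
from contextlib import asynccontextmanager
from fastapi import FastAPI

from db import pool  # the AsyncConnectionPool created with open=False

@asynccontextmanager
async def lifespan(app: FastAPI):
    await pool.open()   # open the pool explicitly, as the warning asks
    yield
    await pool.close()  # close it cleanly on shutdown

app = FastAPI(lifespan=lifespan)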
Dockerfile
Using the Dockerfile I created for my website as a base, I built a container image for my FastAPI application. I also had Aider create a separate Dockerfile, though I knew going in that I would be using my own. Overall, Aider produced a usable Dockerfile, but not one I would ship without manual cleanup. Specifically, the resulting image is about 41 MB larger than the one I created, and that was after some additional prompting from me to improve it. If you don't know much about creating a Dockerfile, you certainly could do worse than Aider.
You can see the Dockerfile I created or the Dockerfile Aider created for reference.
Here is the prompt I wrote to get started:
Please create a Dockerfile for the project in /booksapi. Note that this project uses uv instead of pip, so all the proper setup for using uv over pip should be done in the Dockerfile. Please also use multi-stage builds to make the smallest container image you can. Also note that /booksapi/Dockerfile already exists, and I don't want you to override that one. Create one called Dockerfile-aider
This created a pretty good Dockerfile, but I knew it could be better:
booksapi/Dockerfile-aider is pretty good, but I see a few improvements you can make. Again, please only make these improvements to Dockerfile-aider. First, there are quite a few lines that are duplicated between the build and runtime images. Make an image called base that houses these duplicate lines, then have builder and runtime use the base image to start with. Second, we are running our application here as the root user, please use a non-root user in the runtime step. Finally, while calling uvicorn directly isn't inherently incorrect, fastapi provides an interface to run uvicorn, which is fastapi run main.py.
This was okay, but I probably should've pointed out the specifics a bit more, since I had to be even clearer:
There are still more duplicate lines you can extract out to the base image. Also, the proper starting command with the fastapi cli is fastapi run main.py. We don't need anything else at this moment. Finally, when you are copying from the builder image, you can use the --chown argument to make it owned by appuser:appuser without doing a separate RUN command.
Finally, I finished with these three prompts:
I believe there is a better way to install uv. We can do this instead: COPY --from=ghcr.io/astral-sh/uv:0.9.11 /uv /bin/uv. Note that means we will not need to install curl anymore.
You do not need to keep activating the venv in the builder step, you can instead just use ENV PATH="/app/.venv/bin:${PATH}"
Expose port 8000, please
uv is pretty new, so it's possible that in the near future Aider (and other CLI LLM tools) will be better at using it in Dockerfiles. I take creating container images pretty seriously, so I am picky. All you really would've needed to do to make the Dockerfile work after the first prompt is expose the port, and honestly you could've done that with a command like docker run -p 8000:8000 bookapis-aider.
In the end, Aider was convinced that we needed ca-certificates installed for some reason, and that dependency probably accounts for the increase in image size; I could've forced Aider to remove it, but at that point I was already satisfied with what I saw. In the future, I think I will only use Aider to create the skeleton for Dockerfiles. That could be especially useful if you are containerizing applications written in languages or frameworks you aren't familiar with, as it can point you in the right direction. To make the images as small as possible, however, it will still need a human touch.
Finishing the OpenTofu configuration
Now that I had an application and a container image, it was time to deploy onto Azure. First, I needed a place on Azure to store my container image, so it was time to create an Azure Container Registry.
Take a look at the files in /opentofu, which includes an OpenTofu configuration for deploying resources onto Azure. Please make the following changes: 1. Create any needed Resources for Azure Container Registry. Please create a new file for this resource.
That looks good, but can you remove var.ContainerRegistryName and instead just use var.ResourceBaseName + "acr" as the container registry name?
This last prompt ended up being impressive, because Aider actually remembered it and continued to name later resources var.ResourceBaseName + the resource name. I was then able to push my container image to Azure.
Now it was time to create an App Service Plan and a Linux Web App to run our container:
Create an azurerm_service_plan and azurerm_linux_web_app that uses the container found in the container registry created in container_registry.tofu
That looks good. Can we move out the azurerm_service_plan and azurerm_linux_web_app into their own file? Also, make sure we add any variables we created to opentofu/variables.tofu.
We need to add a few more app_settings. Here I will list them and where we can grab them. DB_NAME is found in var.DBName. DB_USER is found in var.AdminUserName. DB_PASSWORD is found in the azurerm_key_vault_secret resource named my_sql_admin_password, which you can find in opentofu/dataSources.tofu. DB_HOST is from the azurerm_postgresql_flexible_server resource named postgres_server in opentofu/database.tofu. DB_PORT can be found from the same postgres_server resource.
Here I led Aider astray, because I didn't realize that the Azure PostgreSQL resource does not export the port number. It tried to reference it anyway, and when I tried to deploy I got an error. In the end, Aider hardcoded the port number. It probably would've been better to make it a variable, but it worked for me.
Can we link our app service to the "app_subnet" subnet found in opentofu/network.tofu? Also, we need to create a public IP in network.tofu that we associate to the app service so that I can access it. Let me know if this is incorrect, as it has been a while since I have used this resource.
Please add a delegation to the "app_subnet". The delegation should be for Microsoft.Web/serverFarms
Aider did exactly what I asked, and even pointed out that Azure Web Apps do not need a public IP created for them explicitly. That was very helpful! It did forget to add the appropriate delegation to the subnet, but one prompt fixed that.
Can we move out the virtual network swift connection from app_service.tofu to network.tofu? Just so we have the networking resources in the same file.
Please remove all the app_settings that start with DOCKER, as that is causing an error when I try to deploy.
When Aider created the resource, it added a bunch of app_settings that started with DOCKER. These app_settings do need to exist, but Azure creates them as the resource is deployed; filling them in yourself in OpenTofu causes a deployment error.
It seems like the app service we created doesn't have permissions to access the container registry. Do you know how to fix that?
Self-referential block
│
│ on app_service.tofu line 33, in resource "azurerm_linux_web_app" "web_app":
│ 33: container_registry_managed_identity_client_id = azurerm_linux_web_app.web_app.identity[0].principal_id
│
│ Configuration for azurerm_linux_web_app.web_app may not refer to itself.
<Other prompts trying to figure out what was wrong>
I am still seeing an image pull failure. It looks like the container image is "booksapiacr.azurecr.io/booksapiacr.azurecr.io/booksapi:latest", which has a duplicate for the url. It should be "booksapiacr.azurecr.io/booksapi:latest". Can you see where to fix this?
This was probably the biggest mistake Aider made. There was actually a point during this issue where I stopped for the day and came back to it later. It was only then that I realized that it had accidentally used variables to duplicate part of the URL. It was able to fix it once I pointed it out, however.
This works! I can now access my container properly. However, there still seem to be issues connecting to the database. When I try to run a query, I get an error like this "Error querying database using search_author". I know for a fact that the function search_author works and has no problems, because I can call it myself by connecting to the database. Do you think there may still be a networking issue between the webapp and the database?
I think you are on the right track, but please don't disable the public access. Let's just make sure we are connecting to the database through the private endpoint hostname.
Aider wanted to disable public access to the database. Normally, that would be a good idea, but since this is just a personal project, I don't really care about that. Also, disabling public access for these Azure PostgreSQL resources causes them to be recreated, and I had already spent time setting them up. Aider couldn't quite figure out how to do this on its own; I had to spell out that it should connect through the private endpoint hostname.
In the end, Aider made enough mistakes in the OpenTofu configuration that careful review was unavoidable. However, if you have even some knowledge, it can be a good way to get started, since HCL isn't exactly the most terse configuration language. It's hard for me to recommend it on its own for production applications. If you are using Aider with any LLM to create OpenTofu or Terraform configs, I would say you need to go over what it has written with a fine-tooth comb.
MCP Server
Now that I had my APIs up and running on Azure, it was time to create the MCP server and test it out. Unlike the other parts of this project, I created a whole separate repo for it. I ended up doing this twice, because the first time Aider acted pretty strangely. It turns out that Aider works much better if you activate your project's Python virtual environment before you start prompting.
Please create an MCP server with the python package mcp[cli]. The goal of this MCP server is to call APIs that I have created in another git repo. You can find the OpenAPI specification here: https://booksapi-webapp.azurewebsites.net/openapi.json. Please use httpx to scrape that webpage, do NOT use Playwright, I will not allow it to be installed. Make all changes to main.py
Can we change to use from mcp.server.fastmcp import FastMCP, Context instead of the other mcp imports?
RuntimeError: Already running asyncio in this thread
This application should only be run asynchronously, and we don't need to run functions. In fact, just put mcp.run(transport="stdio") under if __name__ == "__main__":
Again, Aider was able to fix the error I reported, and it easily made my requested changes.
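For reference, the final main.py ended up shaped roughly like this. It's a simplified sketch, not the generated code; the get_books_by_author tool body is illustrative:

# main.py -- simplified sketch of the MCP server (illustrative, not the exact
# code Aider generated)
import httpx
from mcp.server.fastmcp import FastMCP

BASE_URL = "https://booksapi-webapp.azurewebsites.net"
mcp = FastMCP("gravity-books-server")

@mcp.tool()
async def get_books_by_author(author_name: str, publish_by_date: str | None = None) -> str:
    """Look up books by author name, optionally filtered by publish_by_date (YYYY-MM-DD)."""
    params = {} if publish_by_date is None else {"publish_by_date": publish_by_date}
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{BASE_URL}/author/{author_name}", params=params)
        resp.raise_for_status()
        return resp.text

if __name__ == "__main__":
    mcp.run(transport="stdio")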
I was then able to test out my MCP server with Cursor. I set up my Cursor configuration like this:
"gravity-books-server":
{
"command": "/home/jimmy/Documents/repos/MCP-Book-Project/.venv/bin/python",
"args": [
"/home/jimmy/Documents/repos/MCP-Book-Project/main.py"
],
"description": "A set of tools that you can use to look up book information in a database"
},
It works! I will say Cursor itself has a small problem with the publish_by_date parameter. For some reason, it keeps trying to pass only the year instead of a full YYYY-MM-DD date. When it formats the date correctly, though, everything works. Cursor won't just return the data; it will also give you a summary of what was returned. Pretty cool!
Future Work
- Update my APIs to accept just the year for the publish_by_date parameters, with a default month and day (January 1st); see the sketch after this list.
- Deploy my MCP server on Azure so GitHub Copilot and Cursor can connect to it remotely.
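For the first item, the change on the API side is small; something like this hypothetical helper would do it (the name is mine):

def normalize_publish_by_date(value: str) -> str:
    # Accept a bare year like "1990" and expand it to January 1st of that
    # year; pass full YYYY-MM-DD dates through unchanged.
    if len(value) == 4 and value.isdigit():
        return f"{value}-01-01"
    return value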
Conclusion
After finishing this project, my overall impression of Aider is that it works best as an accelerator, not as a replacement for understanding what you’re building. It was genuinely helpful for scaffolding code and configuration, iterating on fixes, and smoothing over unfamiliar areas like FastAPI and MCP. At the same time, it regularly produced solutions that were either overly abstracted, subtly incorrect, or simply not something I would want to maintain without revisiting.
That pattern showed up across nearly every part of the project. For application code, Aider could get things running quickly, but it often needed refactoring to improve clarity or correctness. For Dockerfiles, it produced something usable, but not something I would ship without manual cleanup. For OpenTofu, it was useful for getting started, but it required careful review, since small mistakes in infrastructure code can be difficult to diagnose later.
I will continue to use Aider, but with clear expectations. It is very good at getting you moving and helping you explore unfamiliar tools or patterns. It is much less reliable when left unchecked, especially for infrastructure and deployment work. Used thoughtfully, it feels less like an autopilot and more like a fast-moving junior pair programmer: one that can save time, as long as you're willing to slow down and review what it produces.