Akatama's Slice of the Internet

What Co-Intelligence Taught Me About Working with AI

By Jimmy Lindsey

Sept. 17, 2025 | Categories: AI, LLM, development

Like many engineers, I’ve been experimenting with AI tools such as GitHub Copilot. They’re useful, but I’ve often wondered how to get more out of them. Not just at work, but in everyday tasks like writing these blog posts. That curiosity led me to Ethan Mollick’s Co-Intelligence: Living and Working with AI. The book isn’t written just for tech professionals. Mollick isn’t a programmer, but he studies emerging technologies, and he writes for a general audience. That said, the ideas apply just as well to engineers like me.

The Four Rules for Co-Intelligence

1. Always invite AI to the table

The first rule is simple: always use AI. There are certainly many tasks LLMs are not good at, but to truly understand where those limits are, you first need to understand the capabilities of the model you are using. As with any tool, the more experience you have with it, the better you will be at using it. It can save you from tedious work and free you up for what matters most. However, it is important to keep humans in the loop.

2. Be the human in the loop

LLMs may sound like a person, but they're just predicting the next word, so they don't actually "know" anything. This is why an LLM can't tell when it is lying or hallucinating. There needs to be a human in the loop to verify that what the AI has done is correct. Doing that well takes skill, specifically strong fundamentals across a potentially diverse range of topics, and it takes critical thinking.

3. Treat the AI like a person, but tell it what kind of person it is

When you are assigning a task to the AI, give it a persona. Don't just tell it, "I want you to add a new blue 'Newsletter' button to the homepage" or "Can you give me suggestions to improve my blog post?". Instead, tell it what kind of person it should be, ideally the kind of person you are trying to reach. For the blog post, you might say something like "You are Michele, an experienced programmer. What do you think of my blog post?"

When you do this, you will get a better and more tailored response. In this case, it will give me suggestions and thoughts on my blog post from the perspective of an experienced engineer. Since that is the kind of person I want to reach with my blog posts, getting this perspective can be very helpful. I can also see this being applied to code pretty easily. Instead of just waiting until my PR is code reviewed, I can use the LLM to do some code review for the PR. For example, I could ask "Imagine you are my manager who is reviewing the changes in my current branch as a PR. What suggestions for improvement would you make?"
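To make this concrete, here is a minimal sketch of persona prompting as an API call. It assumes the OpenAI Python SDK; the persona, model name, and file path are my own illustrations, not examples from the book, and the same pattern works with any chat-style LLM:

```python
# A minimal sketch of persona prompting, assuming the OpenAI Python SDK
# (pip install openai); the persona, model name, and file path below are
# illustrative, not from the book.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_as_persona(persona: str, request: str) -> str:
    """Send a request to the model while it plays the given persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            # The system message establishes what kind of person the AI is.
            {"role": "system", "content": persona},
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content

# Example: blog-post feedback from the audience I'm trying to reach.
draft = open("blog_post.md").read()
print(ask_as_persona(
    persona="You are Michele, an experienced programmer who reads "
            "engineering blogs with a critical eye.",
    request=f"What do you think of this blog post?\n\n{draft}",
))
```

The code-review variant is the same pattern: make the persona a manager reviewing a PR and pass in the output of `git diff` as the request.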

LLMs won't get it right every time, and you won't always prompt them perfectly either. This is why rule #1 is so important: it is not that you expect the AI to solve your problem completely, just that you want to see if you can improve the results you get from it, and whether you can use it to improve your own work.

4. Assume this is the worst AI you will ever use

In the book, Mollick makes a big deal of the rapid improvement in LLMs. The improvement certainly was rapid, although it looks like it has slowed down drastically. Still, between AI progress and our growing skill at using it, the point holds.

Hallucinations and Creativity

Even if you haven't used an LLM, you've likely heard how they can produce nonsense, sometimes harmless and sometimes destructive. I have experienced it myself, and I have always followed the usual guidance: clear the context and try again until the AI actually does what you want. Mollick, however, points out an interesting truth: while hallucinations are the biggest weakness of LLMs, they are also their biggest strength. Hallucinations make AI creative, allowing it to form unique connections beyond its training data.

There is a lot of doom and gloom about AI being used for creative work, and I can understand why. Art is something that makes us human, and handing that task to AI just seems to lack soul. But maybe AI will allow us to be more creative in the future. Current LLMs certainly are pretty good at writing: they are amazing at summarizing an email or a document, and excellent at suggesting changes to introductions and conclusions. I would say you can certainly use AI for creative work, but make sure you are in the driver's seat. If you let LLMs do all the writing, you miss the benefit of organizing your own thoughts. However, I see no problem with writing a first or second draft (minus conclusions and/or introductions) and then passing it to an LLM for improvements.

Centaurs and Cyborgs

Just as nail guns didn’t eliminate roofers, AI won’t eliminate our jobs. Previous advances in programming, such as assembly language and compilers, made programming more accessible. They didn't result in fewer software engineers; instead, there were a lot more. AI will do the same thing, although some subset of jobs may cease to exist, as has happened with previous productivity enhancements.

The most important thing is to make sure that you don't let LLMs hurt your learning and skill development. As the human in the loop, you will be in charge of what tasks you will give to AI, and you will be in charge of checking its output. As such, it is still critical that you gain new skills and continue to learn.

Mollick then suggests two approaches for deciding which tasks to assign to AI. The first he calls "Centaur", where there is a clear line between the person and the LLM. This approach keeps some tasks as "me" tasks, which you do entirely on your own. Presumably, these are the tasks you are strongest at, or the ones you enjoy the most. You might also avoid using AI on tasks you're still learning, to ensure you build the skills yourself. The point is to be strategic about how you use AI.

The second approach he calls "Cyborg". With this approach, you blend your efforts with the AI's. You aren't keeping AI out of any tasks; instead, you are working in tandem with it. This can be a really powerful method, provided you are experienced enough to recognize when the AI is wrong and to understand its other limitations.

You do not need to pick one approach and stick with it; you can choose based on the task at hand. For example, I definitely use the "Centaur" method when I write blog posts. Yet when I had a long-running task at work to parallelize thousands of tests, I worked in tandem with the AI in "Cyborg" fashion.

Conclusion

Overall, I think that Co-Intelligence is worth reading. I didn’t even touch on his chapters about training, ethics, and using LLMs for teaching or mentoring. I personally was already familiar with these topics, but any book on using LLMs would be lacking without them. I think this book is great for people who are just starting to get used to LLMs, and it is still a great read for those with more experience in that area. He also provides very useful examples of how to use AI, much more in-depth than what I covered in this blog post.

Reading Mollick’s book reminded me that AI isn’t here to replace our judgment, but to challenge and extend it. The more intentional I am about using AI, and when to rely on my own skills, the more I’ll grow as an engineer. I expect my approach will keep evolving, but that balance of human and machine is the real lesson I’m taking away from Co-Intelligence.