Preparing Documentation for LLMs: A Better Way to Generate Code

LLM · Documentation · AI Development · Mermaid

You know that feeling when you ask an AI to write code and it gives you something that... works, but doesn't quite fit? Yeah, I've been there too. After many "almost perfect" attempts, I discovered something that changed everything: prepare documentation first, then let the AI do its magic.

Instead of throwing vague requests at the AI and hoping for the best, I now create detailed plan documents and Mermaid diagrams. Think of it like giving directions to someone: the clearer you are, the less likely they'll end up at the wrong destination (or in our case, writing code that needs 47 revisions).

Why Bother with Documentation?

I know, I know, documentation sounds boring. But hear me out. Here's what actually happens when you skip it:

  • You think you know what you want... until you see what the AI built
  • The AI makes assumptions (and trust me, AI assumptions are wild)
  • You end up in a loop of "can you change this?" and "actually, make it like that"
  • Future you (or your teammate) will curse present you

With good documentation, the AI actually understands what you're building. It's like the difference between "make me a sandwich" and "make me a turkey sandwich on whole wheat, no mayo, extra pickles": one gets you exactly what you want, the other gets you... something sandwich-like.

Creating Plan Documents (Yes, It's Worth It)

Here's my secret: I break down big features into small, digestible pieces. Think of it like a recipe: you wouldn't just say "make pasta" and expect perfect carbonara, right? You need ingredients, steps, timing... same thing with code.

My plan documents include: what we're building (overview), what tools we're using (tech stack), how files are organized (structure), who does what (component responsibilities), how things talk to each other (APIs), and what data looks like (models). It sounds like a lot, but honestly, it takes less time than fixing code that went in the wrong direction.
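To make this concrete, here's a trimmed skeleton of what one of my plan documents looks like (the feature and all the names are made up for illustration, and the headings are just my habit, not a required format):

```markdown
# Feature: Invoice Export

## Overview
Let users export invoices as PDFs, generated server-side.

## Tech Stack
Node.js, Express, PostgreSQL

## Structure
src/export/  → export routes and service
src/pdf/     → PDF rendering module

## Components
- ExportService: fetches invoice data, owns retry logic
- PdfRenderer: turns invoice data into a PDF buffer

## API
POST /invoices/:id/export → returns { url: string }

## Data Models
Invoice: { id, customerId, items[], total, createdAt }
```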

Mermaid Diagrams: Because Pictures Speak Louder

Here's a fun fact: AI is actually pretty good at reading diagrams. Mermaid diagrams let me draw (well, write) pictures of how things connect, and the AI gets it immediately. It's like showing someone a map instead of describing every turn.

I use them to show: how components relate to each other, how data moves around, how databases connect, what users do, and how APIs talk. One diagram can replace three paragraphs of explanation, and the AI actually understands it better.
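For instance, a component-relationship diagram for a made-up three-piece app is only a few lines of Mermaid (all the names here are hypothetical):

```mermaid
graph TD
    UI[Web UI] --> API[REST API]
    API --> Auth[Auth Service]
    API --> DB[(PostgreSQL)]
```

Four lines, and the AI (and any human) can see the whole shape of the system at a glance.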

Plus, they look cool. There's something satisfying about seeing your architecture as an actual diagram instead of trying to visualize it in your head (where it probably looks like spaghetti).

My Actual Workflow (No Magic, Just Process)

So here's what I actually do, step by step:

  • Write the plan document (takes 15-20 minutes, saves hours later)
  • Add Mermaid diagrams for the complex parts (because I'm visual like that)
  • Give it all to the AI and watch it work its magic
  • Review the code (it's usually pretty good at this point)
  • Celebrate not having to rewrite everything

The whole process might take a bit longer upfront, but I've learned that "fast" code that needs 5 rounds of fixes is actually slower than "slow" code that works the first time. Math checks out.

Real Example: Building a Backup Process

Let me show you a real example from my actual work. I needed to build a backup system for a database. My first instinct? Just ask the AI to "make a backup system." But I've been burned before, so I took a deep breath and wrote documentation first.

Step 1: Plan Document Structure

I started with a plan document that included:

  • Overview: Automated daily database backups with retention policy
  • Technology stack: Node.js, PostgreSQL, AWS S3
  • Components: Backup scheduler, compression module, upload service, cleanup service
  • Data flow: Database → Backup → Compress → Upload → Verify → Cleanup
  • Error handling: Retry logic, notification system, logging

Step 2: Mermaid Diagram

Then I created a Mermaid sequence diagram to visualize the backup flow:

```mermaid
sequenceDiagram
    participant Scheduler
    participant DB
    participant Compressor
    participant S3
    participant Cleanup

    Scheduler->>DB: Trigger Backup
    DB-->>Scheduler: Backup File
    Scheduler->>Compressor: Compress Backup
    Compressor-->>Scheduler: Compressed File
    Scheduler->>S3: Upload Backup
    S3-->>Scheduler: Upload Confirmation
    Scheduler->>Cleanup: Remove Old Backups
    Cleanup-->>Scheduler: Cleanup Complete
```


Step 3: Let the AI Do Its Thing

Now here's where it gets good. I gave all this documentation to the AI, and it actually understood:

  • What components to build (no guessing!)
  • How everything connects (the flow was clear)
  • Where to put error handling (because I told it where)
  • The overall structure (it followed the blueprint)

The code it generated? Actually good. Like, "I would write this myself" good. Proper structure, good error handling, clean separation: all because I gave it clear instructions instead of hoping it would read my mind.
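The orchestration layer looked roughly like this sketch (simplified, with hypothetical module names, so treat it as the shape of the result rather than the verbatim output):

```typescript
// backup-scheduler.ts — simplified sketch of the backup pipeline.
// Module names and signatures here are illustrative, not the actual generated code.
import { dumpDatabase } from "./backup";          // pg_dump wrapper (hypothetical)
import { compressFile } from "./compressor";      // gzip wrapper (hypothetical)
import { uploadToS3 } from "./uploader";          // S3 upload wrapper (hypothetical)
import { removeExpiredBackups } from "./cleanup"; // retention policy (hypothetical)

const MAX_RETRIES = 3;

// Retry a step a few times before giving up: the "retry logic" from the plan.
async function withRetry<T>(step: string, fn: () => Promise<T>): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      console.error(`${step} failed (attempt ${attempt}):`, err);
      if (attempt >= MAX_RETRIES) throw err;
    }
  }
}

// One backup run, mirroring the sequence diagram:
// dump → compress → upload → cleanup, with logging at each step.
export async function runBackup(): Promise<void> {
  const dumpPath = await withRetry("backup", () => dumpDatabase());
  const archivePath = await withRetry("compress", () => compressFile(dumpPath));
  await withRetry("upload", () => uploadToS3(archivePath));
  await withRetry("cleanup", () => removeExpiredBackups());
  console.log("Backup complete:", archivePath);
}
```

Notice how each step maps one-to-one onto an arrow in the diagram. That's not a coincidence: the AI had the diagram, so it built the code to match.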

TDD: Write Tests First (Trust Me on This)

I used to write code first, then tests. Then I tried TDD (Test-Driven Development) and... wow, it's actually better. Who knew? (Everyone, apparently, but I'm slow to catch on sometimes.)

I learned about TDD from Eric Elliott a few years ago, and it completely changed how I approach coding. If you want to dive deeper into TDD, his article on Medium is a great starting point: https://medium.com/javascript-scene/testing-software-what-is-tdd-459b2145405c

Now I include test specs in my plan documents. I tell the AI: "Here's what should be tested, here's how it should behave, here are the edge cases, and here's how it connects to other stuff." Then I ask it to write tests first, then make the code pass those tests.

Why? Because:

  • Tests tell you exactly what the code should do (no ambiguity)
  • You don't break existing stuff (tests catch it immediately)
  • Tests are documentation that never gets outdated (they have to pass!)
  • You can actually sleep at night knowing your code works

The first time I did this with AI, I was skeptical. But seeing it write tests, then code that passes those tests? Chef's kiss. It's like having a safety net that also tells you what you're building.

Real Example: Backup Cleanup with TDD

When I needed to add cleanup to that backup system, I wrote test specs first:

  • Test: Delete old backups (the ones past their expiration date)
  • Test: Keep recent backups (don't delete the good ones!)
  • Test: Handle errors without crashing (because things break)
  • Test: Log what it's doing (so we know what happened)

The AI wrote all the tests first, then wrote code to make them pass. And you know what? It caught an edge case I hadn't thought about (what if there are no backups to delete?). That's the power of TDD: it makes you think through the "what ifs" before they become "oh no" moments.
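To give you a flavor, those specs translate into tests roughly like these (a Jest-style sketch with hypothetical function names, including that empty-list edge case):

```typescript
// cleanup.test.ts — Jest-style sketch of the cleanup specs.
// cleanupOldBackups, listBackups, and deleteBackup are hypothetical names, not a real API.
import { cleanupOldBackups } from "./cleanup";
import { listBackups, deleteBackup } from "./storage";

jest.mock("./storage");
const mockedList = listBackups as jest.Mock;
const mockedDelete = deleteBackup as jest.Mock;

const DAY = 24 * 60 * 60 * 1000;
const now = Date.now();

test("deletes backups past their expiration date", async () => {
  mockedList.mockResolvedValue([{ key: "old.gz", createdAt: now - 40 * DAY }]);
  await cleanupOldBackups({ retentionDays: 30 });
  expect(mockedDelete).toHaveBeenCalledWith("old.gz");
});

test("keeps recent backups", async () => {
  mockedList.mockResolvedValue([{ key: "fresh.gz", createdAt: now - 2 * DAY }]);
  await cleanupOldBackups({ retentionDays: 30 });
  expect(mockedDelete).not.toHaveBeenCalled();
});

test("handles an empty backup list without crashing", async () => {
  mockedList.mockResolvedValue([]); // the edge case the AI caught: nothing to delete
  await cleanupOldBackups({ retentionDays: 30 }); // should resolve, not throw
  expect(mockedDelete).not.toHaveBeenCalled();
});
```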

So there you have it. Documentation + Mermaid diagrams + understanding existing code + TDD = AI-generated code that's actually good. Not perfect, but good enough that you don't want to rewrite it immediately. And honestly, that's a win in my book.