Last month, I built what I thought was a brilliant solution. We had dozens of APIs with outdated or missing documentation. So I wrote a script that used AI to scan our code and generate documentation automatically.
The Setup
It was simple, really: feed the AI our endpoint code and request/response schemas, and let it write the docs. Within a week, I had auto-generated docs for 40+ endpoints. It looked amazing. Clean formatting, example requests, response codes. My team was impressed.
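The core of a script like that is just prompt assembly. Here is a minimal sketch of the idea, with the model call itself left out; all names (`build_doc_prompt`, `generate_prompts`, the `.schema.json` convention) are hypothetical, not our actual setup:

```python
from pathlib import Path

def build_doc_prompt(endpoint_source: str, schema_source: str) -> str:
    """Assemble a prompt asking the model to document one endpoint."""
    return (
        "Write API reference documentation for the endpoint below.\n"
        "Include example requests, response codes, and field descriptions.\n\n"
        f"--- Endpoint code ---\n{endpoint_source}\n\n"
        f"--- Schema ---\n{schema_source}\n"
    )

def generate_prompts(endpoint_dir: str) -> dict[str, str]:
    """Build one documentation prompt per endpoint file.

    The actual call to the AI model is omitted; this only gathers the
    source and schema that get fed to it.
    """
    prompts = {}
    for path in sorted(Path(endpoint_dir).glob("*.py")):
        schema_path = path.with_suffix(".schema.json")
        schema = schema_path.read_text() if schema_path.exists() else "{}"
        prompts[path.stem] = build_doc_prompt(path.read_text(), schema)
    return prompts
```

Point the loop at a directory of handlers, send each prompt to a model, save the responses as markdown. That's the whole trick, and it's why it only took a week.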
The Problem Started Small
A junior developer asked me about a specific endpoint. “The doc says this field is optional, but I’m getting errors when I don’t send it.”
I checked the code. He was right. The field was required in the validation logic, but the AI had marked it as optional based on how it interpreted the schema.
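To make the failure mode concrete, here is a contrived example of the pattern (field names and schema are invented for illustration, not our real endpoint). The schema the AI saw doesn't list the field as required, but the validation code that actually runs does:

```python
# What the doc generator saw: "email" is absent from "required",
# so the generated docs called it optional.
USER_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "email": {"type": "string"},
    },
    "required": ["name"],
}

# What actually runs, in a different file: stricter than the schema says.
def validate_request(payload: dict) -> list[str]:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    if not payload.get("name"):
        errors.append("name is required")
    if not payload.get("email"):  # required here, optional per the schema
        errors.append("email is required")
    return errors
```

The AI only ever saw the schema. The junior dev only ever hit the validator.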
That’s when I went down the rabbit hole.
What I Found Was Scary
I audited all 40 endpoints. Twelve of them had incorrect documentation. Not minor typos, but genuinely misleading information about:
- Required vs optional fields
- Response status codes
- Error message formats
- Authentication requirements
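Auditing the required-vs-optional mismatches by hand was tedious. A check along these lines can flag them automatically; this is a sketch of the approach, not the script I actually used, and `audit_optional_fields` and its parameters are hypothetical names:

```python
def audit_optional_fields(doc_optional_fields: set[str],
                          validator,
                          valid_payload: dict) -> set[str]:
    """Return fields the docs call optional but the code actually requires.

    For each documented-optional field, drop it from a known-valid payload
    and re-run the real validator. If validation now fails, the docs lied.
    """
    mismatches = set()
    for field in doc_optional_fields:
        payload = {k: v for k, v in valid_payload.items() if k != field}
        if validator(payload):  # non-empty error list => actually required
            mismatches.add(field)
    return mismatches
```

Driving the real validation code with real payloads is the point: you're testing behavior, not re-reading the same schema the AI misread.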
The AI was confident. It wrote with certainty. That was the worst part — it never said “I don’t know” or “check this manually.”
What I Learned
Here’s the thing: AI is great at generating structure. It’s terrible at understanding context. It doesn’t know about that one edge case your PM added last quarter. It doesn’t see the validation logic buried in middleware three files away.
Now I use AI as a first draft, not final documentation. It helps me get started, but every single word gets verified by a human who actually understands the code.
The Takeaway
If you’re using AI to generate technical docs: verify everything. Your team will trust those docs, and when they’re wrong, you won’t know until production incidents start happening.
AI can write fast. But it can’t think critically. That’s still your job.
