Writing Code in the LLM Era

Large Language Models (LLMs) are built to detect patterns and extend them. Because they're trained on vast amounts of publicly available code — much of which did not follow ideal software-engineering practices — they don't always produce “textbook-perfect” solutions. To get the most out of them, developers may need to loosen some traditional software-engineering habits.
For example, maybe you've always avoided repeating yourself and consistently refactored functions to be as abstract and reusable as possible. An LLM, however, often performs better when you keep things explicit and straightforward, even if that means a little repetition. That structure gives it better context and reduces the chance of it “getting creative” in the wrong direction.
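As a rough sketch (the functions and data shapes below are invented for illustration), compare a generic helper that hides intent behind flags with two explicit functions that state it directly:

```python
# Over-abstracted: one generic helper whose behavior depends on flags and kwargs.
# A model has to guess what "mode" and "options" actually mean.
def process(records, mode="user", normalize=True, **options):
    ...

# LLM-friendly: explicit, slightly repetitive functions whose names and bodies
# spell out the intent, giving the model concrete patterns to extend.
def normalize_user_emails(users):
    """Lowercase and strip every user's email address."""
    return [{**u, "email": u["email"].strip().lower()} for u in users]

def normalize_order_totals(orders):
    """Round every order total to two decimal places."""
    return [{**o, "total": round(o["total"], 2)} for o in orders]
```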
For extremely complex products like Chrome, strict best practices remain essential. But for a large portion of software engineering work, prioritizing LLM-friendliness may actually accelerate development without significantly reducing quality.
Reviewing AI-Generated Code vs. Writing Your Own
Pros
- Eliminates repetitive or low-value coding tasks.
- Accelerates development, especially for boilerplate or routine logic.
Cons
- Can waste time if prompts are unclear or if the model is used excessively.
- Reviewing sloppy or misaligned output can be more time-consuming than writing from scratch.
Reality
Most workflows will blend both approaches — write core logic yourself, and let LLMs handle scaffolding and repetition.
How to Optimize Code for LLM Collaboration
To consistently get high-quality output, structure your environment in ways that LLMs understand:
- Use common and consistent project structures
- Keep folder depth reasonable (deep nesting reduces context clarity for models)
- Maintain clear naming conventions
- Reuse the same chat session or project context whenever possible
- Provide explicit instructions for each function (not necessarily all in one prompt; see the sketch after this list):
  - Expected behavior
  - Input/output boundaries
  - Edge cases and constraints
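A per-function spec might look like the hypothetical sketch below: the names and rules are invented, but the shape (behavior, boundaries, edge cases) is what matters, whether you paste it into a prompt or keep it as a docstring.

```python
def parse_price(raw: str) -> float:
    """Convert a user-entered price string to a float.

    Expected behavior: accept values like "19.99", "$19.99", or "19,99".
    Input/output boundaries: input is a non-empty string; output is a float >= 0.
    Edge cases and constraints: raise ValueError on empty, negative,
    or non-numeric input.
    """
    cleaned = raw.strip().lstrip("$").replace(",", ".")
    if not cleaned:
        raise ValueError("empty price string")
    value = float(cleaned)  # float() raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("price cannot be negative")
    return value
```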
Coding Style Adjustments That Help LLMs
In practice, LLMs are very good at mirroring clear patterns: consistent naming, clean structure, and predictable formatting. Where they slip is not usually the logic itself — it's when the logic isn't framed with enough context. Since they're not tuned to your personal coding habits, you have two choices:
- Let the model decide the structure (risky and inconsistent), or
- Establish the structure yourself, then guide the model to operate within it
The point isn't to let the AI run the show; it's to make your code clear enough that the model can actually help instead of guessing.
Practical tips:
- Minimize deeply nested logic (see the sketch after this list)
- Break complex tasks into sequenced prompts rather than “write everything at once”
  - Example: 1) Build the function logic, 2) Refactor to match naming conventions, 3) Add test cases
- Move fast by iterating with the model — small steps, quick feedback, repeat — instead of dumping an entire problem and hoping for the best
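As a small illustration of the nesting tip (the order-shipping example is invented), flattening conditionals with early returns gives both you and the model smaller, easier-to-target pieces for follow-up prompts:

```python
# Deeply nested: every new rule adds another indentation level,
# and a follow-up prompt has to reason about the whole pyramid.
def ship_order_nested(order):
    if order is not None:
        if order["paid"]:
            if order["items"]:
                return f"shipping {len(order['items'])} items"
    return "cannot ship"

# Flattened: early returns keep each condition on its own line, so a prompt
# like "also reject orders without a shipping address" has an obvious place to go.
def ship_order(order):
    if order is None:
        return "cannot ship"
    if not order["paid"]:
        return "cannot ship"
    if not order["items"]:
        return "cannot ship"
    return f"shipping {len(order['items'])} items"
```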
The better you structure and communicate your patterns, the more the model behaves like a reliable engineering assistant rather than a creative one.
Looking Ahead: What Will Matter
Future software-development advantages will come from:
- Smart system and code design choices
- Writing and structuring code so that it is both human-readable and LLM-friendly
- Recognizing that the act of typing code will fade in importance; the real work is everything around it: planning, designing, explaining logic, and making sure what gets built actually works
