If you are generating a lot of code with AI and nobody on your team is actively maintaining your linting rules, you are wasting review time and slowly letting AI-generated garbage creep into your codebase.

AI is fast. Ridiculously fast. It will generate components, utilities, hooks, tests, whatever you ask for, in seconds. But it’s also inconsistent. It ignores conventions, reinvents patterns, and subtly drifts over time. Even when it’s “good,” it’s not reliably aligned with your codebase.

And code review? It simply cannot keep up with that pace.

So you need something that pushes back. Not sometimes. Not when a reviewer catches it. Every time.

You need linters.

Use Linters For More Than Parentheses or Quotes

There’s a simple rule of thumb:

If you can describe a problem precisely, you can probably enforce it with a linter.

I’ve implemented a lot of custom lint rules across different projects. And the pattern is always the same: anything that keeps showing up in reviews can usually be automated away.

Let’s start with a category that you might not think of when it comes to linting.

Translations & i18n

If you have JSON or YAML translation files in your codebase, you will recognize these problems:

  • Translation keys get referenced that don’t exist
  • Strings get hardcoded directly into the HTML
  • Translation keys accumulate over time that are not used anymore
  • Formal and informal pronouns are mixed in languages like German or French (“Du” vs “Sie”)

The result?

  • Inconsistent UX
  • Broken translations in production
  • Endless, repetitive review comments
  • A file with 5,000 lines that could be half the size if you just knew which keys are used and which aren’t.

All of this is preventable with a linter.

What good lint rules can enforce

Tone consistency in translations.
You can scan translation files and forbid specific phrases. If your product uses “Sie,” then “Du” becomes a lint error.
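As a minimal sketch, a tone check over a translation object can be just a recursive scan with a forbidden-word list. The word list and structure here are illustrative; a real setup would load your actual JSON files:

```javascript
// Sketch: flag translation values that use informal pronouns when the
// product voice is formal ("Sie"). The word list is illustrative.
const FORBIDDEN = /\b(du|dein|dich|dir)\b/i;

function findToneViolations(translations) {
  const violations = [];
  for (const [key, value] of Object.entries(translations)) {
    if (typeof value === 'string' && FORBIDDEN.test(value)) {
      violations.push(key);
    } else if (value && typeof value === 'object') {
      // Recurse into nested translation groups, prefixing the key path.
      for (const nested of findToneViolations(value)) {
        violations.push(`${key}.${nested}`);
      }
    }
  }
  return violations;
}
```

Run this over every locale file in CI and the informal tone never ships.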

Or hardcoded user-facing strings. Write a lint rule that catches untranslated text.

Bad:

<p>Welcome back</p>

Good:

<p>{t('welcome_back')}</p>

A linter can flag every hardcoded string in templates and force everything through your translation layer, even aria-label, title, and alt.
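If you use React, you don’t even need a custom rule for this one. A minimal sketch, assuming eslint-plugin-react is installed (its jsx-no-literals rule flags raw text and string props in JSX):

```javascript
// eslint.config.js (flat config) — illustrative sketch
import react from 'eslint-plugin-react';

export default [
  {
    files: ['**/*.jsx', '**/*.tsx'],
    plugins: { react },
    rules: {
      // noAttributeStrings also covers props like aria-label, title, alt.
      'react/jsx-no-literals': ['error', { noStrings: true, noAttributeStrings: true }],
    },
  },
];
```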

Also invalid translation keys.
You can statically verify that keys actually exist.

t('dashboard.welccome'); // typo

This should fail immediately, not in production, not in QA.
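The core of such a check is small: flatten the translation object into dot-separated key paths, then compare them against the keys referenced in code. The regex-based extraction below is a simplification; a real lint rule would walk the AST instead:

```javascript
// Sketch: verify that every t('…') key used in source code
// actually exists in the translation object.
function flattenKeys(obj, prefix = '') {
  return Object.entries(obj).flatMap(([key, value]) => {
    const path = prefix ? `${prefix}.${key}` : key;
    return typeof value === 'object' && value !== null
      ? flattenKeys(value, path)
      : [path];
  });
}

function findMissingKeys(source, translations) {
  const known = new Set(flattenKeys(translations));
  // Simplified extraction: match t('some.key') calls via regex.
  const used = [...source.matchAll(/\bt\(\s*'([^']+)'\s*\)/g)].map((m) => m[1]);
  return used.filter((key) => !known.has(key));
}
```

The same flattened key set also tells you the reverse: which keys exist but are never referenced, i.e. the dead weight in that 5,000-line file.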

Enforcing Specific Imports

Another subtle problem: having multiple ways to do the same thing. Imagine you’re using lodash in your project, but you’ve also implemented some of its functions yourself for one reason or another.

// ❌
import { debounce } from 'lodash';

// ✅
import { debounce } from '@/utils/debounce';

Both are technically valid, and imports rarely get much attention in code reviews, so the wrong one tends to creep in over time.

AI will usually pick whichever option it has seen more often, or just make up something that looks reasonable.

But with a linter, you can ban specific imports. And with autofixers, you don’t even have to rely on developers to correct them manually.
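For the lodash example, ESLint’s built-in no-restricted-imports rule is enough to flag the wrong import (the '@/utils/debounce' path is illustrative; note the core rule only reports, so autofixing to your local module needs a custom rule or a codemod):

```javascript
// eslint.config.js — illustrative sketch
export default [
  {
    rules: {
      'no-restricted-imports': ['error', {
        paths: [{
          name: 'lodash',
          importNames: ['debounce'],
          message: "Use debounce from '@/utils/debounce' instead.",
        }],
      }],
    },
  },
];
```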

Other High-Leverage Rules

Once you think about it, opportunities are everywhere:

  • Ban entire libraries (e.g., moment)
  • Enforce architectural boundaries
    → Nothing in shared/ can import from features/
  • Enforce domain-specific naming
    → Icons from the icon library must carry the ...Icon suffix.
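The architectural-boundary case can be expressed declaratively too. A sketch assuming eslint-plugin-import is installed and the directory layout shown is yours:

```javascript
// eslint.config.js — illustrative sketch of a boundary rule:
// nothing in shared/ may import from features/.
import importPlugin from 'eslint-plugin-import';

export default [
  {
    plugins: { import: importPlugin },
    rules: {
      'import/no-restricted-paths': ['error', {
        zones: [{
          target: './src/shared',
          from: './src/features',
          message: 'shared/ must not depend on features/.',
        }],
      }],
    },
  },
];
```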

You should be creative with this. Think about what problems you have regularly.

And then you can even have LLMs write the rules for you. They are exceptionally good at it, since lint rules are isolated from the rest of the codebase.

The Antidote for Curser & Entropyc (Puns Intended)

You might argue that you are using AI code reviews to solve these issues. And I agree, you should absolutely configure automatic AI code reviews and also build a proper agentic engineering setup. These things help. A lot.

But AI does not follow its prompts 100% of the time, no matter how good the prompt or context.

Linters are different. They either pass or fail. Every time.

Therefore, I highly suggest going all-in on both. AI code reviews with custom prompts are especially powerful when your expectations are easy to explain but hard to express in code.

Why have only one silver bullet if you can have two?

Conclusion

Linters won’t replace engineering judgment.
They won’t replace code reviews, whether human or AI.

But they will eliminate entire categories of problems before they ever reach review.

And that’s the point.

If you’re using AI heavily and not evolving your lint rules alongside it, you’re choosing to fight the same problems manually, over and over again.

Or you automate them once, and move on.


Bonus: Configuration Best Practices

If you hate linters, you might have configured them wrong.

The rule is simple: fix issues where they happen, not later.

Linters belong in your editor, including autofixes on save. If AI writes code, it should follow the same rule and lint immediately. Anything else is delayed feedback. Make sure to lint only the affected files here, not the entire project.

Pre-commit and pre-push hooks sound useful, but they often punish fast workflows. Frequent commits are a good thing, so they shouldn’t become painful.

The CI in pull requests should run the linter as well. This is your safety net for anything that was overlooked locally.

Get this right, and linting fades into the background. Fast, predictable, and just doing its job.


I’d genuinely love to hear your thoughts. Please drop your feedback on daily.dev or LinkedIn!