I am currently at the third startup of my career. Startups tend to operate in highly competitive markets where speed is not optional. But speed without quality is short-lived: sooner or later, it turns into bugs, dissatisfied customers, and lost revenue.
The problem is that it is really hard to have both quality and speed. Most teams either deliver a high-quality product or “move fast and break things.”
The reason is simple: many practices that improve quality also add friction. Practices like TDD, broad test coverage, or multiple PR reviewers can be valuable. One could even argue that these practices save time in the long run because they catch issues early. But when product direction changes weekly, some investments simply do not pay off quickly enough. And if a team is already struggling to keep up with one reviewer, requiring two is not realistic.
So the question becomes: how do you deliver quality software while still maintaining high throughput?
1. Pragmatism
Pragmatism means choosing the simplest approach that keeps the code understandable, safe enough, and easy to change.
Unnecessary complexity kills both speed and quality. If code is hard to understand, changing it takes longer and mistakes become more likely.
A seasoned developer has many tools in their toolbox: complex type systems, microservices, abstractions, SOLID principles, design patterns, and more. These tools are not bad, but using them is expensive. They only pay off when they actually fit the problem and when the problem is stable enough. In a greenfield project, that often is not the case. Requirements may change again next week.
The question you should always ask yourself is: “What is the simplest solution that is good enough?” not “What is the most elegant way to solve this?”
2. Short Feedback Loops
The earlier a problem is detected, the cheaper it is to fix.
A bug caught while a developer is still typing is almost free. The same bug found in code review already costs more. Found in QA, it costs even more. Found by a customer, it is the most expensive: the developer has lost context, more people are involved, and the team now pays in communication and trust.
This is why short feedback loops matter so much. They improve quality while actively removing overhead.
That is also why I invest heavily in tooling like strong typing, linting, and static analysis. If a problem can be detected directly in the editor, it should be.
For example, I once added a lint rule that detects when an icon is used as the only child of a button and suggests using the IconButton component instead. That moves feedback from code review or design QA directly into the editor and prevents accessibility and design issues before they ship.
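The core of such a rule is simple to sketch. Below is a minimal, hypothetical version of the check, operating on a simplified JSX-like AST (`{ name, children }` objects) rather than a real ESLint visitor; an actual implementation would plug this logic into an ESLint rule's `JSXElement` handler, and the `Icon`-suffix naming convention is an assumption about the codebase:

```javascript
// Minimal sketch of the check behind an "icon-only button" lint rule.
// Assumes a simplified AST where elements are { name, children } and
// text nodes are plain strings. A real rule would walk ESLint's JSX AST.

function isIconOnlyButton(node) {
  if (node.name !== "button") return false;

  // Ignore whitespace-only text children that JSX formatting produces.
  const meaningful = node.children.filter(
    (child) => typeof child !== "string" || child.trim() !== ""
  );

  // Flag a <button> whose only meaningful child is an icon component
  // (here identified by the naming convention of an "Icon" suffix).
  return (
    meaningful.length === 1 &&
    typeof meaningful[0] !== "string" &&
    /Icon$/.test(meaningful[0].name)
  );
}

// <button><TrashIcon /></button> → should be an IconButton
console.log(
  isIconOnlyButton({
    name: "button",
    children: [{ name: "TrashIcon", children: [] }],
  })
); // true

// <button><TrashIcon /> Delete</button> has a visible label → fine
console.log(
  isIconOnlyButton({
    name: "button",
    children: [{ name: "TrashIcon", children: [] }, " Delete"],
  })
); // false
```

Once the check exists, the editor surfaces the problem the moment the button is written, instead of a reviewer or designer catching it days later.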
3. Clear Ownership
Many quality problems are caused by ambiguity.
A typical example: someone posts in the group chat, “I found bug X.” Then what happens? Sometimes two people start looking into it. Sometimes nobody does. Sometimes everyone assumes someone else will handle it, and in the end it lands in production, moving the feedback loop even further to the right. That is wasted time, avoidable confusion, and lower product quality.
Whether it is a bug, a feature, or just something somebody noticed, if it is worth mentioning, then somebody should be responsible for driving it to the next reliable state: building it, fixing it, assigning it, prioritizing it, or explicitly deciding not to act on it yet.
This does not mean that whoever notices a problem must always solve it personally. It means they should not let it remain unowned. If ownership is handed over, that handoff should be explicit. If you cannot name the person you handed ownership to, then you did not do it properly.
Clear ownership improves quality because issues are less likely to be forgotten, and it improves speed because it prevents duplicate work.
4. Automations
Software drifts unless something pushes back. Standards fade, steps get skipped, and “we usually do it this way” slowly turns into inconsistency.
That is why I no longer put much trust in rules that live only in documentation, PR review reminders, or team habits. If a rule is not enforced, it is only a suggestion. And suggestions do not survive pressure.
Automation is powerful because it does more than save time. It fights entropy. It turns standards from something people should remember into something the system consistently enforces.
This is why I like linters, CI checks, and static analysis tools. They enforce standards reliably: dead code is removed, IconButton is used for icon-only buttons, and referenced translation keys actually exist.
I also like AI code reviews as an additional layer in pull requests. They are less deterministic than lint rules or CI checks, but they can still catch suspicious patterns, missing context, and edge cases that humans might miss.
Conclusion
Quality and speed are not enemies, but they do require trade-offs. In fast-moving environments, you rarely get quality and speed from heavy engineering processes. You get them by making quality cheap to enforce.
For me, that comes down to four things: being pragmatic, shortening feedback loops, creating clear ownership, and automating what should not depend on memory or discipline.