12/15/2025

Abstractions That Pay Rent

A practical framework for creating (and removing) abstractions so teams move faster without drowning in indirection.

software-engineering · architecture · maintainability

Every team builds abstractions. It’s how we scale codebases and collaboration:

  • Wrap a database behind a repository
  • Create a design system component
  • Introduce a service layer
  • Add a workflow engine
  • Build an internal platform so other teams “just deploy”

At first, abstraction feels like progress: fewer repeated lines, clearer boundaries, easier reuse.

Then one day a “small change” turns into a week of spelunking, and someone says the sentence that should make you pause:

“The system won’t let us do it.”

The abstraction didn’t become bad overnight. It stopped paying rent.

This post is for:

  • PMs who want to move faster without accumulating invisible delivery risk.
  • Senior engineers and architects who need a practical way to decide where boundaries belong.
  • SaaS founders who are accidentally turning today’s code into tomorrow’s bureaucracy.

The goal isn’t “avoid abstractions.” The goal is to make abstractions earn their keep.

The rent model: what you pay vs what you get

Abstraction is a trade:

  • You spend extra design effort today.
  • You accept indirection forever.
  • In exchange, you expect to save time, reduce risk, or enable scale.

Thinking in “rent” keeps the conversation grounded. Every layer has ongoing costs that arrive as interest:

  • Cognitive load: extra concepts and vocabulary. Newcomers must learn “the way we do it here.”
  • Debug latency: more hops between intent and effect. You can’t just set a breakpoint in the feature; you need to understand the framework.
  • Change amplification: a simple product change touches more files because behavior is split across layers.
  • Runtime latency / failure surface: more network calls, more retries, more configuration, more things that can be miswired.
  • Organizational latency: “ask Platform” or “file a ticket” becomes part of the critical path.

The return on that rent is real when the abstraction consistently delivers one of these outcomes:

  • Removes repetition of meaningful complexity (not just a couple of lines of glue).
  • Stabilizes a volatile dependency (vendors, APIs, infrastructure, regulatory constraints).
  • Enforces an important invariant (security rules, correctness constraints, design consistency).
  • Enables parallel work via a clear seam between teams or components.

If it doesn’t reliably deliver at least one of those, it has negative value.

What counts as an abstraction (and why it matters)

People often think “abstraction” means a code interface. In practice, teams create layers in multiple forms:

  • API abstractions: repositories, service clients, gateways, SDKs, shared libraries.
  • UI abstractions: design systems, component libraries, form builders, layout systems.
  • Workflow abstractions: rule engines, orchestration pipelines, state machines.
  • Platform abstractions: deploy systems, scaffolding, golden paths, internal CLIs.
  • Product abstractions: feature flags, plans/entitlements, multi-tenant isolation models.

Each category has different failure modes.

  • A UI abstraction usually fails by becoming too configurable or too restrictive.
  • A platform abstraction often fails by turning into a “ticket queue in disguise.”
  • An API abstraction fails when it hides the wrong details, or can’t represent reality without escape hatches everywhere.

When someone says “this abstraction is bad,” ask: bad for whom, in what workflow, under what change?

The two currencies you pay with

You pay for abstraction in two currencies that matter to PMs as much as engineers:

  • Complexity: more moving parts, more concepts, more edge cases.
  • Latency: more layers between intent and effect (both runtime latency and human workflow latency).

Most organizations only budget for runtime latency. The expensive part is often human latency:

  • Time to understand where a behavior lives.
  • Time to make a change without breaking other consumers.
  • Time to roll out safely.
  • Time to recover when something goes wrong.

If you want a practical definition of “maintainability,” it’s this: how quickly can we ship a correct change with high confidence?

Three failure modes you’ll see repeatedly

The classic problems show up across languages, stacks, and organizations.

1) Leaky abstractions

You hide details… until you can’t.

A leaky abstraction forces consumers to learn hidden details anyway, but now they learn them through quirks:

  • “You can’t call this in a transaction, unless you pass skipHooks: true.”
  • “It caches, but only if you set this magic header.”
  • “The component is responsive, except in this layout, unless you pass inline.”

Leaks aren’t inherently evil. Reality leaks. The problem is unowned leakage: when the abstraction provides no clear, honest way to represent the underlying constraints.

What a healthy abstraction does when reality leaks:

  • It makes the constraint explicit.
  • It offers a supported escape hatch.
  • It keeps the default path safe and unsurprising.
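Here’s a minimal sketch (with hypothetical names) of what that looks like in practice: instead of a magic flag like skipHooks: true, the transaction constraint is part of the API itself.

```ts
// Hypothetical repository: the constraint ("hooks can't run inside a
// transaction") lives in the interface, not in tribal knowledge.
type Transaction = { id: string }; // stand-in for your data layer's tx handle

interface OrderRecord {
  id: string;
  total: number;
}

interface OrderRepository {
  // Default path: safe and unsurprising — hooks always run.
  save(order: OrderRecord): Promise<void>;

  // Supported escape hatch: callers inside a transaction must say so,
  // and the signature makes the trade-off visible.
  saveWithinTransaction(order: OrderRecord, tx: Transaction): Promise<void>;
}
```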

2) Overfitting to the present

Some abstractions are just “the current use case with a nicer name.” They look elegant until the second use case arrives.

Signs of overfit:

  • Parameters that feel like toggles: mode, type, variant, strategy.
  • Business rules encoded in surprising places.
  • A ‘generic’ module that sprouts a pile of special cases after launch.

Overfit abstractions punish the next feature because they force you to fight yesterday’s assumptions.

A practical countermeasure: name the abstraction after the real job it performs, not the category it belongs to.

  • “RetryingIdempotentHttpClient” is honest.
  • “DataLoader” is a lie.

3) Premature generalization

Trying to predict all future requirements creates the worst APIs.

You’ll recognize it by:

  • “We might need this later.”
  • An interface that’s harder to understand than the concrete implementation it wraps.
  • A layer that no one wants to touch.

Premature abstraction slows you down now without guaranteeing flexibility later.

The rule of thumb many experienced teams use is a variant of the “rule of three”:

  • The first time, do it concretely.
  • The second time, copy it (yes, copy) and keep the duplication visible.
  • The third time, you’ve learned enough to extract something real.

This prevents you from designing for imaginary futures.

A decision framework: should we add a layer here?

This is where PMs and architects can collaborate instead of arguing taste.

Before you introduce an abstraction, write down:

  • The customers: who will use it (other engineers, other teams, external devs, designers).
  • The promised benefit: which of the rent-paying outcomes it provides.
  • The success metric: how you’ll know it worked.
  • The cost model: what you’re committing to support.

Then ask these questions.

Question 1: What risk are we trying to reduce?

Good abstractions reduce specific risks:

  • Security mistakes (auth, permissions, encryption).
  • Reliability mistakes (retries, idempotency, circuit breaking).
  • Consistency mistakes (design tokens, validation rules, data contracts).
  • Vendor volatility (payment providers, email services, AI model APIs).

If you can’t name the risk, you’re probably building a layer for aesthetic reasons.

Question 2: Is the complexity “meaningful” or just repetitive?

Don’t abstract “glue” too early.

  • Repeating a few lines is cheap.
  • Repeating a tricky set of invariants is expensive.

Example:

  • Extracting a helper to build consistent pagination parameters can be good.
  • Building a “GenericQueryBuilder” because you saw repetition twice is often a trap.

Ask: If we duplicated this five times, what would actually go wrong?
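As a concrete illustration, here’s a minimal sketch (hypothetical names) of the pagination helper: it centralizes the invariants that are actually easy to get wrong, and nothing more.

```ts
// Abstract the meaningful complexity — defaults, clamping, limit/offset math —
// not a generic query layer.
interface PageRequest {
  page?: number;     // 1-based page index from the caller
  pageSize?: number; // requested page size
}

interface PageParams {
  limit: number;
  offset: number;
}

function toPageParams(req: PageRequest, maxPageSize = 100): PageParams {
  const page = Math.max(1, Math.floor(req.page ?? 1));
  const pageSize = Math.min(maxPageSize, Math.max(1, Math.floor(req.pageSize ?? 20)));
  return { limit: pageSize, offset: (page - 1) * pageSize };
}

// Every list endpoint calls the same helper instead of re-deriving offsets.
const params = toPageParams({ page: 3, pageSize: 250 }); // { limit: 100, offset: 200 }
```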

Question 3: Does this layer have a sharp boundary?

A boundary is “sharp” when you can state:

  • What goes in.
  • What comes out.
  • What is guaranteed.
  • What is intentionally not supported.

If the layer’s boundary is fuzzy, consumers will discover the true boundary by breaking production.
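One way to keep a boundary sharp is to state the guarantees and non-goals on the contract itself. A minimal sketch, with hypothetical names:

```ts
interface SendReceiptInput {
  orderId: string;
  email: string;
}

interface SendReceiptResult {
  delivered: boolean;
  providerMessageId?: string;
}

/**
 * Sends a receipt email for a completed order.
 *
 * Guarantees:
 * - Retries transient provider failures; at-least-once delivery.
 * - Never throws on an invalid email address; returns { delivered: false }.
 *
 * Intentionally not supported:
 * - Custom templates or attachments — use the marketing email service instead.
 */
interface ReceiptSender {
  send(input: SendReceiptInput): Promise<SendReceiptResult>;
}
```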

Question 4: Can we evolve it without trapping consumers?

A reusable abstraction is a commitment. The most common failure isn’t that it doesn’t work; it’s that it becomes impossible to change.

Abstractions become traps when:

  • They’re used everywhere.
  • They encode policy decisions.
  • They have no migration story.

If you can’t describe how you’d version and migrate it, keep it concrete longer.

Designing abstractions that stay healthy

If you do decide to add a layer, these principles keep it from metastasizing.

Prefer “one hard problem” abstractions

A good layer often exists to solve one hard problem really well.

Examples:

  • Request retries with backoff + idempotency
  • Authentication and token refresh
  • Design tokens and theming
  • Permission checks that are correct-by-default

Avoid abstractions that claim to solve “everything in this category.” They almost always become dumping grounds.
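For a sense of scale, here’s a minimal sketch (assumed names and retry policy) of a layer that solves exactly one of those hard problems — retrying idempotent requests with exponential backoff — and deliberately nothing else.

```ts
interface RetryOptions {
  maxAttempts?: number;
  baseDelayMs?: number;
}

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Only handles GET-style, idempotent requests; anything else is out of scope
// for this layer on purpose.
async function getWithRetry(
  url: string,
  { maxAttempts = 3, baseDelayMs = 200 }: RetryOptions = {}
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const response = await fetch(url);
      // Retry only transient server errors; return everything else to the caller.
      if (response.status < 500) return response;
      lastError = new Error(`HTTP ${response.status}`);
    } catch (err) {
      lastError = err; // network failure — also treated as transient
    }
    if (attempt < maxAttempts) await sleep(baseDelayMs * 2 ** (attempt - 1));
  }
  throw lastError;
}
```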

Optimize for the common case, support the edge case

A usable abstraction feels like:

  • 80% of consumers do the obvious thing.
  • 20% use a documented escape hatch.

A painful abstraction feels like:

  • Everyone needs special parameters.
  • The default path is rarely correct.

This matters for PMs, because “how many escape hatches exist?” is a leading indicator of platform friction.

Make behavior explicit, not magical

Magic is attractive because it makes the demo look clean. It becomes expensive when you debug.

Prefer:

  • Explicit configuration over hidden defaults.
  • Clear error types/messages over silent fallbacks.
  • Observable behavior (logs/metrics) over implicit side effects.

“Boring” is a compliment in infrastructure and architecture.
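A small sketch of the difference, with hypothetical names: the configuration is explicit at the call site, and bad input fails loudly instead of being silently “fixed.”

```ts
interface CacheConfig {
  ttlSeconds: number;       // no hidden default — callers must state freshness
  staleWhileRevalidate: boolean;
}

class ConfigError extends Error {}

function createCacheConfig(config: CacheConfig): CacheConfig {
  // Fail at construction time with a clear error, not at 3am with a stale page.
  if (config.ttlSeconds <= 0) {
    throw new ConfigError(`ttlSeconds must be positive, got ${config.ttlSeconds}`);
  }
  return config;
}

// Every call site reads as documentation of the behavior it gets.
const profileCache = createCacheConfig({ ttlSeconds: 300, staleWhileRevalidate: true });
```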

Keep the escape hatch explicit

Sometimes you need to bypass the layer. That’s fine — as long as it’s intentional.

An escape hatch should be:

  • Explicit (not a hack)
  • Supported (works across versions)
  • Rare in practice (or it’s a sign the layer is wrong)

A useful pattern: make the safe path easy and the unsafe path possible but loud.
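A minimal sketch of that pattern (hypothetical names): the bypass exists, but it is obviously named, requires a stated reason, and leaves a trail you can grep for and count.

```ts
interface ReportStore {
  // The easy, safe default: parameterized, access-checked queries.
  findByCustomer(customerId: string): Promise<unknown[]>;

  // The escape hatch: loud name, mandatory reason, observable usage.
  unsafeRawQuery(sql: string, reason: string): Promise<unknown[]>;
}

async function quarterlyExport(store: ReportStore): Promise<unknown[]> {
  // Callers who bypass the layer have to say why — which also makes
  // escape-hatch usage a number you can track over time.
  return store.unsafeRawQuery(
    "SELECT * FROM reports WHERE quarter = 'Q4'",
    "one-off finance export, tracked in TICKET-123" // illustrative reference
  );
}
```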

Treat internal abstractions like products

This is where a lot of “platform” efforts go wrong. Internal abstractions have users and need product thinking:

  • Onboarding: can a new team use it without a meeting?
  • Documentation: examples, not essays.
  • Support: how do users report issues? What’s the SLA?
  • Roadmap: what will change and when?

If you don’t resource these, your abstraction becomes folklore-driven development.

Make it easy to delete

The ultimate architecture test is: can you delete or replace the layer?

You can increase “deletability” by:

  • Keeping boundaries small.
  • Keeping dependencies flowing one way.
  • Avoiding global singletons and hidden state.
  • Avoiding “magic behavior” that can’t be reproduced in a simpler implementation.

If a layer becomes the only place where knowledge exists, it creates organizational lock‑in.

How to tell an abstraction stopped paying rent

Here are practical signals you can notice without a dissertation.

  • Change requests grow teeth: a small product request touches many modules.
  • Consumers routinely bypass the layer: they call the database directly “just for this one thing.”
  • The API surface grows faster than usage: more options, not more value.
  • On-call pain clusters around it: incidents require deep knowledge of the layer.
  • Engineers hesitate to touch it: PRs that modify the layer are rare and scary.
  • You need to read the implementation to use it: the interface doesn’t explain itself.

These show up in different org roles:

  • PMs hear “we need to coordinate with three teams.”
  • Designers hear “the component can’t do that.”
  • Engineers hear “the framework doesn’t support it.”
  • Founders hear “we can’t ship that this quarter.”

The root cause is often the same: the abstraction encoded constraints that are no longer aligned with the business.

What to do when the layer is wrong (without a rewrite)

The fix is rarely “delete the abstraction tomorrow.” The pragmatic path is usually an evolutionary refactor.

1) Narrow the contract

Start by writing down what the layer actually guarantees today.

Then:

  • Remove or deprecate options that don’t carry their weight.
  • Collapse “strategy” parameters into a few explicit operations.
  • Split one giant interface into a small core + optional add-ons.

A smaller contract is easier to test, version, and reason about.
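A minimal before/after sketch (hypothetical names) of collapsing a strategy parameter into explicit operations:

```ts
// Before: one method whose behavior depends on a mode flag.
interface NotifierBefore {
  notify(userId: string, message: string, mode: "email" | "sms" | "digest"): Promise<void>;
}

// After: the contract names the operations that actually exist, and each can be
// tested, versioned, and deprecated independently.
interface NotifierAfter {
  sendEmail(userId: string, message: string): Promise<void>;
  sendSms(userId: string, message: string): Promise<void>;
  queueForDigest(userId: string, message: string): Promise<void>;
}
```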

2) Move policy out of mechanism

Many abstractions rot because they mix:

  • Mechanism: how something works.
  • Policy: the business rules about when and why it should happen.

If you embed policy into a shared layer, every product change becomes an architecture change.

A healthier approach:

  • Keep the shared layer generic and safe.
  • Keep business rules near the domain that owns them.
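A minimal sketch of that split, with hypothetical names — the shared helper is pure mechanism, and the business rule lives with the domain that owns it:

```ts
// Mechanism: a generic retry helper. It knows nothing about refunds.
async function withRetry<T>(fn: () => Promise<T>, attempts: number): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}

// Policy: the billing domain decides how many attempts a refund deserves —
// a product decision that can change without touching the shared helper.
async function issueRefund(chargeId: string, callProvider: (id: string) => Promise<void>) {
  const REFUND_RETRY_ATTEMPTS = 2; // business rule, owned by billing
  await withRetry(() => callProvider(chargeId), REFUND_RETRY_ATTEMPTS);
}
```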

3) Add observability to the abstraction itself

If a layer is involved in incidents, it should emit signals:

  • Logs that describe decisions it made.
  • Metrics for error rates, retries, cache hit rate, latency distribution.
  • Traces that show internal steps.

The goal is to reduce “debug latency,” which is often the biggest hidden cost.
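A minimal sketch (hypothetical names) of an abstraction that explains the decisions it makes, so consumers don’t have to read its internals to debug it:

```ts
interface CacheMetrics {
  hits: number;
  misses: number;
}

class ReadThroughCache<T> {
  private store = new Map<string, T>();
  readonly metrics: CacheMetrics = { hits: 0, misses: 0 };

  async get(key: string, load: () => Promise<T>): Promise<T> {
    const cached = this.store.get(key);
    if (cached !== undefined) {
      this.metrics.hits++;
      console.debug(`cache: hit key=${key}`); // decision log, not just error log
      return cached;
    }
    this.metrics.misses++;
    console.debug(`cache: miss key=${key}, loading from source`);
    const value = await load();
    this.store.set(key, value);
    return value;
  }
}
```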

4) Build a migration path (the strangler pattern)

If the abstraction is widely used, deleting it is a migration problem.

A practical sequence:

  • Introduce a new interface next to the old one.
  • Migrate one consumer at a time.
  • Keep compatibility adapters thin.
  • Measure progress.
  • Remove the old path when usage is near zero.

This avoids the “big bang rewrite” failure mode.
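A minimal sketch of the first two steps (hypothetical names): the new interface sits next to the old one, a thin adapter keeps unmigrated callers working, and legacy usage is something you can measure rather than guess.

```ts
interface LegacyUserService {
  fetchUser(id: string): Promise<{ id: string; name: string }>;
}

// The new, narrower interface consumers migrate to one at a time.
interface UserReader {
  getUser(id: string): Promise<{ id: string; name: string }>;
}

// Thin compatibility adapter: old implementation, new shape, plus a usage
// signal so "near zero" is a number, not a feeling.
function adaptLegacyUserService(legacy: LegacyUserService): UserReader {
  return {
    async getUser(id) {
      console.info("migration: legacy user path used");
      return legacy.fetchUser(id);
    },
  };
}
```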

Abstraction should reduce cognitive load

The best abstractions are boring:

  • Few concepts
  • Few parameters
  • Predictable behavior

If using the abstraction requires reading its implementation, it’s not an abstraction. It’s a scavenger hunt.

A useful rule of thumb

If an abstraction is:

  • hard to explain
  • hard to change
  • hard to test

…then it’s probably too big.

You don’t need fewer abstractions. You need smaller ones with clearer jobs.