42% of Teams Regret Microservices: Why We Build Modular Monoliths for MVPs

Dennis Reinkober · March 18, 2026 · 4 min read

Microservices won the architecture debate around 2018. Every conference talk, every blog post, every "how we scaled" story said the same thing: break it up, deploy independently, scale horizontally. Netflix does it. Uber does it. You should too.

Then reality happened.

A 2025 CNCF survey found that 42% of organizations that adopted microservices have consolidated services back into larger units. Gartner reports 60% of teams regret the decision for small-to-medium applications. Amazon Prime Video famously cut infrastructure costs by 90% by moving from microservices back to a monolith.

We build MVPs for a living. We've never built one as microservices. Here's why.

The Complexity Tax

Let's be specific about what microservices actually cost for a small team:

| Concern | Monolith | Microservices |
| --- | --- | --- |
| Deployment | 1 pipeline, 1 target | 5–15 pipelines, 5–15 targets |
| Local development | docker compose up | 8 services + service mesh simulator |
| Debugging | One log stream, one debugger | Distributed tracing, log aggregation |
| Database migrations | One schema, one migration | Per-service schemas, cross-service consistency |
| API changes | Refactor + compile | Contract versioning, backward compatibility |
| Testing | Integration tests hit one process | Contract tests, end-to-end across services |
| Onboarding | "Here's the repo" | "Here are the 12 repos and how they talk to each other" |

For a team of 3–8 developers — which is every MVP team we've ever worked with — this complexity tax is devastating. You spend more time managing infrastructure than building features.

The Real Cost

A client came to us after spending 4 months setting up a microservices architecture for their MVP. They had 6 services, a message queue, a service mesh, and an API gateway. They hadn't shipped a single user-facing feature. We rebuilt the entire thing as a monolith in 3 weeks.

What a Modular Monolith Actually Looks Like

A modular monolith isn't a ball of mud. It's a single deployable application with clear internal boundaries. Django's app-based architecture makes this natural:

project/
├── apps/
│   ├── orders/           # Order management domain
│   │   ├── models.py
│   │   ├── services.py   # Business logic lives here
│   │   ├── api/
│   │   │   ├── views.py
│   │   │   └── serializers.py
│   │   └── tests/
│   ├── inventory/        # Inventory domain
│   │   ├── models.py
│   │   ├── services.py
│   │   ├── api/
│   │   └── tests/
│   ├── payments/         # Payment domain
│   │   ├── models.py
│   │   ├── services.py
│   │   ├── api/
│   │   └── tests/
│   └── notifications/    # Notification domain
│       ├── models.py
│       ├── services.py
│       └── tests/
├── core/                 # Shared utilities, base classes
├── config/               # Settings, URLs, WSGI
└── docker-compose.yml    # One service. That's it.

Each app has its own models, services, API, and tests. They communicate through well-defined Python interfaces — not HTTP calls, not message queues, not gRPC. Just function calls with type hints.

# apps/orders/services.py
from apps.inventory.services import check_stock, reserve_items
from apps.payments.services import charge_customer

# Assumed in-app imports for this sketch: models, pricing helper,
# and domain exception live inside the orders app.
from apps.orders.models import Order, OrderItem, OrderStatus
from apps.orders.pricing import calculate_total
from apps.orders.exceptions import InsufficientStockError

class OrderService:
    def create_order(self, customer_id: str, items: list[OrderItem]) -> Order:
        # Check stock — direct function call, no HTTP
        stock_status = check_stock(items)
        if not stock_status.available:
            raise InsufficientStockError(stock_status.missing)

        # Reserve inventory
        reservation = reserve_items(items)

        # Charge customer
        payment = charge_customer(customer_id, calculate_total(items))

        # Create order
        order = Order.objects.create(
            customer_id=customer_id,
            payment_id=payment.id,
            reservation_id=reservation.id,
            status=OrderStatus.CONFIRMED,
        )
        order.items.set(items)
        return order

This is microservices' domain separation with monolith simplicity. One database. One deployment. One debugger. Full type safety across boundaries.
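What "well-defined Python interfaces" means in practice: the boundary is a typed function plus a small return type, not an HTTP contract. Here's a hypothetical sketch of the inventory side that the order example above calls into — the StockStatus fields mirror the attributes used above (.available, .missing), but the field types and the in-memory stock table are assumptions for illustration:

```python
from dataclasses import dataclass, field

# In-memory stand-in for the inventory app's models (assumption for
# this sketch; the real app would query its own Django models).
_ON_HAND = {"sku-1": 10, "sku-2": 0}

@dataclass(frozen=True)
class OrderItem:
    sku: str
    quantity: int

@dataclass(frozen=True)
class StockStatus:
    available: bool
    missing: list[str] = field(default_factory=list)  # SKUs short on stock

def check_stock(items: list[OrderItem]) -> StockStatus:
    # Direct function call across the app boundary — typed input,
    # typed output, no serialization, no network.
    missing = [i.sku for i in items if i.quantity > _ON_HAND.get(i.sku, 0)]
    return StockStatus(available=not missing, missing=missing)
```

Because the caller and callee share a type checker, renaming a field or changing a signature fails at CI time instead of at runtime in production.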

The Deployment Comparison

Here's what deploying the same feature set looks like in both architectures:

Microservices deployment:

# 8 separate docker-compose services
services:
  api-gateway:
    image: gateway:latest
    ports: ["8000:8000"]
  order-service:
    image: orders:latest
  inventory-service:
    image: inventory:latest
  payment-service:
    image: payments:latest
  notification-service:
    image: notifications:latest
  rabbitmq:
    image: rabbitmq:3-management
  redis:
    image: redis:7
  postgres:
    image: postgres:16
# Plus: service discovery, health checks, circuit breakers...

Modular monolith deployment:

# That's it. Really.
services:
  app:
    image: myapp:latest
    ports: ["8000:8000"]
  postgres:
    image: postgres:16
  redis:
    image: redis:7  # for caching, optional

Three containers vs eight. One CI pipeline vs five. One health check vs five. One set of environment variables vs five.

"But What About Scaling?"

This is the argument we hear most. "Microservices let you scale individual services independently."

True. But irrelevant for 95% of applications.

Here's why: most MVPs — and most production applications, honestly — are not CPU-bound on specific services. They're I/O bound on the database. Adding more instances of your order service doesn't help when every request hits the same PostgreSQL instance.

For the rare case where one part of your system genuinely needs independent scaling (say, image processing or PDF generation), you can extract that one thing into a separate worker. That's not microservices — that's pragmatic architecture.

# Heavy processing? Send it to a background worker.
# Still one codebase. Still one deployment.
from django_q.tasks import async_task

async_task("apps.reports.tasks.generate_pdf", report_id=report.id)
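The worker side is just a plain function that django_q imports by its dotted path and runs in a separate process. A minimal sketch, assuming a module at apps/reports/tasks.py — only the dotted path comes from the call above, the body is a placeholder:

```python
# apps/reports/tasks.py — hypothetical worker-side function.
# django_q resolves "apps.reports.tasks.generate_pdf" to this callable
# and executes it outside the web request.
def generate_pdf(report_id: int) -> str:
    output_path = f"/tmp/report-{report_id}.pdf"
    # ... fetch the report, render it, write the file (placeholder) ...
    return output_path
```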

The Scaling Rule of Thumb

If you have fewer than 15 developers and fewer than 100K daily active users, a well-built monolith handles everything you need. If you hit those numbers, congratulations — you can afford the engineering team to manage microservices. Until then, it's a solution to a problem you don't have.

When Microservices DO Make Sense

We're opinionated, not dogmatic. Microservices are the right choice when:

  • Multiple teams (15+ developers) need to deploy independently without blocking each other
  • Different technology requirements — one service needs Python ML, another needs Go for performance
  • Regulatory isolation — payment processing must be in a separate security boundary
  • Genuine scaling bottleneck — one component needs 50x the resources of others

If none of these apply, you're paying a complexity tax for architectural bragging rights.

The Migration Path

The beauty of a modular monolith is that extracting a service later is easy. Each app already has clear boundaries, its own models, and a defined API. If order processing genuinely needs to scale independently in Year 2, you:

  1. Copy the orders/ app into its own repo
  2. Replace direct function calls with API calls
  3. Set up its own database and migrations
  4. Deploy independently

That's a two-week project, not a six-month rewrite. And you only do it when you have evidence that you need to — not because a conference talk scared you into premature optimization.

The Bottom Line

We build MVPs in 4 weeks. That timeline is only possible because we don't waste time on infrastructure that doesn't serve the product.

A modular monolith gives you everything microservices promise — domain separation, clean boundaries, testable components — without the operational complexity that kills small teams.

Build the monolith. Ship the product. Extract services when the data tells you to, not when a blog post tells you to.

Including this one.


We build modular monoliths that ship in 4 weeks. Learn more about our MVP Development approach.
