Technical · Architecture · 15 min read

Edge-First Architecture for Solo Founders

When you’re a solo founder building a product, every infrastructure decision is a leverage decision. Over-engineered systems consume maintenance time; under-engineered systems collapse under growth. The edge-first approach offers a third path.

The Core Principle

Move as much computation as possible to the network edge — close to users, with near-zero cold start times, globally distributed by default. For solo founders, this means Cloudflare Workers, not Lambda. Cloudflare Pages, not Amplify. D1 for lightweight reads, not RDS.

The economics are compelling: Cloudflare’s free tier covers most early-stage traffic. Workers have no cold start problem (they run V8 isolates, not containers). D1 is SQLite at the edge with automatic replication.

Stack Reference

# Edge-first architecture for solo founders

CDN + DDoS protection    → Cloudflare (free tier)
Static assets            → Cloudflare Pages or Vercel
Edge compute             → Workers (50ms CPU limit)
Lightweight DB           → D1 (SQLite at edge)
Heavy computation        → External API / Queues
Object storage           → R2 (S3-compatible, no egress fees)
Auth                     → Clerk (or Lucia for open-source)
Payments                 → Stripe (always external)
Email                    → Resend or Postmark
Analytics                → Plausible (privacy-first)

Request Flow

User request
  → Cloudflare (DDoS + CDN)
    → Cache hit? → Return immediately
    → Cache miss → Worker
      → Static? → Pages/Vercel
      → Dynamic? → Worker handler
        → Read from D1 (edge-local)
        → Write? → Queue → D1 (async)
        → External service? → fetch() with timeout
      → Return response + cache-control headers
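
The "fetch() with timeout" step in the flow above can be sketched as a small Promise.race wrapper. The helper name, the 2-second budget, and the URL are illustrative, not prescribed; `AbortSignal.timeout` is an alternative available in the Workers runtime.

```javascript
// Generic timeout wrapper: race the real promise against a timer that
// resolves with a fallback value, so a slow upstream can't stall the request.
function withTimeout(promise, ms, fallback) {
  let timer;
  const timeout = new Promise((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Usage inside a Worker handler (sketch; URL and budget are hypothetical):
// const res = await withTimeout(
//   fetch('https://api.example.com/data'),
//   2000,
//   new Response('Upstream timeout', { status: 504 })
// );
```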

D1 Patterns That Work

D1 is SQLite with some constraints. The patterns that work:

-- Schema design for edge
-- Prefer denormalised reads over joins
-- Use INTEGER PRIMARY KEY for fast lookups
-- Avoid full-table scans

CREATE TABLE entries (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  slug TEXT UNIQUE NOT NULL,
  title TEXT NOT NULL,
  module TEXT NOT NULL,  -- 'thinking' | 'systems' | 'technical'
  published_at INTEGER NOT NULL,  -- Unix timestamp
  content TEXT,
  metadata TEXT  -- JSON blob for flexible schema
);

CREATE INDEX idx_entries_module ON entries(module);
CREATE INDEX idx_entries_published ON entries(published_at DESC);
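
A read that stays on those indexes might look like the sketch below. The helper name and the 50-row page size are my own additions for illustration, not part of the original schema.

```javascript
// Hypothetical helper: list recent entries for one module, hitting
// idx_entries_module for the filter and ordering by published_at.
// The default page size is illustrative.
async function listEntries(env, module, limit = 50) {
  const { results } = await env.DB.prepare(
    `SELECT slug, title, published_at
       FROM entries
      WHERE module = ?
      ORDER BY published_at DESC
      LIMIT ?`
  ).bind(module, limit).all();
  return results;
}
```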

// Worker: reading with D1
export default {
  async fetch(request, env) {
    const { pathname } = new URL(request.url);
    const slug = pathname.split('/').pop();

    const entry = await env.DB.prepare(
      'SELECT * FROM entries WHERE slug = ? LIMIT 1'
    ).bind(slug).first();

    if (!entry) return new Response('Not found', { status: 404 });

    return new Response(JSON.stringify(entry), {
      headers: {
        'Content-Type': 'application/json',
        'Cache-Control': 'public, max-age=3600',
      },
    });
  },
};
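
The asynchronous write path from the request flow (Write? → Queue → D1) can be sketched as a producer/consumer pair on Cloudflare Queues. The binding names (`WRITE_QUEUE`, `DB`), the payload helper, and the field defaults are illustrative assumptions, not prescribed by the original.

```javascript
// Validate and normalise a payload before enqueueing (pure, testable).
// The slug pattern and the 'technical' default are hypothetical choices.
function toQueuePayload(entry) {
  if (!entry.slug || !/^[a-z0-9-]+$/.test(entry.slug)) {
    throw new Error('invalid slug');
  }
  return {
    slug: entry.slug,
    title: entry.title ?? '',
    module: entry.module ?? 'technical',
    published_at: entry.published_at ?? Math.floor(Date.now() / 1000),
  };
}

export default {
  // Producer: accept the write, enqueue it, return 202 immediately.
  async fetch(request, env) {
    const body = await request.json();
    await env.WRITE_QUEUE.send(toQueuePayload(body));
    return new Response(null, { status: 202 });
  },

  // Consumer: drain the batch into D1 with one prepared statement per row.
  async queue(batch, env) {
    const stmt = env.DB.prepare(
      'INSERT INTO entries (slug, title, module, published_at) VALUES (?, ?, ?, ?)'
    );
    await env.DB.batch(
      batch.messages.map((m) =>
        stmt.bind(m.body.slug, m.body.title, m.body.module, m.body.published_at)
      )
    );
    batch.messages.forEach((m) => m.ack());
  },
};
```

Returning 202 from the producer keeps write latency at edge speed; durability is the queue's job, and the consumer retries failed batches.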

When This Architecture Breaks Down

Edge-first fails when:

  1. Long-running computations — video processing, ML inference. Workers have a 50ms CPU limit. Use a queue + external processor.
  2. Complex joins at scale — D1 is SQLite. Complex relational queries at 10M+ rows need Postgres (Neon, Supabase).
  3. Team growth — when you hire engineers, standardising on mainstream tooling (AWS, GCP) is often more practical.
  4. Compliance requirements — GDPR data residency, SOC2, HIPAA. The edge is global by default; residency requirements constrain this.

The Actual Savings

After six months of running this stack for Credes, infrastructure costs for a product with ~2,000 MAU are effectively zero (within the Cloudflare free tier). The time saved by not managing servers, not configuring autoscaling, and not debugging cold starts compounds into weeks of recovered engineering time per quarter.

The engineering time is the real cost. Minimise it.
