I. Foundational Design Philosophy

System Overview

                    ┌─────────────┐
                    │  CloudFront  │
                    │   (CDN)      │
                    └──────┬──────┘
                           │
              ┌────────────┴────────────┐
              │                         │
     ┌────────▼────────┐    ┌──────────▼──────────┐
     │  Frontend App   │    │   API Gateway        │
     │  (Next.js/React)│    │  (Auth, Rate Limit)  │
     └─────────────────┘    └──────────┬───────────┘
                                       │
                          ┌────────────┴────────────┐
                          │     Core API Service     │
                          │       (NestJS)           │
                          │                          │
                          │  ┌─────────────────────┐ │
                          │  │  Module: Auth        │ │
                          │  │  Module: Teams       │ │
                          │  │  Module: Tasks       │ │
                          │  │  Module: Contacts    │ │
                          │  │  Module: Documents   │ │
                          │  │  Module: Leads       │ │
                          │  │  Module: Notes       │ │
                          │  │  Module: Notifs      │ │
                          │  │  Module: Finance     │ │
                          │  └─────────────────────┘ │
                          └────────────┬─────────────┘
                                       │
                    ┌──────────────────┼──────────────────┐
                    │                  │                   │
             ┌──────▼──────┐   ┌──────▼──────┐   ┌───────▼──────┐
             │  PostgreSQL │   │    Redis     │   │  S3 / Files  │
             │  (Aurora)   │   │  (Cache +    │   │              │
             │             │   │   Queues)    │   │              │
             └─────────────┘   └─────────────┘   └──────────────┘

1. Cell-Based Architecture

Google's approach to shared-infrastructure isolation.

Every deployment is a "cell" — an independent, hermetically sealed unit containing the full stack. Each product is a cell. Cells share nothing at runtime but share the same codebase modules. A failure in one cell cannot cascade to another.

This is how Google runs Gmail, Maps, and YouTube on shared infrastructure without shared fate.

2. Zero Trust Security Model

Based on NIST 800-207.

No implicit trust. Every request is authenticated, authorized, and encrypted — even internal service-to-service calls. Network location (VPC, subnet) grants zero privilege. Identity is the only perimeter.

3. AWS Multi-Account Isolation

AWS Well-Architected Framework.

Separate AWS accounts are hard security boundaries. Each concern gets its own blast radius. You don't share accounts between products.

II. AWS Multi-Account Strategy

This is the single most important infrastructure decision.

Account Topology

AWS Organization (Root)
│
├── Management Account (billing, SCPs, Organization policies ONLY)
│   └── No workloads ever run here
│
├── OU: Security
│   ├── Security Tooling Account
│   │   ├── GuardDuty delegated admin
│   │   ├── Security Hub aggregator
│   │   ├── CloudTrail organization trail (immutable S3)
│   │   ├── AWS Config aggregator
│   │   └── IAM Access Analyzer
│   │
│   └── Log Archive Account
│       ├── Centralized CloudWatch Logs
│       ├── CloudTrail logs (write-once, read-many)
│       ├── VPC Flow Logs & S3 access logs
│       └── Retention: 7 years (compliance)
│
├── OU: Shared Services
│   ├── Network Hub Account
│   │   ├── Transit Gateway (hub-and-spoke)
│   │   ├── Route 53 Hosted Zones
│   │   ├── AWS Certificate Manager
│   │   └── VPN / Direct Connect termination
│   │
│   ├── Shared Services Account
│   │   ├── ECR (container registry)
│   │   ├── Artifact stores (npm, pip)
│   │   ├── Cognito / Keycloak (IdP)
│   │   └── Secrets Manager
│   │
│   └── CI/CD Account
│       ├── GitHub Actions self-hosted runners
│       ├── CDK Pipelines (deploys via cross-account roles)
│       └── Artifact signing (cosign / Sigstore)
│
├── OU: Workloads
│   ├── Product A — Dev / Staging / Prod (3 accounts)
│   ├── Product B — Dev / Staging / Prod (3 accounts)
│   └── ... (each new product gets 3 accounts)
│
└── OU: Sandbox
    └── Developer Sandbox Accounts

Why This Matters

  • Blast radius isolation — a misconfigured IAM policy in Product A's dev cannot touch Product B's prod. Hard AWS boundary.
  • Cost attribution — each product's AWS bill is isolated automatically.
  • Compliance — Security OU locked with SCPs. Even root can't delete logs or disable GuardDuty.
  • Modularity — new product = one CDK script → 3 accounts with standard config. Minutes.

Cross-Account Access Patterns

CI/CD Account                    Product A Prod Account
┌──────────────┐                ┌──────────────────────┐
│ CDK Pipeline │───AssumeRole──▶│ DeploymentRole       │
│              │   (cross-acct) │ (ECS, RDS, S3 only)  │
└──────────────┘                └──────────────────────┘

Shared Services Account         Product A Prod Account
┌──────────────┐                ┌──────────────────────┐
│ Cognito      │◀──────────────│ API Gateway validates │
│ (IdP)        │  JWT issued    │ JWT via JWKS endpoint │
└──────────────┘                └──────────────────────┘

III. Security Architecture

Defense in depth — five layers from edge to data.

Layer 1: Edge — CloudFront + WAF

Internet → CloudFront (TLS 1.3 only)
              │
              ├── AWS WAF v2
              │   ├── Managed Rules (OWASP Top 10)
              │   ├── Rate limiting (2000 req/5min per IP)
              │   ├── Geo-blocking
              │   ├── Bot Control
              │   └── Custom rules (SQLi, XSS)
              │
              └── AWS Shield Advanced (DDoS)

Layer 2: API Gateway

  • Request validation (JSON Schema)
  • Mutual TLS for service-to-service
  • Usage plans + API keys for external consumers
  • Request/response logging → Log Archive Account
  • Lambda Authorizer or Cognito Authorizer

Layer 3: Application — Zero Trust Pipeline

Every request — even internal — goes through:

Request
  → TLS termination (ALB)
  → JWT verification (signature + expiry + audience + issuer)
  → Tenant extraction (org_id from token claims)
  → Permission evaluation (RBAC + ABAC)
  → Rate limiting (per-user, per-tenant, per-endpoint)
  → Input validation (Zod schemas, strict mode)
  → Audit logging (who, what, when, from where)
  → Business logic
  → Output sanitization (strip internal fields)
  → Response
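The JWT verification step in that pipeline can be sketched as pure claim checks. Signature verification itself would be delegated to a JWT library; the payload shape, field names, and return type below are illustrative assumptions:

```typescript
// Claim-level checks from the zero-trust pipeline: expiry, audience, issuer,
// tenant claim. Signature verification is assumed to have already been done
// by a JWT library; this sketch only validates the decoded payload.
interface JwtPayload {
  sub: string;      // user id
  org_id: string;   // tenant claim, used for tenant extraction downstream
  aud: string;
  iss: string;
  exp: number;      // seconds since epoch
}

function validateClaims(
  payload: JwtPayload,
  expected: { audience: string; issuer: string },
  nowSeconds: number = Math.floor(Date.now() / 1000)
): { ok: boolean; reason?: string } {
  if (payload.exp <= nowSeconds) return { ok: false, reason: 'expired' };
  if (payload.aud !== expected.audience) return { ok: false, reason: 'bad audience' };
  if (payload.iss !== expected.issuer) return { ok: false, reason: 'bad issuer' };
  if (!payload.org_id) return { ok: false, reason: 'missing tenant claim' };
  return { ok: true };
}
```

A request failing any check is rejected before tenant extraction or permission evaluation ever runs.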

Permission Model — Google Zanzibar

Relationship-based access control at scale. Permissions are stored as tuples and checks are graph traversals.

document:doc_123#viewer@user:alice
document:doc_123#editor@team:engineering#member
team:engineering#member@user:bob
org:acme#admin@user:carol

// "Can Bob edit doc_123?"
// → doc_123#editor includes team:engineering#member
// → team:engineering#member includes user:bob
// → YES

Use SpiceDB or OpenFGA (open-source Zanzibar). Every module calls the permission service. No module implements its own auth logic.

CREATE TABLE permission_tuples (
    id             UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    namespace      VARCHAR(100) NOT NULL,
    object_id      VARCHAR(200) NOT NULL,
    relation       VARCHAR(100) NOT NULL,
    subject_ns     VARCHAR(100) NOT NULL,
    subject_id     VARCHAR(200) NOT NULL,
    subject_rel    VARCHAR(100),
    created_at     TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(namespace, object_id, relation, subject_ns, subject_id, subject_rel)
);

CREATE INDEX idx_perm_object  ON permission_tuples(namespace, object_id);
CREATE INDEX idx_perm_subject ON permission_tuples(subject_ns, subject_id);
CREATE INDEX idx_perm_check   ON permission_tuples(namespace, object_id, relation);
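The check itself is a graph traversal over these tuples. A minimal in-memory sketch (a real deployment would call SpiceDB or OpenFGA as noted above; the function and field names are illustrative):

```typescript
// Zanzibar-style check over the permission_tuples shape above.
// A tuple with subjectRel set is a userset, e.g. team:engineering#member.
interface Tuple {
  namespace: string;
  objectId: string;
  relation: string;
  subjectNs: string;
  subjectId: string;
  subjectRel?: string;
}

function check(
  tuples: Tuple[],
  namespace: string,
  objectId: string,
  relation: string,
  userId: string,
  seen: Set<string> = new Set()   // guards against cyclic usersets
): boolean {
  const key = `${namespace}:${objectId}#${relation}`;
  if (seen.has(key)) return false;
  seen.add(key);

  for (const t of tuples) {
    if (t.namespace !== namespace || t.objectId !== objectId || t.relation !== relation) continue;
    // Direct membership: document:doc_123#viewer@user:alice
    if (!t.subjectRel && t.subjectNs === 'user' && t.subjectId === userId) return true;
    // Userset: expand team:engineering#member and recurse.
    if (t.subjectRel && check(tuples, t.subjectNs, t.subjectId, t.subjectRel, userId, seen)) return true;
  }
  return false;
}
```

Resolving "Can Bob edit doc_123?" walks doc_123#editor → team:engineering#member → user:bob, exactly as in the example tuples.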

Layer 4: Data Security

Row-Level Security — even if app code has a bug, the database enforces tenant isolation:

ALTER TABLE tasks ENABLE ROW LEVEL SECURITY;

CREATE POLICY tenant_isolation ON tasks
    USING (org_id = current_setting('app.current_org_id')::UUID);

-- Set on every DB connection:
SET app.current_org_id = 'org_xyz';
-- SELECT * FROM tasks only returns org_xyz's data

  • Encryption: AES-256 at rest, TLS 1.3 in transit, app-level encryption for PII via AWS KMS (per-tenant keys)
  • Data classification: columns tagged PUBLIC / INTERNAL / CONFIDENTIAL / RESTRICTED. Serializers auto-strip based on clearance.
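The classification-based stripping can be sketched as a serializer that drops any field above the caller's clearance. The level names follow the bullet above; the function shape and defaulting rule are assumptions:

```typescript
// Classification-aware serializer: fields tagged above the caller's
// clearance are stripped before the response leaves the service.
const LEVELS = ['PUBLIC', 'INTERNAL', 'CONFIDENTIAL', 'RESTRICTED'] as const;
type Level = typeof LEVELS[number];

function serialize(
  row: Record<string, unknown>,
  fieldTags: Record<string, Level>,
  clearance: Level
): Record<string, unknown> {
  const max = LEVELS.indexOf(clearance);
  const out: Record<string, unknown> = {};
  for (const [field, value] of Object.entries(row)) {
    // Fail closed: untagged fields are treated as RESTRICTED.
    const tag: Level = fieldTags[field] ?? 'RESTRICTED';
    if (LEVELS.indexOf(tag) <= max) out[field] = value;
  }
  return out;
}
```

Failing closed on untagged fields means a newly added column never leaks by default.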

Layer 5: Supply Chain Security

  • Container images scanned with Trivy/Snyk on every build
  • Dependency audit on every PR
  • Image signing with cosign — ECS only runs signed images
  • SBOM generation for every release

IV. Database Architecture

Aurora PostgreSQL — Serverless v2

Feature                         Benefit
3-5x faster than standard PG    Rewritten storage engine, 6-way replication, parallel query
Serverless v2                   Scales 0.5 → 128 ACUs in seconds; dev costs near zero
Up to 15 read replicas          <20ms lag; route dashboards/reports to replicas
Global Database                 Multi-region replication, <1s lag
PostgreSQL-compatible           Standard extensions, ORMs, and tools work

               Write Path                 Read Path
                   │                          │
                   ▼                          ▼
            ┌──────────────┐        ┌──────────────────┐
            │ Aurora Writer │        │ Aurora Reader x3  │
            │  (Primary)    │        │  (Auto-scaling)   │
            └──────┬───────┘        └────────┬─────────┘
                   │                         │
                   ▼                         ▼
            ┌──────────────────────────────────┐
            │  Aurora Storage (distributed,     │
            │  6-way replicated, auto-healing)  │
            └──────────────────────────────────┘

      ┌──────────────────────────────────────────┐
      │            Redis Cluster                  │
      │  Sessions  │  Query Cache  │  Rate Limits │
      └──────────────────────────────────────────┘

Caching Strategy — Cache-Aside with Invalidation

async function getTask(taskId: string, orgId: string): Promise<Task> {
  const cacheKey = `task:${orgId}:${taskId}`;

  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached) as Task;

  const { rows } = await auroraReader.query(
    'SELECT * FROM tasks WHERE id = $1 AND org_id = $2',
    [taskId, orgId]
  );
  const task = rows[0];

  await redis.setex(cacheKey, 300, JSON.stringify(task)); // 5-minute TTL
  return task;
}

async function updateTask(taskId: string, orgId: string, data: Partial<Task>) {
  await auroraWriter.query(/* ... */);
  await redis.del(`task:${orgId}:${taskId}`); // invalidate so readers re-fetch
  await eventBus.emit('task.updated', { taskId, orgId, changes: data });
}

Data Migration from Legacy Systems

Legacy System                 AWS
┌──────────────┐           ┌───────────────────────────┐
│ SQL Server   │           │  DMS Replication Instance  │
│ Oracle       │──DMS ────▶│  ├── Full Load (bulk)      │
│ MySQL 5.x    │  (CDC)    │  └── CDC (continuous)      │
│ MongoDB      │           │         │                  │
└──────────────┘           │         ▼                  │
                           │  ┌──────────────┐         │
                           │  │  Aurora PG    │         │
                           │  └──────────────┘         │
                           └───────────────────────────┘

Migration Steps

  1. SCT — Schema Conversion Tool analyzes the legacy schema and converts most of it (often 90%+) to PG DDL automatically
  2. DMS Full Load — bulk copies all data (handles type conversions, encoding)
  3. DMS CDC — continuous replication while legacy still runs. Zero downtime.
  4. Validation — row counts + data integrity verification
  5. Cutover — flip DNS, stop CDC, legacy goes read-only

Non-Database Legacy Data (CSV, Excel, XML, APIs)

Source (S3 upload) → Step Function
    ├── Validate schema
    ├── Transform (dates, currencies, encodings)
    ├── Deduplicate
    ├── Map to target schema (configurable)
    ├── Batch insert into Aurora
    ├── Generate migration report
    └── Notify (success/failure + row counts)
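The Deduplicate step might look like the sketch below. Keying on normalized email is an illustrative choice; a real pipeline would make the key configurable per import:

```typescript
// Deduplicate step from the ETL pipeline: collapse rows sharing a
// normalized key (here: trimmed, lowercased email), keeping the first
// occurrence and reporting how many were dropped for the migration report.
interface ImportRow {
  email: string;
  [field: string]: unknown;
}

function deduplicate(rows: ImportRow[]): { unique: ImportRow[]; dropped: number } {
  const seen = new Set<string>();
  const unique: ImportRow[] = [];
  for (const row of rows) {
    const key = row.email.trim().toLowerCase();
    if (seen.has(key)) continue;
    seen.add(key);
    unique.push(row);
  }
  return { unique, dropped: rows.length - unique.length };
}
```

The dropped count feeds the "Generate migration report" step.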

V. Application — Module System

Core Services (Always Present)

Module             Responsibility
Auth               Signup, login, logout, password reset, MFA, OAuth, session management
Users              Profiles, preferences, avatars
Organizations      Multi-tenancy, org settings, billing tier
Teams              Create teams, add/remove members, team roles
Permissions        Zanzibar RBAC + ABAC, permission checks as middleware
Entity Links       Universal cross-module linking (any entity to any entity)
Notifications      In-app, email, push, webhooks (event-driven)
Files / Media      S3 upload/download, presigned URLs, file metadata
Audit Log          Immutable record of who did what, when (enterprise compliance)
Search             Full-text search via PG tsvector → OpenSearch when needed
Settings / Config  Feature flags, app config, per-tenant configuration

Core Infrastructure Services

Service               Technology                    Purpose
Event Bus             SNS/SQS or Redis Streams      Modules communicate via events, not direct calls. When a task is created, an event fires and the notification module picks it up.
Job Queue             BullMQ (Redis-backed)         Background jobs: emails, report generation, data exports, scheduled tasks
Caching Layer         Redis (ElastiCache)           Session data, frequently accessed data, rate-limiting counters
Logging & Monitoring  CloudWatch + structured JSON  Centralized, queryable logs. Optionally Datadog or Grafana.
Health Checks         Per-module /health            Every module exposes a health endpoint for load balancers and orchestration

Monolith-first, modular-ready. Start as a well-structured modular monolith (one deployable, many internal modules). Extract into microservices only when a specific module needs independent scaling. This avoids premature complexity.

Monorepo Structure

platform/
├── packages/
│   ├── core/                     # Shared kernel — NEVER optional
│   │   ├── auth/                 # JWT, sessions, OAuth
│   │   ├── iam/                  # Zanzibar permission engine
│   │   ├── tenancy/              # Org isolation, RLS
│   │   ├── events/               # Event bus abstraction
│   │   ├── storage/              # S3 abstraction
│   │   ├── notifications/        # Multi-channel engine
│   │   ├── audit/                # Immutable audit log
│   │   ├── search/               # Full-text search
│   │   ├── migrations/           # ETL framework
│   │   └── common/               # DTOs, validators, errors
│   │
│   ├── modules/                  # Optional business modules
│   │   ├── teams/
│   │   ├── tasks/
│   │   ├── contacts/
│   │   ├── documents/
│   │   ├── notes/
│   │   ├── finance/
│   │   └── [custom]/
│   │
│   ├── sdk/                      # Auto-generated TS SDK
│   └── ui/                       # Shared UI components
│
├── apps/
│   ├── api/                      # Deployable API server
│   ├── worker/                   # Background jobs
│   └── web/                      # Next.js frontend
│
├── infra/                        # AWS CDK
│   ├── lib/
│   │   ├── account-baseline.ts
│   │   ├── networking.ts
│   │   ├── database.ts
│   │   ├── compute.ts
│   │   ├── cdn.ts
│   │   ├── security.ts
│   │   └── observability.ts
│   └── bin/
│       ├── deploy-shared-services.ts
│       └── deploy-product.ts
│
└── tools/
    ├── migrate/                  # Data migration CLI
    ├── scaffold/                 # New module generator
    └── sdk-gen/                  # OpenAPI → SDK

Module Registration

interface PlatformModule {
  name: string;
  version: string;
  dependencies: string[];

  // Lifecycle hooks are optional: a module implements only what it needs.
  onRegister?(container: DependencyContainer): void;
  onDatabaseSetup?(migrator: Migrator): Promise<void>;
  onPermissionsSetup?(engine: PermissionEngine): void;
  onEventsSetup?(bus: EventBus): void;
  onReady?(): Promise<void>;
  onShutdown?(): Promise<void>;
}
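Before firing these hooks, the platform has to load modules in dependency order. One way to do that is a topological sort over the declared dependencies; the sketch below assumes only the name/dependencies fields from the interface above, everything else is illustrative:

```typescript
// Resolve module load order from declared dependencies via depth-first
// topological sort. Dependencies not present in the module list (e.g.
// 'core', assumed pre-loaded) are skipped; cycles are rejected.
interface ModuleLike {
  name: string;
  dependencies: string[];
}

function resolveLoadOrder(modules: ModuleLike[]): string[] {
  const byName = new Map<string, ModuleLike>(
    modules.map(m => [m.name, m] as [string, ModuleLike])
  );
  const order: string[] = [];
  const state = new Map<string, 'visiting' | 'done'>();

  function visit(name: string): void {
    if (state.get(name) === 'done') return;
    if (state.get(name) === 'visiting') throw new Error(`dependency cycle at ${name}`);
    state.set(name, 'visiting');
    for (const dep of byName.get(name)?.dependencies ?? []) {
      if (byName.has(dep)) visit(dep);
    }
    state.set(name, 'done');
    order.push(name);  // pushed after its dependencies
  }

  for (const m of modules) visit(m.name);
  return order;
}
```

The registry would then call onRegister, onDatabaseSetup, and the other hooks in this order, and onShutdown in reverse.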

Example: TasksModule Implementation

export class TasksModule implements PlatformModule {
  name = 'tasks';
  version = '1.0.0';
  dependencies = ['core', 'teams'];

  onRegister(container) {
    container.register(TasksService);
    container.register(TasksController);
    container.register(TaskBoardsController);
  }

  onDatabaseSetup(migrator) {
    return migrator.runModuleMigrations('tasks');
  }

  onPermissionsSetup(engine) {
    engine.defineNamespace('task', {
      relations: {
        org: 'organization',
        owner: 'user',
        assignee: 'user | team#member',
        viewer: 'user | team#member | org#member',
        editor: 'user | team#member',
      },
      permissions: {
        view: 'viewer + editor + owner + org->admin',
        edit: 'editor + owner',
        delete: 'owner + org->admin',
        assign: 'editor + owner',
      }
    });
  }

  onEventsSetup(bus) {
    bus.subscribe('team.member_removed', this.handleTeamMemberRemoved);
    bus.subscribe('contact.deleted', this.handleContactDeleted);
  }
}

Per-Product Configuration

import { TasksModule } from '@platform/modules/tasks';
import { ContactsModule } from '@platform/modules/contacts';
import { DocumentsModule } from '@platform/modules/documents';
import { NotesModule } from '@platform/modules/notes';
// import { FinanceModule } from '@platform/modules/finance';

export const platformConfig = {
  modules: [
    new TasksModule(),
    new ContactsModule(),
    new DocumentsModule(),
    new NotesModule(),
    // FinanceModule — not loaded, endpoints don't exist,
    // DB tables not created, events not subscribed
  ],

  aws: {
    region: 'us-east-1',
    accountId: process.env.AWS_ACCOUNT_ID,
  },

  database: {
    writer: process.env.AURORA_WRITER_ENDPOINT,
    reader: process.env.AURORA_READER_ENDPOINT,
  },

  features: {
    enableRedlining: true,
    enableKanban: true,
    maxTeamSize: 50,
  }
};

Universal linking — any entity to any entity. This is the glue that lets teams, tasks, docs, leads all reference each other.

CREATE TABLE entity_links (
    id            UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    source_type   VARCHAR(50) NOT NULL,
    source_id     UUID NOT NULL,
    target_type   VARCHAR(50) NOT NULL,
    target_id     UUID NOT NULL,
    relationship  VARCHAR(50) NOT NULL,
    org_id        UUID NOT NULL,
    created_at    TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(source_type, source_id, target_type, target_id, relationship)
);

EntityLinkService Implementation

class EntityLinkService {
  async link(params: {
    source: { type: string; id: string };
    target: { type: string; id: string };
    relationship: string;
    orgId: string;
    actorId: string;
  }) {
    // 1. Verify both entities exist (calls respective module)
    await this.verifyEntity(params.source);
    await this.verifyEntity(params.target);

    // 2. Check permission (actor must have 'link' permission on both)
    await this.permissionEngine.check(params.actorId, 'link', params.source);
    await this.permissionEngine.check(params.actorId, 'link', params.target);

    // 3. Create the link
    await this.repository.createLink(params);

    // 4. Emit event (other modules can react)
    await this.eventBus.emit('entity.linked', params);

    // 5. Audit
    await this.auditLog.record('entity.linked', params);
  }

  async getLinkedEntities(
    source: { type: string; id: string },
    targetType: string,
    orgId: string
  ) {
    return this.repository.findLinks(source, targetType, orgId);
  }
}

Auto-Generated Endpoints

Endpoint                                      Description
POST   /api/v1/links                          Create a link between any two entities
DELETE /api/v1/links/:id                      Remove a link
GET    /api/v1/teams/:id/linked/documents     Docs linked to a team
GET    /api/v1/teams/:id/linked/tasks         Tasks linked to a team
GET    /api/v1/documents/:id/linked/contacts  Contacts linked to a doc
GET    /api/v1/contacts/:id/linked/*          Everything linked to a contact

VI. Business Modules (Deep Dive)

The "Modules Within Modules" Pattern

Business modules aren't flat. A module like Documents contains submodules (Redline, Approvals, Signatures) that have their own tables, endpoints, and events — but can't exist without the parent.

// Submodules declare their parent as a hard dependency
export class RedlineModule implements PlatformModule {
  name = 'redline';
  version = '1.0.0';
  dependencies = ['core', 'documents']; // can't load without documents
  onRegister(container) {
    container.register(RedlineService);
    container.register(DiffEngine);
    container.register(RedlineController);
  }
}

// Parent module optionally loads submodules
export class DocumentsModule implements PlatformModule {
  name = 'documents';
  version = '1.0.0';
  dependencies = ['core', 'files'];
  submodules = ['redline', 'approvals', 'signatures']; // optional
}

If a product needs Documents but not Redline, just don't load the submodule. The parent works without it.

Cross-Module Data Loading Endpoints

Every module exposes standard "loader" endpoints for other modules and frontends — entity link UIs, activity feeds, notifications, global search, dashboard widgets.

interface ModuleLoader {
  resolve(ids: string[], orgId: string): Promise<EntitySummary[]>;
  search(query: string, orgId: string, limit?: number): Promise<EntitySummary[]>;
  count(orgId: string, filter?: Record<string, unknown>): Promise<number>;
}

interface EntitySummary {
  id: string;
  type: string;       // 'task', 'contact', 'document'
  title: string;
  subtitle?: string;  // "Acme Corp", "Due tomorrow"
  status?: string;
  avatar?: string;
  url: string;        // deep link path
}

// Auto-generated for every module:
POST   /api/v1/resolve          — { type: "task", ids: ["id1","id2"] } → summaries
GET    /api/v1/search?type=contact&q=acme   — cross-module search
GET    /api/v1/counts           — { tasks: 12, contacts: 340, documents: 28 }

Tasks Module (Deep Dive)  Optional

Same data powers Kanban boards, traditional lists, calendar views, and Gantt charts. The view_type on boards determines rendering — not data structure.

Core Schema

CREATE TABLE tasks (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    title VARCHAR(500) NOT NULL,
    description TEXT,
    status VARCHAR(50) NOT NULL DEFAULT 'todo',
    priority VARCHAR(20),          -- 'urgent', 'high', 'medium', 'low'
    due_date TIMESTAMPTZ,
    start_date TIMESTAMPTZ,
    board_id UUID REFERENCES task_boards(id),
    column_id UUID REFERENCES task_columns(id),
    position FLOAT,                -- fractional indexing for drag-and-drop
    parent_id UUID REFERENCES tasks(id),  -- subtask (unlimited nesting)
    metadata JSONB DEFAULT '{}',   -- custom fields, labels, story points
    assignee_id UUID,
    created_by UUID NOT NULL,
    deleted_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE task_boards (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    name VARCHAR(200),
    view_type VARCHAR(50) DEFAULT 'kanban',  -- kanban, list, calendar, gantt
    config JSONB DEFAULT '{}'
);

CREATE TABLE task_columns (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    board_id UUID REFERENCES task_boards(id),
    name VARCHAR(200),
    position FLOAT,
    config JSONB DEFAULT '{}'  -- color, WIP limits, automation rules
);
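The position FLOAT columns support fractional indexing: a card dropped between two neighbors gets the midpoint, so a drag writes exactly one row. A sketch (the spacing constants are arbitrary assumptions; positions need periodic rebalancing once midpoints get too close to distinguish):

```typescript
// Fractional indexing for drag-and-drop ordering: compute a position
// between the neighbors at the drop point. null means "no neighbor".
function positionBetween(before: number | null, after: number | null): number {
  if (before === null && after === null) return 1000;  // first card in the column
  if (before === null) return (after as number) / 2;   // dropped at the top
  if (after === null) return before + 1000;            // dropped at the bottom
  return (before + after) / 2;                         // dropped between two cards
}
```

This is why position is a FLOAT rather than an INT: integer positions would force renumbering every following row on each move.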

Checklists

CREATE TABLE task_checklists (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    title VARCHAR(200) NOT NULL,
    position FLOAT,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE checklist_items (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    checklist_id UUID NOT NULL REFERENCES task_checklists(id) ON DELETE CASCADE,
    title VARCHAR(500) NOT NULL,
    done BOOLEAN DEFAULT false,
    done_by UUID,
    done_at TIMESTAMPTZ,
    position FLOAT,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Task Dependencies

Critical for Gantt views and workflow automation. Circular dependency detection via topological sort.

CREATE TABLE task_dependencies (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    depends_on_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    type VARCHAR(20) DEFAULT 'finish_to_start',
    -- 'finish_to_start', 'start_to_start', 'finish_to_finish'
    created_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(task_id, depends_on_id),
    CHECK(task_id != depends_on_id)
);
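The circular-dependency guard reduces to a reachability check before insert: adding task → depends_on creates a cycle exactly when depends_on can already reach task. An in-memory sketch (in practice the edge set would come from a recursive CTE or a cached graph):

```typescript
// Reject a new dependency edge if it would close a cycle in the
// task_dependencies graph. Edges mirror the table: task -> depends_on.
type Edge = { taskId: string; dependsOnId: string };

function wouldCreateCycle(edges: Edge[], taskId: string, dependsOnId: string): boolean {
  const adj = new Map<string, string[]>();
  for (const e of edges) {
    const list = adj.get(e.taskId) ?? [];
    list.push(e.dependsOnId);
    adj.set(e.taskId, list);
  }
  // DFS from dependsOnId: if we can reach taskId, the new edge closes a loop.
  const stack = [dependsOnId];
  const seen = new Set<string>();
  while (stack.length > 0) {
    const cur = stack.pop()!;
    if (cur === taskId) return true;
    if (seen.has(cur)) continue;
    seen.add(cur);
    for (const next of adj.get(cur) ?? []) stack.push(next);
  }
  return false;
}
```

The CHECK constraint above catches self-loops; this traversal catches the transitive ones.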

Watchers

CREATE TABLE task_watchers (
    task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
    user_id UUID NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY(task_id, user_id)
);

Recurring Tasks

CREATE TABLE task_recurrence_rules (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    template_task_id UUID NOT NULL REFERENCES tasks(id),
    cron_expression VARCHAR(50) NOT NULL,    -- '0 9 1 * *' (1st of month at 9am)
    next_run_at TIMESTAMPTZ NOT NULL,
    assignee_strategy VARCHAR(20) DEFAULT 'same',  -- 'same', 'rotate', 'round_robin'
    enabled BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Time Tracking (Optional Submodule)

CREATE TABLE time_entries (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    task_id UUID REFERENCES tasks(id),
    user_id UUID NOT NULL,
    started_at TIMESTAMPTZ NOT NULL,
    ended_at TIMESTAMPTZ,              -- NULL = timer running
    duration_seconds INT,
    description TEXT,
    billable BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Task Automations

Rule engine that fires on task events. Stored as JSON rules, evaluated by the event bus.

CREATE TABLE task_automations (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    board_id UUID REFERENCES task_boards(id),
    name VARCHAR(200),
    trigger_event VARCHAR(100) NOT NULL,  -- 'task.status_changed', 'task.overdue'
    conditions JSONB DEFAULT '[]',
    actions JSONB DEFAULT '[]',
    enabled BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

// Example rules:
{ trigger: 'task.status_changed',
  conditions: [{ field: 'status', to: 'done' }],
  actions: [
    { type: 'notify', target: 'creator', template: 'task_completed' },
    { type: 'set_field', field: 'completed_at', value: '{{now}}' }
  ] }

{ trigger: 'task.overdue',
  conditions: [{ field: 'overdue_days', gte: 3 }],
  actions: [
    { type: 'reassign', target: 'team_lead' },
    { type: 'set_field', field: 'priority', value: 'urgent' },
    { type: 'notify', target: 'assignee', template: 'task_escalated' }
  ] }
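A sketch of how the event bus might evaluate these rules. The condition operators ('to', 'gte') are taken from the examples above; the evaluator shape and payload format are assumptions:

```typescript
// Evaluate a stored automation rule against an incoming task event:
// return the rule's actions when the trigger and all conditions match.
interface Condition { field: string; to?: unknown; gte?: number; }
interface Rule { trigger: string; conditions: Condition[]; actions: unknown[]; }

function matchingActions(
  rule: Rule,
  event: { name: string; payload: Record<string, unknown> }
): unknown[] {
  if (rule.trigger !== event.name) return [];
  const ok = rule.conditions.every(c => {
    const value = event.payload[c.field];
    if (c.to !== undefined && value !== c.to) return false;           // equality on new value
    if (c.gte !== undefined && (typeof value !== 'number' || value < c.gte)) return false;
    return true;
  });
  return ok ? rule.actions : [];
}
```

The returned actions would then be dispatched to their handlers (notify, set_field, reassign) as background jobs.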

Full Task Endpoints

// Core CRUD
GET    /api/v1/tasks                          — list (standard pagination/filtering)
POST   /api/v1/tasks                          — create
GET    /api/v1/tasks/:id                      — get (includes subtasks, checklists, watchers)
PATCH  /api/v1/tasks/:id                      — update
DELETE /api/v1/tasks/:id                      — soft delete

// Views
GET    /api/v1/tasks?board_id=X&view=kanban   — grouped by column
GET    /api/v1/tasks?board_id=X&view=list     — flat sorted list
GET    /api/v1/tasks?view=calendar&from=X&to=Y — calendar range
GET    /api/v1/tasks?view=gantt&board_id=X    — with dependencies & timeline

// Board management
POST   /api/v1/boards                         — create board
PATCH  /api/v1/boards/:id/columns             — add/reorder columns
PATCH  /api/v1/tasks/:id/move                 — reorder / move between columns

// Hierarchy & Checklists
POST   /api/v1/tasks/:id/subtasks             — create subtask
POST   /api/v1/tasks/:id/checklists           — add checklist
PATCH  /api/v1/checklists/:id/items/:itemId   — toggle done

// Dependencies & Watchers
POST   /api/v1/tasks/:id/dependencies         — add dependency
POST   /api/v1/tasks/:id/watchers             — add watcher

// Time Tracking
POST   /api/v1/tasks/:id/timer/start          — start timer
POST   /api/v1/tasks/:id/timer/stop           — stop timer
GET    /api/v1/time-entries?task=X&from=Y&to=Z — time report

// Automations
GET    /api/v1/boards/:id/automations         — list rules
POST   /api/v1/boards/:id/automations         — create rule

Contacts / Leads / Client Management (Deep Dive)  Optional

Core Schema

Leads and contacts share one table with a type discriminator. A lead converts to a client by changing type.

CREATE TABLE contacts (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    type VARCHAR(20) NOT NULL,       -- 'lead', 'client', 'vendor', 'partner'
    status VARCHAR(50),              -- 'active', 'inactive', 'churned'
    first_name VARCHAR(100),
    last_name VARCHAR(100),
    email VARCHAR(255),
    phone VARCHAR(50),
    company VARCHAR(200),
    job_title VARCHAR(200),
    avatar_url VARCHAR(1000),
    source VARCHAR(50),              -- 'website', 'referral', 'cold_outreach', 'import'
    custom_fields JSONB DEFAULT '{}',
    pipeline_stage VARCHAR(50),
    assigned_to UUID,
    deleted_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_contacts_org_type ON contacts(org_id, type);
CREATE INDEX idx_contacts_email ON contacts(org_id, email);
CREATE INDEX idx_contacts_company ON contacts(org_id, company);

Pipelines & Stages

CREATE TABLE pipelines (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    name VARCHAR(200) NOT NULL,        -- 'Sales Pipeline', 'Onboarding Pipeline'
    type VARCHAR(20) DEFAULT 'deal',   -- 'deal', 'lead', 'onboarding'
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE pipeline_stages (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    pipeline_id UUID NOT NULL REFERENCES pipelines(id) ON DELETE CASCADE,
    name VARCHAR(100) NOT NULL,        -- 'Qualified', 'Proposal', 'Negotiation', 'Closed Won'
    position FLOAT,
    color VARCHAR(7),
    probability INT,                   -- 10, 25, 50, 75, 100 — for weighted pipeline
    is_closed BOOLEAN DEFAULT false,
    is_won BOOLEAN DEFAULT false,      -- closed-won vs closed-lost
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Deals

One contact can have multiple deals. Deals have value, probability, expected close date.

CREATE TABLE deals (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    contact_id UUID NOT NULL REFERENCES contacts(id),
    pipeline_id UUID NOT NULL REFERENCES pipelines(id),
    stage_id UUID NOT NULL REFERENCES pipeline_stages(id),
    title VARCHAR(500) NOT NULL,
    value DECIMAL(15, 2),
    currency VARCHAR(3) DEFAULT 'USD',
    probability INT,                   -- auto-set from stage, overridable
    expected_close_date DATE,
    actual_close_date DATE,
    lost_reason VARCHAR(200),
    assigned_to UUID,
    metadata JSONB DEFAULT '{}',
    deleted_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_deals_pipeline ON deals(org_id, pipeline_id, stage_id);
CREATE INDEX idx_deals_contact ON deals(org_id, contact_id);
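The weighted forecast served by the /deals/forecast endpoint below is a straightforward aggregation. Sketched in TypeScript for clarity (in production this would be a SQL SUM over open deals):

```typescript
// Weighted pipeline forecast: SUM(value * probability) over open deals.
// probability is a percent (0-100) as in the deals/pipeline_stages schema.
interface DealRow {
  value: number;        // DECIMAL(15,2) in the table; plain number in this sketch
  probability: number;  // 0-100
  isClosed: boolean;
}

function weightedForecast(deals: DealRow[]): number {
  return deals
    .filter(d => !d.isClosed)                               // closed deals don't forecast
    .reduce((sum, d) => sum + d.value * (d.probability / 100), 0);
}
```

Stage-level probabilities (10, 25, 50, 75, 100) flow into this automatically since deals inherit probability from their stage.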

Interactions / Touchpoints

Every email, call, meeting logged against a contact. "Last contacted 3 days ago."

CREATE TABLE interactions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    contact_id UUID NOT NULL REFERENCES contacts(id),
    deal_id UUID REFERENCES deals(id),       -- optionally tied to a deal
    type VARCHAR(20) NOT NULL,               -- 'email', 'call', 'meeting', 'note', 'sms'
    direction VARCHAR(10),                   -- 'inbound', 'outbound'
    subject VARCHAR(500),
    body TEXT,
    duration_seconds INT,                    -- for calls/meetings
    occurred_at TIMESTAMPTZ NOT NULL,
    logged_by UUID NOT NULL,
    metadata JSONB DEFAULT '{}',
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_interactions_contact ON interactions(org_id, contact_id, occurred_at DESC);
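The "last contacted 3 days ago" label falls out of the newest occurred_at, which the DESC index above serves cheaply. A minimal sketch of the elapsed-days computation:

```typescript
// Whole days elapsed since the contact's most recent interaction.
function daysSinceLastContact(lastOccurredAt: Date, now: Date): number {
  const ms = now.getTime() - lastOccurredAt.getTime();
  return Math.floor(ms / (24 * 60 * 60 * 1000));
}
```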

Full CRM Endpoints

// Contacts
GET    /api/v1/contacts                        — list with standard filtering
POST   /api/v1/contacts                        — create
GET    /api/v1/contacts/:id                    — full profile (deals, interactions, links)
POST   /api/v1/contacts/:id/convert            — lead → client (changes type, triggers event)
GET    /api/v1/contacts/:id/timeline           — merged activity + interactions

// Deals
GET    /api/v1/deals?pipeline_id=X             — deal board (grouped by stage)
POST   /api/v1/deals                           — create deal (linked to contact)
PATCH  /api/v1/deals/:id/move                  — move between pipeline stages
GET    /api/v1/deals/forecast                  — weighted pipeline: SUM(value * probability)

// Pipelines
GET    /api/v1/pipelines                       — list org pipelines
POST   /api/v1/pipelines                       — create
PATCH  /api/v1/pipelines/:id/stages            — configure stages

// Interactions
GET    /api/v1/contacts/:id/interactions       — interaction history
POST   /api/v1/contacts/:id/interactions       — log interaction
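The forecast endpoint above (`SUM(value * probability)`) is a straightforward fold over open deals. A minimal sketch — the `Deal` shape and `forecast` helper are illustrative, not the actual service code:

```typescript
interface Deal {
  value: number;
  probability: number; // 0-100, auto-set from stage
}

// Weighted pipeline: each deal contributes its value scaled by win probability.
function forecast(deals: Deal[]) {
  const totalPipelineValue = deals.reduce((sum, d) => sum + d.value, 0);
  const weightedValue = deals.reduce((sum, d) => sum + d.value * (d.probability / 100), 0);
  return { totalPipelineValue, weightedValue };
}
```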

Documents / Contracts (Deep Dive)  Optional

Core Schema

CREATE TABLE documents (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    title VARCHAR(500),
    type VARCHAR(50),              -- 'contract', 'proposal', 'note', 'template', 'sow'
    content TEXT,                  -- rich text / markdown / structured JSON
    status VARCHAR(50),            -- 'draft', 'internal_review', 'client_review', 'sent', 'signed'
    version INT DEFAULT 1,
    parent_id UUID REFERENCES documents(id),   -- version chain
    template_id UUID REFERENCES documents(id), -- created from this template
    metadata JSONB DEFAULT '{}',
    file_url VARCHAR(1000),
    expires_at TIMESTAMPTZ,        -- contract expiration
    created_by UUID NOT NULL,
    deleted_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_documents_org_type ON documents(org_id, type, status);

Submodule: Redline / Revision Engine

Key design decision: field-based change tracking, NOT algorithmic diffing.

Generic text diffing (diff, diff-match-patch) produces unreliable results on rich HTML contracts — breaks on tag boundaries, misidentifies changes. Instead: track changes at the field level. Contract HTML uses data-field attributes on editable sections, so changes map cleanly to the document structure.

Two change-tracking modes:
1. Structured edits (Edit Agreement modal) — user changes specific fields (payment terms, duration). Each field change becomes a document_revision_changes row.
2. Free-text edits (TipTap editor) — user edits the body directly. Changes stored inline via custom TipTap marks (data-inserted, data-deleted).

CREATE TABLE document_revisions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    document_id UUID NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
    revision_number INT NOT NULL,
    content TEXT NOT NULL,             -- full HTML snapshot at this revision
    change_description VARCHAR(500),
    feedback_id UUID,                  -- if created from a suggestion
    created_by_id UUID,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(document_id, revision_number)
);

-- Structured field changes — the ONLY reliable way to track changes
CREATE TABLE document_revision_changes (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    revision_id UUID NOT NULL REFERENCES document_revisions(id) ON DELETE CASCADE,
    field_name VARCHAR(200) NOT NULL,  -- 'cash_mg_amount', 'payment_terms'
    field_label VARCHAR(200),          -- 'Cash MG Amount', 'Payment Terms'
    old_value TEXT NOT NULL,
    new_value TEXT NOT NULL,
    change_order INT DEFAULT 0,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Revision 0 pattern: On first-ever revision, auto-create revision 0 with original content as baseline. Always possible to diff back to original.
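A sketch of that baseline rule — `appendRevision` is a hypothetical helper operating on in-memory rows, not the actual service code:

```typescript
interface Revision {
  revisionNumber: number;
  content: string; // full HTML snapshot
}

// On the first-ever revision, insert revision 0 (the original) before the new one,
// so a diff back to the original is always possible.
function appendRevision(existing: Revision[], originalContent: string, newContent: string): Revision[] {
  const out = [...existing];
  if (out.length === 0) out.push({ revisionNumber: 0, content: originalContent });
  const next = Math.max(...out.map(r => r.revisionNumber)) + 1;
  out.push({ revisionNumber: next, content: newContent });
  return out;
}
```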

Revision Diff Rendering (Cheerio + data-field)

Highlighting done at render time using stored field changes — NOT by running a diff algorithm.

import * as cheerio from 'cheerio';

const HL_ADD_STYLE = 'background-color: rgba(239, 68, 68, 0.25); color: #dc2626; font-weight: 600;';
const HL_DEL_STYLE = 'background-color: rgba(107, 114, 128, 0.2); color: #6b7280; text-decoration: line-through;';

// Minimal HTML escaper for the injected values (assumed helper)
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;').replace(/</g, '&lt;')
          .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

function highlightByField(html, fieldName, oldValue, newValue, showOldText) {
  let replacement = '';
  if (showOldText && oldValue && oldValue !== newValue) {
    replacement += `<span style="${HL_DEL_STYLE}">${escapeHtml(oldValue)}</span>`;
  }
  if (newValue?.trim()) {
    replacement += `<span style="${HL_ADD_STYLE}">${escapeHtml(newValue)}</span>`;
  }

  // Strategy 1: Find element by data-field attribute (reliable)
  const $ = cheerio.load(html);
  const $el = $(`[data-field="${fieldName}"]`);
  if ($el.length) { $el.html(replacement); return $.html(); }

  // Strategy 2: Direct text search fallback — replace the raw old value
  return oldValue ? html.split(oldValue).join(replacement) : html;
}

Display modes: Cumulative (all changes since original), Historical (one revision's changes), Eye toggle (showOldText shows/hides deleted text).

TipTap Track Changes Editor

For free-text edits — Word-style "Track Changes" in a web editor.

Packages: @tiptap/react, @tiptap/core, @tiptap/pm, @tiptap/starter-kit, @tiptap/extension-underline, cheerio (server-side HTML parsing)

// Custom TipTap marks for tracked changes
import { Mark, mergeAttributes } from '@tiptap/core';

const InsertMark = Mark.create({
  name: 'trackInsert',
  parseHTML() { return [{ tag: 'span[data-inserted]' }]; },
  renderHTML({ HTMLAttributes }) {
    return ['span', mergeAttributes({
      style: 'background: rgba(34,197,94,0.25); color: #16a34a; border-bottom: 2px solid #16a34a;',
      'data-inserted': 'true'
    }, HTMLAttributes), 0];
  },
});

const DeleteMark = Mark.create({
  name: 'trackDelete',
  parseHTML() { return [{ tag: 'span[data-deleted]' }]; },
  renderHTML({ HTMLAttributes }) {
    return ['span', mergeAttributes({
      style: 'background: rgba(239,68,68,0.2); color: #dc2626; text-decoration: line-through;',
      'data-deleted': 'true'
    }, HTMLAttributes), 0];
  },
});

How it works:

  • Typing → new text wrapped in <span data-inserted="true"> (green)
  • Backspace/Delete → text wrapped in <span data-deleted="true"> (red strikethrough), skips already-deleted spans
  • Accept all → strip deleted spans, unwrap inserted spans (keep text)
  • Reject all → strip inserted spans, unwrap deleted spans (restore text)
// Accept: remove deleted text, keep inserted text
function acceptAll(html) {
  return html
    .replace(/<span[^>]*data-deleted="true"[^>]*>.*?<\/span>/g, '')
    .replace(/<span[^>]*data-inserted="true"[^>]*>(.*?)<\/span>/g, '$1');
}

// Reject: remove inserted text, restore deleted text
function rejectAll(html) {
  return html
    .replace(/<span[^>]*data-inserted="true"[^>]*>.*?<\/span>/g, '')
    .replace(/<span[^>]*data-deleted="true"[^>]*>(.*?)<\/span>/g, '$1');
}

Edit Suggestions (External Review)

For external parties reviewing contracts — side-by-side comparison, not inline diffs. Approving creates a new revision.

CREATE TABLE document_edit_suggestions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    document_id UUID NOT NULL REFERENCES documents(id),
    section VARCHAR(200),
    field_key VARCHAR(200),
    original_text TEXT NOT NULL,
    suggested_text TEXT NOT NULL,
    creator_request TEXT,              -- what the reviewer asked for
    ai_reasoning TEXT,                 -- optional AI analysis
    status VARCHAR(20) DEFAULT 'PENDING', -- 'PENDING','APPROVED','REJECTED'
    reviewed_by UUID,
    reviewed_at TIMESTAMPTZ,
    review_notes TEXT,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Submodule: Clause Library

Reusable contract sections. Orgs build a library; templates are assembled from clauses.

CREATE TABLE clauses (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    title VARCHAR(200) NOT NULL,       -- 'Standard Indemnity', 'Net-30 Payment Terms'
    category VARCHAR(50),              -- 'indemnity', 'payment', 'confidentiality'
    content TEXT NOT NULL,
    version INT DEFAULT 1,
    is_default BOOLEAN DEFAULT false,  -- included in new contracts by default
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE document_clauses (
    document_id UUID NOT NULL REFERENCES documents(id) ON DELETE CASCADE,
    clause_id UUID NOT NULL REFERENCES clauses(id),
    position FLOAT,
    override_content TEXT,             -- NULL = use clause content, set = custom version
    PRIMARY KEY(document_id, clause_id)
);
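Assembling a contract body from its clauses is then a sort-and-map over these rows. A sketch with assumed in-memory shapes:

```typescript
interface Clause { id: string; content: string; }
interface DocumentClause { clauseId: string; position: number; overrideContent: string | null; }

// Order by fractional position; a per-document override wins over the library content.
function assembleBody(links: DocumentClause[], library: Map<string, Clause>): string {
  return [...links]
    .sort((a, b) => a.position - b.position)
    .map(l => l.overrideContent ?? library.get(l.clauseId)?.content ?? '')
    .join('\n\n');
}
```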

Submodule: Approval Workflow

Multi-step approval before documents go out. Configurable per org and document type.

CREATE TABLE approval_workflows (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    name VARCHAR(200),
    document_type VARCHAR(50),         -- 'contract', 'proposal'
    steps JSONB NOT NULL,
    -- [{ "order": 1, "role": "legal_reviewer", "action": "approve", "required": true },
    --  { "order": 2, "role": "executive", "action": "sign", "required": true }]
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE approval_requests (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    document_id UUID NOT NULL REFERENCES documents(id),
    workflow_id UUID NOT NULL REFERENCES approval_workflows(id),
    current_step INT DEFAULT 1,
    status VARCHAR(20) DEFAULT 'pending',
    created_by UUID NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE approval_decisions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    request_id UUID NOT NULL REFERENCES approval_requests(id),
    step_order INT NOT NULL,
    reviewer_id UUID NOT NULL,
    decision VARCHAR(20) NOT NULL,     -- 'approved', 'rejected', 'returned_for_changes'
    comment TEXT,
    decided_at TIMESTAMPTZ DEFAULT NOW()
);
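Advancing a request after each decision is a small state machine over the workflow's `steps` JSON. A sketch (shapes assumed from the schema above):

```typescript
interface Step { order: number; role: string; required: boolean; }
interface ApprovalState { currentStep: number; status: 'pending' | 'approved' | 'rejected'; }

// A rejection ends the workflow; an approval moves to the next required step,
// or marks the request approved when none remain.
function applyDecision(state: ApprovalState, steps: Step[], decision: 'approved' | 'rejected'): ApprovalState {
  if (decision === 'rejected') return { ...state, status: 'rejected' };
  const next = steps
    .filter(s => s.required && s.order > state.currentStep)
    .sort((a, b) => a.order - b.order)[0];
  return next ? { ...state, currentStep: next.order } : { ...state, status: 'approved' };
}
```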

Submodule: Signature Integration

Hooks for DocuSign / HelloSign / Adobe Sign. Platform handles orchestration, signing is external.

CREATE TABLE signature_requests (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    document_id UUID NOT NULL REFERENCES documents(id),
    provider VARCHAR(50) NOT NULL,     -- 'docusign', 'hellosign', 'adobe_sign'
    provider_envelope_id VARCHAR(200),
    status VARCHAR(20) DEFAULT 'pending',
    signers JSONB NOT NULL,
    -- [{ "email": "client@acme.com", "name": "Jane Doe", "order": 1, "signed_at": null }]
    signed_document_url VARCHAR(1000),
    sent_at TIMESTAMPTZ,
    completed_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Full Document Endpoints

// Core CRUD
GET    /api/v1/documents                        — list
POST   /api/v1/documents                        — create
GET    /api/v1/documents/:id                    — get (revision count, approval status)
PATCH  /api/v1/documents/:id                    — update content

// Templates & Clauses
GET    /api/v1/documents/templates               — list org templates
POST   /api/v1/documents/from-template/:id       — create from template
GET    /api/v1/clauses                           — clause library
POST   /api/v1/documents/:id/clauses             — insert clause

// Revisions / Redline
GET    /api/v1/documents/:id/revisions           — history
POST   /api/v1/documents/:id/revisions           — create (computes diff)
GET    /api/v1/documents/:id/redline             — redline view
POST   /api/v1/revisions/:id/accept/:opIndex     — accept change
POST   /api/v1/revisions/:id/reject/:opIndex     — reject change

// Approvals
POST   /api/v1/documents/:id/submit-for-approval
POST   /api/v1/approval-requests/:id/decide

// Signatures
POST   /api/v1/documents/:id/send-for-signature
GET    /api/v1/documents/:id/signature-status

// Versioning
GET    /api/v1/documents/:id/versions
GET    /api/v1/documents/:id/compare?v1=2&v2=3  — diff between versions

Notifications (Deep Dive)  Core

Event-driven. Other modules emit events. The notification module subscribes and routes based on user preferences. Never called directly.

Core Schema

CREATE TABLE notifications (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    user_id UUID NOT NULL,
    type VARCHAR(100) NOT NULL,        -- 'task.assigned', 'document.shared'
    channel VARCHAR(20) NOT NULL,      -- 'in_app', 'email', 'push', 'sms'
    title VARCHAR(200) NOT NULL,
    body TEXT,
    data JSONB DEFAULT '{}',           -- { entityType, entityId, url }
    read_at TIMESTAMPTZ,
    seen_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_notifications_user ON notifications(user_id, created_at DESC);
CREATE INDEX idx_notifications_unread ON notifications(user_id, read_at) WHERE read_at IS NULL;

Notification Preferences

CREATE TABLE notification_preferences (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    org_id UUID NOT NULL,
    event_type VARCHAR(100) NOT NULL,  -- 'task.assigned', 'document.*', '*'
    channels VARCHAR(20)[] NOT NULL,   -- {'in_app', 'email'} or {'in_app'} only
    muted BOOLEAN DEFAULT false,
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(user_id, org_id, event_type)
);
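With wildcard `event_type` rows, lookup needs a precedence order — an exact match beats a `document.*` prefix, which beats the global `'*'`. A sketch of that resolution (hypothetical helper, not the actual service):

```typescript
interface Pref { eventType: string; channels: string[]; muted: boolean; }

// Most-specific preference wins: 'task.assigned' > 'task.*' > '*'.
function resolvePref(prefs: Pref[], eventType: string): Pref | undefined {
  const exact = prefs.find(p => p.eventType === eventType);
  if (exact) return exact;
  const prefix = prefs.find(p =>
    p.eventType.endsWith('.*') && eventType.startsWith(p.eventType.slice(0, -1)));
  if (prefix) return prefix;
  return prefs.find(p => p.eventType === '*');
}
```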

Routing Logic

class NotificationRouter {
  async route(event: PlatformEvent, recipients: string[]) {
    for (const userId of recipients) {
      const prefs = await this.getPreferences(userId, event.type);
      if (prefs.muted) continue;
      for (const channel of prefs.channels) {
        switch (channel) {
          case 'in_app':
            await this.createInAppNotification(userId, event);
            await this.websocketGateway.push(userId, event); // real-time
            break;
          case 'email':
            await this.emailQueue.add(userId, event); // batched into digest
            break;
          case 'push':
            await this.pushService.send(userId, event); // FCM/APNS
            break;
        }
      }
    }
  }
}

Digest Aggregation

CREATE TABLE notification_digests (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    org_id UUID NOT NULL,
    frequency VARCHAR(20) DEFAULT 'instant', -- 'instant', 'hourly', 'daily', 'weekly'
    last_sent_at TIMESTAMPTZ,
    next_send_at TIMESTAMPTZ,
    UNIQUE(user_id, org_id)
);

If frequency = daily, the email worker collects unsent notifications since last_sent_at, groups by type, renders a single digest email.
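The collect-and-group step of that worker can be sketched as:

```typescript
interface Notif { type: string; title: string; createdAt: Date; }

// Collect notifications created since the last digest and bucket them by event type.
function buildDigest(notifs: Notif[], lastSentAt: Date): Map<string, Notif[]> {
  const groups = new Map<string, Notif[]>();
  for (const n of notifs) {
    if (n.createdAt <= lastSentAt) continue; // already sent in a previous digest
    const bucket = groups.get(n.type) ?? [];
    bucket.push(n);
    groups.set(n.type, bucket);
  }
  return groups;
}
```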

Real-Time (WebSocket)

@WebSocketGateway({ path: '/ws' })
export class NotificationGateway {
  @WebSocketServer() server: Server;

  // On connect: authenticate via JWT, join the user's room (`user:${userId}`)
  async push(userId: string, event: PlatformEvent) {
    this.server.to(`user:${userId}`).emit('notification', {
      type: event.type, title: event.title, body: event.body, data: event.data,
    });
  }
}

Notification Endpoints

GET    /api/v1/notifications                  — list (paginated, newest first)
GET    /api/v1/notifications/unread-count      — badge count
PATCH  /api/v1/notifications/:id/read          — mark read
PATCH  /api/v1/notifications/read-all          — mark all read
GET    /api/v1/notification-preferences        — get my prefs
PATCH  /api/v1/notification-preferences        — update per event type
WS     /ws                                     — real-time (JWT auth)

Finance & Payments (Deep Dive)  Optional

The Boundary — What Belongs vs. What Doesn't

Belongs in the Platform Module          | Too Specific — Leave to Integrations
Invoices (CRUD, line items, status)     | Tax calculation rules (per jurisdiction)
Payments received / sent ledger         | Payroll
Recurring invoices                      | Industry-specific billing logic
Expense tracking                        | Complex financial modeling / forecasting
Payment processing (Stripe integration) | Accounting standards (GAAP/IFRS)
Basic reporting (revenue, AR, P&L)      | Multi-currency hedging
Credit notes / refunds                  | ERP integration (SAP, Oracle)
QuickBooks / Xero sync hooks            | Custom chart of accounts

Invoices

CREATE TABLE invoices (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    invoice_number VARCHAR(50) NOT NULL,  -- auto: INV-2025-0001
    contact_id UUID REFERENCES contacts(id),
    deal_id UUID REFERENCES deals(id),
    status VARCHAR(20) NOT NULL DEFAULT 'draft',
    -- 'draft','sent','viewed','partially_paid','paid','overdue','void','refunded'
    issue_date DATE NOT NULL DEFAULT CURRENT_DATE,
    due_date DATE NOT NULL,
    currency VARCHAR(3) DEFAULT 'USD',
    subtotal DECIMAL(15,2) NOT NULL DEFAULT 0,
    tax_amount DECIMAL(15,2) DEFAULT 0,
    discount_amount DECIMAL(15,2) DEFAULT 0,
    total DECIMAL(15,2) NOT NULL DEFAULT 0,
    amount_paid DECIMAL(15,2) DEFAULT 0,
    amount_due DECIMAL(15,2) GENERATED ALWAYS AS (total - amount_paid) STORED,
    notes TEXT,
    terms TEXT,
    is_recurring BOOLEAN DEFAULT false,
    recurrence_rule_id UUID,
    sent_at TIMESTAMPTZ,
    paid_at TIMESTAMPTZ,
    voided_at TIMESTAMPTZ,
    deleted_at TIMESTAMPTZ,
    created_by UUID NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE UNIQUE INDEX idx_invoice_number ON invoices(org_id, invoice_number);
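One way to produce the `INV-2025-0001` format is a per-org, per-year sequence; the unique index above then catches any race on concurrent inserts. A sketch — obtaining `seq` (e.g. from a DB sequence or a `SELECT ... FOR UPDATE`) is left out:

```typescript
// Format a zero-padded invoice number; the caller supplies the next sequence value.
function formatInvoiceNumber(year: number, seq: number): string {
  return `INV-${year}-${String(seq).padStart(4, '0')}`;
}
```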

CREATE TABLE invoice_line_items (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    invoice_id UUID NOT NULL REFERENCES invoices(id) ON DELETE CASCADE,
    description VARCHAR(500) NOT NULL,
    quantity DECIMAL(10,2) NOT NULL DEFAULT 1,
    unit_price DECIMAL(15,2) NOT NULL,
    amount DECIMAL(15,2) GENERATED ALWAYS AS (quantity * unit_price) STORED,
    tax_rate DECIMAL(5,2) DEFAULT 0,
    position FLOAT,
    metadata JSONB DEFAULT '{}',
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Payments

CREATE TABLE payments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    invoice_id UUID REFERENCES invoices(id),
    contact_id UUID REFERENCES contacts(id),
    type VARCHAR(20) NOT NULL,         -- 'payment', 'refund', 'credit_note'
    direction VARCHAR(10) NOT NULL,    -- 'inbound' (received), 'outbound' (sent)
    method VARCHAR(30),                -- 'credit_card','bank_transfer','ach','check','cash'
    status VARCHAR(20) NOT NULL DEFAULT 'pending',
    -- 'pending','processing','completed','failed','refunded','cancelled'
    amount DECIMAL(15,2) NOT NULL,
    currency VARCHAR(3) DEFAULT 'USD',
    fee DECIMAL(15,2) DEFAULT 0,       -- processing fee (Stripe ~2.9%)
    net_amount DECIMAL(15,2) GENERATED ALWAYS AS (amount - fee) STORED,
    provider VARCHAR(30),              -- 'stripe', 'manual', 'bank'
    provider_payment_id VARCHAR(200),  -- Stripe charge ID: ch_xxx
    provider_receipt_url VARCHAR(1000),
    reference VARCHAR(200),            -- check number, wire reference
    description TEXT,
    metadata JSONB DEFAULT '{}',
    paid_at TIMESTAMPTZ,
    failed_at TIMESTAMPTZ,
    refunded_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_payments_invoice ON payments(org_id, invoice_id);
CREATE INDEX idx_payments_contact ON payments(org_id, contact_id);
CREATE INDEX idx_payments_date ON payments(org_id, paid_at DESC);

Stripe Integration

class PaymentService {
  async createPaymentIntent(invoiceId: string, orgId: string) {
    const invoice = await this.invoiceService.get(invoiceId, orgId);
    const stripeAccount = await this.getStripeAccount(orgId);
    const intent = await stripe.paymentIntents.create({
      amount: Math.round(invoice.amount_due * 100), // cents
      currency: invoice.currency,
      customer: await this.getOrCreateStripeCustomer(invoice.contact_id),
      metadata: { invoiceId, orgId },
    }, { stripeAccount: stripeAccount.id });
    return { clientSecret: intent.client_secret };
  }

  async handleWebhook(event: Stripe.Event) {
    switch (event.type) {
      case 'payment_intent.succeeded':
        await this.recordPayment(event.data.object);
        await this.updateInvoiceStatus(event.data.object.metadata.invoiceId);
        await this.eventBus.emit('payment.received', { ... });
        break;
      case 'payment_intent.payment_failed':
        await this.recordFailure(event.data.object);
        await this.eventBus.emit('payment.failed', { ... });
        break;
    }
  }
}

Recurring Invoices

CREATE TABLE invoice_recurrence_rules (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    template_invoice_id UUID NOT NULL REFERENCES invoices(id),
    frequency VARCHAR(20) NOT NULL,     -- 'weekly','monthly','quarterly','annually'
    next_issue_date DATE NOT NULL,
    end_date DATE,
    auto_send BOOLEAN DEFAULT false,
    enabled BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Cron job checks daily for invoices due, clones the template, sets new dates, optionally auto-sends.
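The date advance in that job can be sketched as UTC month arithmetic (note: JavaScript rolls Jan 31 + 1 month into March — production code should clamp to month end):

```typescript
type Frequency = 'weekly' | 'monthly' | 'quarterly' | 'annually';

// Compute the next issue date after a clone; all math in UTC to avoid DST drift.
function nextIssueDate(current: Date, frequency: Frequency): Date {
  const d = new Date(current);
  switch (frequency) {
    case 'weekly':    d.setUTCDate(d.getUTCDate() + 7); break;
    case 'monthly':   d.setUTCMonth(d.getUTCMonth() + 1); break;
    case 'quarterly': d.setUTCMonth(d.getUTCMonth() + 3); break;
    case 'annually':  d.setUTCFullYear(d.getUTCFullYear() + 1); break;
  }
  return d;
}
```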

Expenses

CREATE TABLE expenses (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    category VARCHAR(50),              -- 'software','travel','office','payroll','other'
    description VARCHAR(500),
    amount DECIMAL(15,2) NOT NULL,
    currency VARCHAR(3) DEFAULT 'USD',
    vendor VARCHAR(200),
    receipt_url VARCHAR(1000),         -- S3 link
    status VARCHAR(20) DEFAULT 'pending', -- 'pending','approved','rejected','reimbursed'
    expense_date DATE NOT NULL,
    submitted_by UUID NOT NULL,
    approved_by UUID,
    approved_at TIMESTAMPTZ,
    metadata JSONB DEFAULT '{}',
    deleted_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Full Finance & Payments Endpoints

// Invoices
GET    /api/v1/invoices                         — list (filter by status, contact, date)
POST   /api/v1/invoices                         — create
GET    /api/v1/invoices/:id                     — get (line items, payments, status)
POST   /api/v1/invoices/:id/send                — send to contact (triggers email)
POST   /api/v1/invoices/:id/void                — void invoice
GET    /api/v1/invoices/:id/pdf                 — generate PDF (presigned S3 URL)
POST   /api/v1/invoices/:id/duplicate           — clone

// Line Items
POST   /api/v1/invoices/:id/line-items          — add
PATCH  /api/v1/invoices/:id/line-items/:itemId  — update
DELETE /api/v1/invoices/:id/line-items/:itemId  — remove

// Payments
GET    /api/v1/payments                          — payment ledger
POST   /api/v1/payments                          — record manual payment
POST   /api/v1/invoices/:id/pay                  — Stripe payment (returns client secret)
POST   /api/v1/payments/:id/refund               — issue refund

// Recurring
GET    /api/v1/invoices/recurring                 — list schedules
POST   /api/v1/invoices/:id/make-recurring        — set up recurrence

// Expenses
GET    /api/v1/expenses                          — list
POST   /api/v1/expenses                          — submit expense
POST   /api/v1/expenses/:id/approve              — approve
POST   /api/v1/expenses/:id/reject               — reject

// Reporting
GET    /api/v1/finance/revenue?from=X&to=Y       — revenue over period
GET    /api/v1/finance/ar-aging                   — accounts receivable aging
GET    /api/v1/finance/expenses-by-category       — expense breakdown
GET    /api/v1/finance/profit-loss?from=X&to=Y    — basic P&L

// Integrations
POST   /api/v1/finance/connect/stripe             — connect Stripe (OAuth)
POST   /api/v1/finance/connect/quickbooks          — connect QuickBooks (OAuth)
POST   /api/v1/finance/sync/quickbooks             — push to QuickBooks
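The `ar-aging` report buckets unpaid balances by days overdue. A sketch with conventional 30-day buckets (the exact boundaries are an assumption — the source doesn't specify them):

```typescript
interface OpenInvoice { amountDue: number; dueDate: Date; }

// Standard aging buckets: current, 1-30, 31-60, 61-90, 90+ days past due.
function arAging(invoices: OpenInvoice[], asOf: Date) {
  const buckets = { current: 0, d1_30: 0, d31_60: 0, d61_90: 0, d90_plus: 0 };
  const DAY = 86_400_000;
  for (const inv of invoices) {
    const overdue = Math.floor((asOf.getTime() - inv.dueDate.getTime()) / DAY);
    if (overdue <= 0) buckets.current += inv.amountDue;
    else if (overdue <= 30) buckets.d1_30 += inv.amountDue;
    else if (overdue <= 60) buckets.d31_60 += inv.amountDue;
    else if (overdue <= 90) buckets.d61_90 += inv.amountDue;
    else buckets.d90_plus += inv.amountDue;
  }
  return buckets;
}
```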

User Flows & API Plans

Every step a user takes through each module, mapped to the exact API call, database write, and event emitted.

Tasks — User Flows

Flow 1: Create a Task

Step 1: User opens board, clicks "New Task"
  API:    POST /api/v1/tasks
  Body:   { title, board_id, column_id, assignee_id?, due_date?, priority? }
  DB:     INSERT INTO tasks (position = last in column via fractional index)
  Event:  'task.created' → activity feed. If assignee set: 'task.assigned' → notify

Flow 2: Assign a Task

Step 1: User selects assignee from dropdown (cross-module loader)
  API:    PATCH /api/v1/tasks/:id  { assignee_id }
  DB:     UPDATE tasks SET assignee_id = ...
  Event:  'task.assigned' → notification to assignee (in-app + email per prefs)
          → activity feed: "Alice assigned this to Bob"

Flow 3: Move Task (Drag & Drop)

Step 1: User drags task to new column/position
  API:    PATCH /api/v1/tasks/:id/move  { column_id, position: 2.5 }
  DB:     UPDATE tasks SET column_id, position (fractional indexing)
  Event:  'task.status_changed' → may trigger automations
          → activity feed: "Alice moved this to In Review"
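The `position: 2.5` above is fractional indexing — the moved task takes the midpoint between its new neighbours, so no other rows need rewriting. A sketch:

```typescript
// Midpoint between neighbours; null means the task was dropped at the start/end.
function positionBetween(before: number | null, after: number | null): number {
  if (before === null && after === null) return 1; // empty column
  if (before === null) return after! / 2;          // dropped first
  if (after === null) return before + 1;           // dropped last
  return (before + after) / 2;                     // between two tasks
}
```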

Flow 4: Subtasks & Checklists

Step 1: Add subtask     → POST /api/v1/tasks/:id/subtasks
Step 2: Add checklist   → POST /api/v1/tasks/:id/checklists
Step 3: Check off item  → PATCH /api/v1/checklists/:id/items/:itemId { done: true }
  Event: 'task.checklist_completed' (if all done) → activity feed

Flow 5: Dependencies

Step 1: Link "B depends on A"
  API:    POST /api/v1/tasks/:id/dependencies { depends_on_id, type }
  DB:     INSERT (after circular dep check — 409 if cycle detected)
Step 2: A completes → B unblocked
  Event:  'task.unblocked' → notify B's assignee
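The 409 check in Step 1 is a reachability test: adding "B depends on A" creates a cycle exactly when A already (transitively) depends on B. A DFS sketch over an in-memory adjacency map:

```typescript
// deps: taskId -> ids it depends on. Adding `task depends on dependsOn` is cyclic
// iff `dependsOn` can already reach `task` through existing edges.
function wouldCreateCycle(deps: Map<string, string[]>, task: string, dependsOn: string): boolean {
  const stack = [dependsOn];
  const seen = new Set<string>();
  while (stack.length) {
    const cur = stack.pop()!;
    if (cur === task) return true;
    if (seen.has(cur)) continue;
    seen.add(cur);
    for (const next of deps.get(cur) ?? []) stack.push(next);
  }
  return false;
}
```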

Flow 6: Time Tracking

Step 1: Start timer  → POST /api/v1/tasks/:id/timer/start (one active per user)
Step 2: Stop timer   → POST /api/v1/tasks/:id/timer/stop (computes duration)
Step 3: View report  → GET /api/v1/time-entries?task=X&from=Y&to=Z

Flow 7: Recurring & Automations

Recurring: Cron checks task_recurrence_rules → clones template task → notifies assignee
Automations: Event fires → match trigger → evaluate conditions → execute actions
  Actions: notify, set_field, reassign (each may emit cascading events)

CRM — User Flows

Flow 1: Create Contact → Qualify → Convert

Step 1: Create    → POST /api/v1/contacts { type:'lead', name, email, company, source }
Step 2: Add deal  → POST /api/v1/deals { contact_id, pipeline_id, stage_id, value }
Step 3: Move deal → PATCH /api/v1/deals/:id/move { stage_id } (probability auto-updates)
Step 4: Convert   → POST /api/v1/contacts/:id/convert { new_type:'client' }
  Event: 'contact.converted' → notification + activity feed
  Notes: All linked deals, tasks, documents carry over

Flow 2: Log Interactions

Step 1: Log call/email/meeting
  API:    POST /api/v1/contacts/:id/interactions
  Body:   { type:'call', direction:'outbound', subject, body, duration_seconds, deal_id? }
Step 2: View timeline → GET /api/v1/contacts/:id/timeline
  Returns: Merged activity feed + interactions + comments (chronological)

Flow 3: Pipeline Forecast

API:    GET /api/v1/deals/forecast
Returns: { total_pipeline_value, weighted_value: SUM(value * probability),
           deals_by_stage, expected_revenue_this_month }

Contracts — User Flows

Flow 1: Create → Edit → Review → Send

Step 1: Create from template
  API:    POST /api/v1/documents/from-template/:id { title, type:'contract', contact_id }
  DB:     Clone content + default clauses. Status = 'draft'

Step 2: Edit (structured fields)
  API:    POST /api/v1/documents/:id/revisions
  Body:   { content, fieldChanges: [{ field_name, old_value, new_value }] }
  DB:     Rev 0 auto-created (original baseline). New revision + change rows.

Step 3: Edit (free-text TipTap)
  Typing → <span data-inserted> (green)
  Deleting → <span data-deleted> (red strikethrough)
  Save → POST /api/v1/documents/:id/revisions { content with inline marks }

Step 4: Submit for approval
  API:    POST /api/v1/documents/:id/submit-for-approval
  DB:     Lookup workflow → create approval_request → notify first reviewer

Step 5: Send to client
  API:    POST /api/v1/documents/:id/send { recipient_email }
  DB:     Status → 'sent'. Generate review token.
  Event:  Email with review link → 'document.sent'

Flow 2: Client & Team Revisions

Client reviews via token link:
  API:    GET /api/v1/documents/review/:token (loads contract + revisions)
  API:    POST /api/v1/documents/review/:token/suggest { section, original_text, suggested_text }
  DB:     INSERT edit_suggestion (status = PENDING)
  Event:  'document.suggestion_received' → notify owner

Team reviews suggestions (side-by-side: original vs suggested):
  Approve → PATCH /api/v1/documents/:id/suggestions/:suggestionId { status:'APPROVED' }
            Creates new revision with change applied
  Reject  → PATCH { status:'REJECTED', review_notes }

Flow 3: Sign → Deliverables → Track

Step 1: Send for signature
  API:    POST /api/v1/documents/:id/send-for-signature
  Body:   { provider:'docusign', signers:[{ email, name, order }] }
  System: DocuSign API → sends signing emails

Step 2: Provider webhook on completion
  API:    POST /api/v1/webhooks/docusign
  DB:     Status → 'signed'. Store signed PDF in S3.
  Event:  'document.signed' → notify all parties

Step 3: Capture deliverables
  API:    POST /api/v1/documents/:id/deliverables
  Body:   { title, quantity, frequency, platform, due_date }
  DB:     INSERT INTO contract_deliverables

Step 4: Track deliverables
  API:    GET /api/v1/documents/:id/deliverables (progress, is_on_track)
  API:    PATCH /api/v1/deliverables/:id { current_count, status }
  Cron:   Daily check → if behind pace, set is_on_track = false → notify owner
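The daily pace check can be sketched with a linear model — expected progress proportional to elapsed contract time (the pacing rule is an assumption; the source doesn't define it):

```typescript
// Linear pace: at the halfway point you should have half the target quantity done.
function isOnTrack(currentCount: number, targetQuantity: number, start: Date, due: Date, now: Date): boolean {
  const elapsed = Math.max(0, now.getTime() - start.getTime());
  const total = Math.max(1, due.getTime() - start.getTime());
  const expected = targetQuantity * Math.min(1, elapsed / total);
  return currentCount >= expected;
}
```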

Notifications — User Flows

Flow: Event → Route → Deliver

1. Any module emits event (e.g. 'task.assigned')
2. NotificationRouter determines recipients (assignee + watchers + creator)
3. Check preferences: SELECT channels, muted FROM notification_preferences
4. Create notification per channel (in_app, email, push)
5. Deliver:
   - in_app: WebSocket push to user's room (instant)
   - email: BullMQ queue → respects digest frequency (instant/hourly/daily)
   - push: FCM/APNS (instant)

Reading:
  GET  /api/v1/notifications (paginated)
  GET  /api/v1/notifications/unread-count (badge)
  PATCH /api/v1/notifications/:id/read
  PATCH /api/v1/notifications/read-all

Preferences:
  GET  /api/v1/notification-preferences
  PATCH /api/v1/notification-preferences { event_type, channels, muted }

Finance & Payments — User Flows

Flow 1: Invoice → Send → Get Paid

Step 1: Create invoice
  API:    POST /api/v1/invoices { contact_id, due_date, currency }
  DB:     Auto-generate invoice_number (INV-2025-0001), status = 'draft'

Step 2: Add line items
  API:    POST /api/v1/invoices/:id/line-items { description, quantity, unit_price }
  DB:     amount = quantity * unit_price (computed). Recalculate invoice totals.

Step 3: Send
  API:    POST /api/v1/invoices/:id/send
  System: Email with PDF + payment link → status = 'sent'

Step 4: Stripe payment
  API:    POST /api/v1/invoices/:id/pay → returns { clientSecret }
  Stripe: paymentIntents.create → client pays → webhook fires
  API:    POST /api/v1/webhooks/stripe
  DB:     INSERT payment (completed). UPDATE invoice amount_paid.
          If fully paid: status = 'paid'. Else: 'partially_paid'
  Event:  'payment.received' → notify finance team

Step 5: Manual payment
  API:    POST /api/v1/payments { invoice_id, amount, method:'check', reference }

Step 6: Refund
  API:    POST /api/v1/payments/:id/refund { amount? }
  System: If Stripe → stripe.refunds.create(). Recalculate invoice.
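The status transition in Step 4 (and the recalculation after a refund in Step 6) is a pure function of the invoice totals. A sketch — the status names come from the invoices schema; the refund-back-to-'sent' fallback is an assumption:

```typescript
interface InvoiceTotals { total: number; amountPaid: number; }

// Apply a completed payment (negative amount = refund) and derive the new status.
function applyPayment(inv: InvoiceTotals, amount: number) {
  const amountPaid = inv.amountPaid + amount;
  const status = amountPaid >= inv.total ? 'paid' : amountPaid > 0 ? 'partially_paid' : 'sent';
  return { total: inv.total, amountPaid, amountDue: inv.total - amountPaid, status };
}
```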

Flow 2: Recurring & Expenses

Recurring:
  POST /api/v1/invoices/:id/make-recurring { frequency:'monthly', auto_send }
  Cron: Daily → clone template → new invoice_number → optionally auto-send

Expenses:
  POST /api/v1/expenses { category, amount, vendor, receipt_url }
  POST /api/v1/expenses/:id/approve (or /reject)
  Events: 'expense.submitted' → notify approver. 'expense.approved' → notify submitter.

Reporting:
  GET /api/v1/finance/revenue?from=X&to=Y
  GET /api/v1/finance/ar-aging
  GET /api/v1/finance/profit-loss?from=X&to=Y
  GET /api/v1/finance/expenses-by-category

VII. Deployment — Cell Provisioning

One command provisions a full product cell in a target AWS account:

npx cdk deploy ProductCell \
  --context productName=acme-crm \
  --context modules=tasks,contacts,documents,finance \
  --context environment=prod \
  --profile product-a-prod-account

What Gets Provisioned

  • VPC (3 AZs, private subnets, NAT Gateways)
  • Aurora Serverless v2 cluster (writer + reader)
  • ElastiCache Redis cluster
  • ECS Fargate service (API + Worker)
  • S3 buckets (files, backups)
  • CloudFront distribution + WAF rules
  • CloudWatch dashboards + alarms
  • Cross-account roles (CI/CD, logging)
  • DNS records in Network Hub account

Each product is completely independent at the infrastructure level but shares the same codebase. Fix a bug in the tasks module → it deploys to all products that use it.

VIII. Deployment Strategy — Platform by Platform

No single platform is optimal for every workload. Use the right tool for each layer.

Backend API Servers

Option          | Best For                                                                 | Trade-off
ECS Fargate     | Steady-state workloads, full VPC control, WebSockets, complex networking | More config, you manage scaling policies
AWS App Runner  | Simpler APIs, zero-config auto-scaling, small teams                      | Less VPC control, no WebSocket support
Lambda + API GW | Event-driven, spiky traffic, low-traffic modules                         | Cold starts, 15-min timeout, harder to debug

Recommendation: ECS Fargate for the core API — needs VPC access to Aurora/Redis, persistent connections for WebSockets, and connection pooling. Lambda for event-driven workers. App Runner is too limited for a platform backend.

Background Workers / Jobs

Option                         | Best For
ECS Fargate (separate service) | Long-running job processors (BullMQ), always-on
Lambda                         | Short-lived event handlers (S3 triggers, SQS consumers, cron)
Step Functions + Lambda        | Multi-step workflows (ETL, document processing)

Frontend Applications

Option              | Best For                                          | Trade-off
Vercel              | Next.js apps, best DX, instant previews, edge SSR | Vendor lock-in, cost at scale, data leaves AWS
Cloudflare Pages    | Static sites, simple SPAs, global edge, cheap     | Limited SSR, no tight AWS integration
AWS Amplify Hosting | Next.js/React needing tight AWS integration       | Slower builds, weaker DX vs. Vercel
CloudFront + S3     | Pure SPAs (React/Vue), full control, cheapest     | No SSR, manual cache invalidation

Frontend Recommendation

  • Customer-facing products → Vercel (Next.js with edge SSR, preview deployments per PR)
  • Internal tools / admin panels → CloudFront + S3 (static React SPA, cheapest, stays in AWS)
  • Architecture docs / marketing → Cloudflare Pages (static HTML, free tier)

The Recommended Mix

EDGE / CDN
  Customer Frontends   -->  Vercel (Next.js, edge SSR)
  Internal Tools       -->  CloudFront + S3 (React SPA)
  Architecture Docs    -->  Cloudflare Pages (static HTML)
                |
                | HTTPS (API calls)
                v
AWS PRODUCT ACCOUNT
  API Gateway   -->  ECS Fargate (Core API, NestJS)
  SQS/SNS      -->  ECS Fargate (BullMQ Worker)
  S3 Events    -->  Lambda (file processing)
  Scheduled    -->  Lambda (cron jobs, cleanup)
  ETL          -->  Step Functions + Lambda
                |
  Aurora PostgreSQL  |  Redis  |  S3

Why NOT All-Serverless for Backend

Concern         | Vercel / Lambda Serverless               | ECS Fargate
Timeout         | 10-300s depending on plan                | Unlimited
Cold starts     | 100-500ms per invocation                 | None (always running)
DB connections  | New connection per invocation (kills DB) | Connection pool (Prisma)
WebSockets      | Not supported                            | Full support
Background jobs | Not supported                            | BullMQ workers
VPC access      | Not possible (Vercel) / complex (Lambda) | Native (same VPC as Aurora/Redis)
Cost at scale   | Unpredictable (per-invocation)           | Predictable (per-container)

How Vercel Works with This Architecture

Vercel is the frontend deployment platform only. It does NOT run business logic.

User's Browser
     |
     v
  Vercel (Frontend ONLY)
  Next.js App
     |  fetch('https://api.yourproduct.com/v1/tasks')
     |
     v  HTTPS
  AWS (Your Backend)
  CloudFront --> API Gateway --> ECS (NestJS on Fargate)
     |
  Aurora  |  Redis  |  S3
  • Server Components / SSR: Vercel renders HTML by calling your ECS API, sends finished page to browser
  • Client-side fetching: Browser calls your ECS API directly — Vercel not involved
  • Next.js API routes: Only for thin proxies (OAuth callbacks, cookie handling) — never business logic
  • WebSockets: Browser connects directly to ECS — Vercel can't handle these

Per-Environment Strategy

Environment | Backend                                         | Frontend
Dev         | ECS Fargate (min capacity, Aurora at 0.5 ACU)   | Vercel preview deployments (auto per PR)
Staging     | ECS Fargate (mirrors prod, smaller scale)       | Vercel staging branch
Prod        | ECS Fargate (auto-scaling 2-10 tasks, multi-AZ) | Vercel production

IX. Observability — SRE Approach

SLO-Driven, Not Alert-Driven

SLO               | Target        | Budget
API Availability  | 99.9%         | 43 min downtime/month
API Latency (p99) | < 500ms       |
API Latency (p50) | < 100ms       |
Data Durability   | 99.999999999% | Aurora handles this

Error budget consumed too fast → freeze deployments. Error budget healthy → deploy freely.

Three Pillars

Pillar  | Tool                    | Purpose
Logs    | CloudWatch → OpenSearch | Structured JSON, cross-account
Metrics | CloudWatch + EMF        | Business + infra metrics
Traces  | X-Ray / OpenTelemetry   | End-to-end request tracing

Every request gets a correlation ID flowing through logs, metrics, and traces.

X. Tech Stack Summary

Concern       | Choice
Backend       | NestJS (TypeScript) — module system built-in
ORM           | Prisma — type-safe queries, auto-migrations, built-in connection pooling
Database      | Aurora PostgreSQL Serverless v2
Cache / Queue | Redis (ElastiCache) + BullMQ
Auth          | AWS Cognito or Keycloak
Permissions   | SpiceDB / OpenFGA (Zanzibar)
File Storage  | AWS S3 + presigned URLs
Search        | PG full-text first, OpenSearch later
Events        | AWS SNS/SQS or Redis Streams
Deployment    | ECS Fargate + CDK for IaC
Frontend      | Next.js
Monorepo      | Turborepo or Nx
CI/CD         | GitHub Actions + CDK Pipelines
Observability | CloudWatch + X-Ray + OpenSearch

XI. Build Order

Module Dependency Map

Core (always loaded):
  ├── Auth
  ├── Users
  ├── Organizations
  ├── Teams
  ├── Permissions (Zanzibar)
  ├── Entity Links (cross-module glue)
  ├── Notifications (event-driven)
  ├── Files
  ├── Audit Log
  └── Search

Optional (plug in per product):
  ├── Tasks        → depends on Core
  ├── Contacts     → depends on Core
  ├── Documents    → depends on Core, Files
  ├── Notes        → depends on Core
  ├── Finance      → depends on Core, Contacts
  └── [Custom]     → depends on Core

Recommended Sequence

  1. Monorepo scaffold — Turborepo + NestJS + shared types package
  2. Core kernel — Auth, Users, Orgs, Teams, Permissions, Entity Links
  3. Event bus + notifications
  4. Tasks — first optional module, proves the architecture
  5. Contacts/Leads, Documents, Notes
  6. Finance — most domain-specific, last
  7. CDK infrastructure — multi-account provisioning
  8. CI/CD pipeline — automated deploy across accounts

XII. Module Update & Versioning Strategy

How to ship changes to shared infrastructure without breaking products already running on it.

Scenario 1 — Updating an Existing Core Module

Say you update Auth to add MFA, change a JWT claim, or refactor the permission middleware. Every product using that module gets the change. If it breaks, everything breaks.

Solution: Semantic Versioning + Changesets

Version Bump          | When                             | Rollout
patch (1.2.0 → 1.2.1) | Bug fix                          | Auto-deploy to all products
minor (1.2.0 → 1.3.0) | New backwards-compatible feature | Auto-deploy, products adopt when ready
major (1.x → 2.0.0)   | Breaking change                  | Each product opts in on its own schedule

Breaking changes live in a parallel package until products migrate:

packages/core/auth/     ← v1.2.0  (Product A still here)
packages/core/auth-v2/  ← v2.0.0  (Product B migrated)
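The rollout policy in the version-bump table reduces to a one-line decision; a hypothetical helper:

```typescript
type Rollout = 'auto-deploy' | 'opt-in';

// Sketch mirroring the table: major bumps are opt-in, everything else auto-deploys.
function rolloutFor(oldVersion: string, newVersion: string): Rollout {
  const oldMajor = Number(oldVersion.split('.')[0]);
  const newMajor = Number(newVersion.split('.')[0]);
  return newMajor > oldMajor ? 'opt-in' : 'auto-deploy';
}
```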

Use Changesets to manage this in the monorepo:

# Developer describes what changed
npx changeset add
# → "auth: added MFA, new required field mfa_enabled on users table"
# → type: minor

# CI bumps versions and generates changelogs
npx changeset version

In CI, every PR to a shared module runs tests against every product that uses it:

# .github/workflows/test.yml (sketch — product names are placeholders)
test-all-products:
  runs-on: ubuntu-latest
  strategy:
    fail-fast: false
    matrix:
      product: [product-a, product-b, product-c]
  steps:
    - uses: actions/checkout@v4
    - run: npx turbo run test --filter=${{ matrix.product }}
  # If any matrix job fails → PR is blocked

Scenario 2 — Adding a New Module to an Existing Product

The easy case. Just add it to platform.config.ts:

// Before
modules: [new TasksModule(), new ContactsModule()]

// After — Finance added
modules: [new TasksModule(), new ContactsModule(), new FinanceModule()]

On next cdk deploy: Finance migrations run automatically, new routes register, event subscriptions set up. Nothing else changes — existing data is untouched.

Scenario 3 — Database Schema Changes

The hardest problem. Change a core table and every product on that schema is affected.

Rule 1: Migrations are forwards-only and backwards-compatible

Never rename or delete a column in a single migration. Always do it in phases:

-- Phase 1 (deploy now): Add new column alongside old
ALTER TABLE users ADD COLUMN display_name VARCHAR(200);
-- Application writes to BOTH columns during transition

-- Phase 2 (next deploy): Backfill
UPDATE users SET display_name = name WHERE display_name IS NULL;

-- Phase 3 (after confirming): Drop old column
ALTER TABLE users DROP COLUMN name;

Rule 2: Migrations run automatically on deploy

async onDatabaseSetup(migrator: Migrator) {
  await migrator.runModuleMigrations('auth');
  // Idempotent — safe to run multiple times
}

Rule 3: Separate migration deploys from code deploys

  • Day 1 — Deploy migration (add new column, keep old)
  • Day 2 — Deploy code that uses new column
  • Day 3 — Deploy migration to drop old column after confirming
You can roll back the code without rolling back the DB — the DB is always compatible with the previous code version.

Scenario 4 — API Contract Changes

Backwards-compatible (safe anytime): adding endpoints, adding optional fields, adding optional params. Frontends that don't know about new fields simply ignore them.

Breaking changes — use API versioning:

/api/v1/tasks    ← old behavior, still works
/api/v2/tasks    ← new behavior

Both run simultaneously. Products on v1 keep working. v1 is deprecated with a sunset date announced in advance.

Overall Governance Model

PLATFORM REPO (shared modules)

Core changes go through:
  1. PR with changeset label (patch / minor / major)
  2. Required review from platform team
  3. Automated tests across ALL products in CI
  4. Staged rollout: Dev → Staging → Prod

Breaking changes (major):
  5. Migration guide published
  6. Each product team opts in on their own schedule
  7. Old version supported for defined deprecation window

Decision Matrix

Change Type                   | Strategy
Bug fix in core module        | Patch version, auto-deploy everywhere
New optional feature in core  | Minor version, auto-deploy, products adopt when ready
Breaking change to core       | Major version, products opt in independently
New module added to a product | Additive — deploy freely
DB schema change (additive)   | Run migration, deploy code
DB schema change (breaking)   | 3-phase migration (add → backfill → drop)
API contract change           | Version the endpoint (/v1 → /v2), both run simultaneously

Key principle: The platform and each product deploy independently. A core update never forces an emergency migration across all products simultaneously.

XIII. Baseline Infrastructure Checklist

Everything below lives in packages/core/. It ships with the platform. Product teams never build these — they just use them.

Tier 1 — Every Project, No Exceptions

Non-negotiable. If you don't build them into the platform, every product team will reinvent them poorly.

1. Settings & Configuration Engine

User preferences, org-level config, and runtime feature flags — all in one system.

CREATE TABLE settings (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    scope VARCHAR(20) NOT NULL,       -- 'org', 'user', 'module'
    scope_id UUID NOT NULL,           -- org_id, user_id, or module instance id
    namespace VARCHAR(100) NOT NULL,  -- 'notifications', 'appearance', 'billing'
    key VARCHAR(200) NOT NULL,
    value JSONB NOT NULL,
    updated_by UUID,
    updated_at TIMESTAMPTZ DEFAULT NOW(),

    UNIQUE(org_id, scope, scope_id, namespace, key)
);

CREATE INDEX idx_settings_lookup ON settings(org_id, scope, scope_id, namespace);

class SettingsService {
  async get<T>(orgId: string, scope: Scope, key: string, defaultValue: T): Promise<T>;
  async set(orgId: string, scope: Scope, key: string, value: unknown): Promise<void>;
  async getBulk(orgId: string, scope: Scope, namespace: string): Promise<Record<string, unknown>>;
}

// Usage:
const timezone = await settings.get(orgId, { type: 'user', id: userId }, 'timezone', 'UTC');
const orgBranding = await settings.getBulk(orgId, { type: 'org', id: orgId }, 'branding');

2. Comments & Activity Feed

Universal — attaches to any entity across any module. Every task, document, contact gets comments and an activity timeline.

CREATE TABLE comments (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    entity_type VARCHAR(50) NOT NULL,  -- 'task', 'document', 'contact'
    entity_id UUID NOT NULL,
    author_id UUID NOT NULL,
    body TEXT NOT NULL,
    parent_id UUID REFERENCES comments(id),  -- threaded replies
    edited_at TIMESTAMPTZ,
    deleted_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_comments_entity ON comments(org_id, entity_type, entity_id, created_at);

CREATE TABLE activity_feed (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    entity_type VARCHAR(50) NOT NULL,
    entity_id UUID NOT NULL,
    actor_id UUID NOT NULL,
    action VARCHAR(100) NOT NULL,      -- 'status_changed', 'assigned', 'commented'
    changes JSONB DEFAULT '{}',        -- { "status": { "from": "draft", "to": "review" } }
    metadata JSONB DEFAULT '{}',
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE INDEX idx_activity_entity ON activity_feed(org_id, entity_type, entity_id, created_at DESC);
CREATE INDEX idx_activity_actor ON activity_feed(org_id, actor_id, created_at DESC);

GET /api/v1/tasks/:id/comments           — comments on a task
POST /api/v1/tasks/:id/comments          — add a comment
GET /api/v1/documents/:id/activity       — activity timeline for a doc
GET /api/v1/activity?actor=user_123      — everything a user did (cross-entity)

3. Tags & Labels

Universal tagging system — any entity, any module. Color-coded, filterable.

CREATE TABLE tags (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    name VARCHAR(100) NOT NULL,
    color VARCHAR(7),                  -- hex: #FF5733
    category VARCHAR(50),              -- 'priority', 'department', 'custom'
    UNIQUE(org_id, name)
);

CREATE TABLE entity_tags (
    entity_type VARCHAR(50) NOT NULL,
    entity_id UUID NOT NULL,
    tag_id UUID NOT NULL REFERENCES tags(id) ON DELETE CASCADE,
    org_id UUID NOT NULL,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    PRIMARY KEY(entity_type, entity_id, tag_id)
);

CREATE INDEX idx_entity_tags_lookup ON entity_tags(org_id, entity_type, entity_id);
CREATE INDEX idx_entity_tags_by_tag ON entity_tags(org_id, tag_id);

GET    /api/v1/tags                              — list org tags
POST   /api/v1/tags                              — create tag
POST   /api/v1/tasks/:id/tags                    — tag a task
DELETE /api/v1/tasks/:id/tags/:tagId             — untag
GET    /api/v1/tasks?tags=urgent,frontend         — filter by tags

4. Soft Deletes, Trash & Archive

Nothing truly deletes. Users get a trash can. Compliance gets retention. Every user-facing table includes deleted_at and archived_at.

class SoftDeleteMiddleware {
  // Automatically appends WHERE deleted_at IS NULL to all queries
  // Unless explicitly requested: findMany({ withDeleted: true })
}

class TrashService {
  async softDelete(entityType: string, entityId: string, orgId: string): Promise<void>;
  async restore(entityType: string, entityId: string, orgId: string): Promise<void>;
  async listTrash(orgId: string, entityType?: string): Promise<TrashedItem[]>;
  async permanentDelete(entityType: string, entityId: string, orgId: string): Promise<void>;
  // Scheduled: auto-purge items in trash > 30 days
}

DELETE /api/v1/tasks/:id                — soft delete (moves to trash)
POST   /api/v1/trash/:id/restore       — restore from trash
GET    /api/v1/trash?type=task          — list trashed items
DELETE /api/v1/trash/:id/permanent      — hard delete (admin only)

5. Standardized Pagination, Filtering & Sorting

Every list endpoint works identically. Frontends build one data-table component that works with every module.

interface ListQuery {
  page?: number;           // default 1
  limit?: number;          // default 25, max 100
  sort?: string;           // 'created_at:desc' or 'title:asc'
  search?: string;         // full-text search across searchable fields
  filter?: Record<string, string | string[]>;
  // filter[status]=active
  // filter[assignee]=user_123,user_456
  // filter[created_at.gte]=2025-01-01
}

interface PaginatedResponse<T> {
  data: T[];
  meta: {
    page: number;
    limit: number;
    total: number;
    totalPages: number;
    hasNext: boolean;
    hasPrev: boolean;
  };
}

// Base repository — every module gets this for free
class BaseRepository<T> {
  async findMany(orgId: string, query: ListQuery): Promise<PaginatedResponse<T>> {
    // Builds Prisma query from standardized params
    // Applies RLS automatically via orgId
    // Excludes soft-deleted by default
  }
}

// Every endpoint works the same:
GET /api/v1/tasks?page=2&limit=10&sort=due_date:asc&filter[status]=in_progress
GET /api/v1/contacts?search=acme&filter[type]=lead&sort=created_at:desc
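The meta envelope can be derived from page, limit, and total with one helper (illustrative sketch):

```typescript
interface PageMeta {
  page: number; limit: number; total: number;
  totalPages: number; hasNext: boolean; hasPrev: boolean;
}

// Sketch: derive the meta block from query params and the total row count.
function paginationMeta(page: number, limit: number, total: number): PageMeta {
  const totalPages = Math.max(1, Math.ceil(total / limit));
  return { page, limit, total, totalPages, hasNext: page < totalPages, hasPrev: page > 1 };
}
```

Because every module reuses this, the frontend's single data-table component never special-cases pagination.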

6. Standardized Error Responses & Exception Handling

Every error — validation, auth, business logic, server — returns the same shape. Frontends build one error handler.

interface ErrorResponse {
  error: {
    code: string;           // 'VALIDATION_ERROR', 'NOT_FOUND', 'FORBIDDEN'
    message: string;        // 'Task not found'
    details?: unknown[];    // field-level errors for validation
    correlationId: string;  // ties to logs and traces
    timestamp: string;
  };
}

// 400 — Validation
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Request validation failed",
    "details": [
      { "field": "email", "message": "Must be a valid email address" },
      { "field": "name", "message": "Required" }
    ],
    "correlationId": "req_abc123",
    "timestamp": "2025-03-30T12:00:00Z"
  }
}

// NestJS global exception filter
@Catch()
export class GlobalExceptionFilter implements ExceptionFilter {
  catch(exception: unknown, host: ArgumentsHost) {
    const correlationId = host.switchToHttp().getRequest().correlationId;
    // Maps any exception to the standard envelope
    // Logs full stack trace internally
    // Returns sanitized error to client (no stack traces)
    // Sets correct HTTP status code
    // Includes rate-limit headers: X-RateLimit-Remaining, Retry-After
  }
}

7. Email & Transactional Messaging

Every product sends email. Template engine + queue + provider abstraction (SES today, SendGrid tomorrow).

CREATE TABLE email_templates (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID,                       -- NULL = system-level template
    slug VARCHAR(100) NOT NULL,        -- 'welcome', 'password_reset', 'invoice_sent'
    subject VARCHAR(500) NOT NULL,     -- 'Welcome to {{org_name}}'
    body_html TEXT NOT NULL,           -- Handlebars / MJML template
    body_text TEXT,
    variables JSONB DEFAULT '[]',
    updated_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(org_id, slug)
);

CREATE TABLE email_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    to_address VARCHAR(255) NOT NULL,
    template_slug VARCHAR(100),
    subject VARCHAR(500),
    status VARCHAR(20) NOT NULL,       -- 'queued', 'sent', 'delivered', 'bounced', 'failed'
    provider VARCHAR(50),              -- 'ses', 'sendgrid'
    provider_message_id VARCHAR(200),
    error TEXT,
    sent_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

class EmailService {
  async send(params: {
    to: string | string[];
    template: string;
    variables: Record<string, unknown>;
    orgId: string;
  }): Promise<void> {
    // 1. Load template (org-specific override → system default)
    // 2. Render with Handlebars
    // 3. Queue via BullMQ (never block the API request)
    // 4. Worker picks up, sends via SES/SendGrid
    // 5. Logs result to email_log
  }
}

// Usage from any module:
await email.send({
  to: user.email,
  template: 'task_assigned',
  variables: { taskTitle: task.title, assignerName: actor.name },
  orgId,
});

8. Invitation & Onboarding System

Invite users to orgs/teams. Magic links. First-time setup hooks.

CREATE TABLE invitations (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    email VARCHAR(255) NOT NULL,
    role VARCHAR(50) NOT NULL DEFAULT 'member',
    team_ids UUID[] DEFAULT '{}',     -- auto-join these teams on accept
    token VARCHAR(200) NOT NULL UNIQUE,
    invited_by UUID NOT NULL,
    status VARCHAR(20) DEFAULT 'pending',  -- 'pending', 'accepted', 'expired', 'revoked'
    expires_at TIMESTAMPTZ NOT NULL,
    accepted_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

POST   /api/v1/invitations               — invite user (sends email with magic link)
GET    /api/v1/invitations/accept?token=X — accept invite (creates user if needed, joins org)
GET    /api/v1/invitations                — list pending invitations for org
DELETE /api/v1/invitations/:id            — revoke invitation

// Onboarding hooks — modules register first-time setup steps:
interface OnboardingStep {
  moduleId: string;
  step: string;          // 'create_first_team', 'import_contacts', 'set_timezone'
  required: boolean;
  order: number;
}
// Core tracks completion per user
// Frontend renders a setup wizard from the registered steps
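Resolving which step the wizard shows next is a small pure function over the registered steps (hypothetical helper; completion is assumed to be tracked as a set of step names):

```typescript
interface OnboardingStep {
  moduleId: string;
  step: string;
  required: boolean;
  order: number;
}

// Sketch: lowest-ordered required step the user has not completed, or undefined when done.
function nextStep(steps: OnboardingStep[], completed: Set<string>): OnboardingStep | undefined {
  return steps
    .filter(s => s.required && !completed.has(s.step))
    .sort((a, b) => a.order - b.order)[0];
}
```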

9. Feature Flags (Runtime)

Toggle features per org, per user, or by percentage — without redeploying.

CREATE TABLE feature_flags (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    key VARCHAR(100) NOT NULL UNIQUE,       -- 'kanban_view', 'ai_summary', 'new_billing'
    description TEXT,
    enabled BOOLEAN DEFAULT false,          -- global default
    rules JSONB DEFAULT '[]',
    -- [
    --   { "type": "org", "ids": ["org_123"], "enabled": true },
    --   { "type": "user", "ids": ["user_456"], "enabled": true },
    --   { "type": "percentage", "value": 25, "enabled": true }
    -- ]
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

class FeatureFlagService {
  async isEnabled(key: string, context: { orgId: string; userId: string }): Promise<boolean> {
    // 1. Check cache (Redis, 30s TTL)
    // 2. Evaluate rules: user match → org match → percentage → global default
  }
}

// Usage:
if (await features.isEnabled('kanban_view', { orgId, userId })) {
  // return kanban-specific data
}

GET    /api/v1/admin/feature-flags          — list all flags (platform admin)
PATCH  /api/v1/admin/feature-flags/:key     — update flag rules
GET    /api/v1/features                     — resolved flags for current user
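Percentage rules need deterministic bucketing so a user doesn't flip in and out of a rollout between requests. One common sketch hashes flag key plus user id:

```typescript
import { createHash } from 'node:crypto';

// Deterministic bucketing: the same (flag, user) pair always lands in the
// same bucket 0-99, so a 25% rollout is stable across requests and servers.
function inPercentage(flagKey: string, userId: string, percentage: number): boolean {
  const hash = createHash('sha256').update(`${flagKey}:${userId}`).digest();
  const bucket = hash.readUInt32BE(0) % 100;
  return bucket < percentage;
}
```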

10. Health Checks & Readiness Probes

ECS, ALB, and deploy pipelines all need these. Liveness = "is the process alive?" Readiness = "can it serve traffic?"

@Controller('health')
export class HealthController {
  @Get('/live')
  async liveness() {
    return { status: 'ok', uptime: process.uptime() };
  }

  @Get('/ready')
  async readiness() {
    const checks = await Promise.allSettled([
      this.db.query('SELECT 1'),         // Aurora writer reachable
      this.redis.ping(),                  // Redis reachable
    ]);

    const healthy = checks.every(c => c.status === 'fulfilled');
    return {
      status: healthy ? 'ready' : 'degraded',
      checks: {
        database: checks[0].status,
        redis: checks[1].status,
      },
    };
  }
}

11. Request Context Middleware

Every request gets a context object that propagates everywhere — logs, audit, permissions, downstream calls. No prop-drilling.

interface RequestContext {
  correlationId: string;  // generated or forwarded from X-Correlation-Id header
  userId: string;
  orgId: string;
  roles: string[];
  ip: string;
  userAgent: string;
  timestamp: Date;
}

@Injectable()
export class RequestContextMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: NextFunction) {
    const ctx: RequestContext = {
      correlationId: req.headers['x-correlation-id'] || randomUUID(),
      userId: req.user?.id,
      orgId: req.user?.orgId,
      roles: req.user?.roles || [],
      ip: req.ip,
      userAgent: req.headers['user-agent'],
      timestamp: new Date(),
    };
    // Available everywhere via AsyncLocalStorage
    requestContextStorage.run(ctx, () => next());
  }
}

// Any service, anywhere:
const ctx = RequestContext.current();
logger.info('Task created', { taskId, correlationId: ctx.correlationId });

Tier 2 — Most Projects Would Need

Not universal, but you'll reach for these in 80%+ of products. Build the abstractions now; flesh out when needed.

12. Webhooks (Outbound)

Let external systems subscribe to platform events. Retry with exponential backoff, HMAC signing, delivery logs.

CREATE TABLE webhook_subscriptions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    url VARCHAR(1000) NOT NULL,
    events VARCHAR(100)[] NOT NULL,      -- {'task.created', 'contact.updated'}
    secret VARCHAR(200) NOT NULL,        -- HMAC signing key
    active BOOLEAN DEFAULT true,
    failure_count INT DEFAULT 0,         -- auto-disable after N consecutive failures
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE webhook_deliveries (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    subscription_id UUID REFERENCES webhook_subscriptions(id),
    event VARCHAR(100) NOT NULL,
    payload JSONB NOT NULL,
    response_status INT,
    response_body TEXT,
    attempts INT DEFAULT 0,
    next_retry_at TIMESTAMPTZ,
    status VARCHAR(20) DEFAULT 'pending',  -- 'pending', 'delivered', 'failed'
    created_at TIMESTAMPTZ DEFAULT NOW()
);

Delivery: event emitted → BullMQ job → POST to URL with HMAC signature → retry with exponential backoff (1min, 5min, 30min, 2hr) → disable subscription after 10 consecutive failures.
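The HMAC signing and verification steps can be sketched with Node's crypto module. This is a sketch; how the signature is transported (typically a request header) is your convention to pick.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Sender: sign the raw JSON payload with the subscription's secret.
function signPayload(secret: string, payload: string): string {
  return createHmac('sha256', secret).update(payload).digest('hex');
}

// Receiver: recompute and compare in constant time.
function verifySignature(secret: string, payload: string, signature: string): boolean {
  const expected = Buffer.from(signPayload(secret, payload), 'hex');
  const given = Buffer.from(signature, 'hex');
  return expected.length === given.length && timingSafeEqual(expected, given);
}

// Retry schedule from the text, in milliseconds: 1min, 5min, 30min, 2hr.
const RETRY_DELAYS_MS = [60_000, 300_000, 1_800_000, 7_200_000];
```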

13. Import / Export

Standardized pipeline for bulk data. CSV upload, Excel/JSON export.

class ImportExportService {
  async importCSV(params: {
    file: Buffer;
    entityType: string;        // 'contacts', 'tasks'
    mapping: FieldMapping[];   // maps CSV columns to entity fields
    orgId: string;
    actorId: string;
  }): Promise<ImportJob> {
    // 1. Parse CSV
    // 2. Validate each row against entity schema
    // 3. Return preview (first 10 rows + validation errors)
    // 4. On confirm: queue BullMQ job for bulk insert
    // 5. Emit 'import.completed' event with stats
  }

  async export(params: {
    entityType: string;
    query: ListQuery;          // reuse the standard filtering
    format: 'csv' | 'xlsx' | 'json';
    orgId: string;
  }): Promise<string> {
    // 1. Query data (uses same BaseRepository.findMany)
    // 2. Generate file
    // 3. Upload to S3
    // 4. Return presigned download URL (expires in 1 hour)
  }
}

POST   /api/v1/import/preview     — upload file, get preview + validation
POST   /api/v1/import/confirm     — start import job
GET    /api/v1/import/jobs/:id    — check import status
POST   /api/v1/export             — start export job, returns download URL

14. Scheduled Jobs / Cron

Centralized cron registry with execution tracking and failure alerting.

CREATE TABLE scheduled_jobs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    name VARCHAR(100) NOT NULL UNIQUE,
    cron_expression VARCHAR(50) NOT NULL,  -- '0 8 * * *' (daily at 8am)
    handler VARCHAR(200) NOT NULL,          -- 'notifications:sendDailyDigest'
    enabled BOOLEAN DEFAULT true,
    last_run_at TIMESTAMPTZ,
    last_status VARCHAR(20),
    last_duration_ms INT,
    next_run_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE job_executions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    job_id UUID REFERENCES scheduled_jobs(id),
    started_at TIMESTAMPTZ NOT NULL,
    finished_at TIMESTAMPTZ,
    status VARCHAR(20),                    -- 'running', 'success', 'failed'
    result JSONB,
    error TEXT,
    duration_ms INT
);

// Modules register cron jobs during setup:
onReady() {
  this.scheduler.register({
    name: 'notifications:sendDailyDigest',
    cron: '0 8 * * *',
    handler: () => this.notificationService.sendDailyDigests(),
  });

  this.scheduler.register({
    name: 'trash:autoPurge',
    cron: '0 3 * * *',
    handler: () => this.trashService.purgeExpired(30), // 30-day retention
  });
}

15. Dashboard & Analytics Base

Standardized aggregation endpoints every dashboard will call. Not a full BI tool — just the API patterns.

class AnalyticsService {
  async count(params: {
    entityType: string;
    orgId: string;
    filter?: Record<string, unknown>;
  }): Promise<number>;

  async groupBy(params: {
    entityType: string;
    groupField: string;        // 'status', 'assignee_id', 'type'
    orgId: string;
    filter?: Record<string, unknown>;
  }): Promise<{ group: string; count: number }[]>;

  async timeSeries(params: {
    entityType: string;
    dateField: string;         // 'created_at', 'closed_at'
    interval: 'day' | 'week' | 'month';
    orgId: string;
    filter?: Record<string, unknown>;
  }): Promise<{ date: string; count: number }[]>;
}

GET /api/v1/analytics/count?entity=tasks&filter[status]=completed
GET /api/v1/analytics/group?entity=contacts&group=type
GET /api/v1/analytics/timeseries?entity=tasks&date=created_at&interval=week

16. API Keys & External Access

Service accounts and scoped API keys for integrations.

CREATE TABLE api_keys (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id UUID NOT NULL,
    name VARCHAR(200) NOT NULL,          -- 'Zapier Integration', 'Mobile App'
    key_hash VARCHAR(200) NOT NULL,      -- bcrypt hash (plain key shown once at creation)
    key_prefix VARCHAR(10) NOT NULL,     -- 'pk_live_' — for identification
    scopes VARCHAR(100)[] NOT NULL,      -- {'tasks:read', 'contacts:write'}
    expires_at TIMESTAMPTZ,
    last_used_at TIMESTAMPTZ,
    created_by UUID NOT NULL,
    revoked_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

POST   /api/v1/api-keys              — create key (returns plain key ONCE)
GET    /api/v1/api-keys              — list keys (prefix, scopes, last_used — never full key)
DELETE /api/v1/api-keys/:id          — revoke key

Auth middleware detects Authorization: Bearer pk_live_... and resolves to the API key's scopes instead of a user JWT.
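Detection can be a simple prefix match before falling back to JWT verification (sketch; the pk_test_ prefix for non-production keys is an assumption, only pk_live_ appears above):

```typescript
// Hypothetical helper: pull an API key out of the Authorization header,
// or return null so the middleware falls through to JWT handling.
function extractApiKey(authHeader: string | undefined): string | null {
  const match = authHeader?.match(/^Bearer (pk_(?:live|test)_[A-Za-z0-9]+)$/);
  return match ? match[1] : null;
}

// Scope check against the key's scopes column, e.g. 'tasks:read'.
function hasScope(keyScopes: string[], required: string): boolean {
  return keyScopes.includes(required);
}
```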

17. i18n / Localization Framework

Middleware + string extraction pattern — costs almost nothing to add now, painful to retrofit later.

@Injectable()
export class I18nMiddleware implements NestMiddleware {
  use(req: Request, res: Response, next: NextFunction) {
    req.locale = resolveLocale(req);
    // Accept-Language header → user setting → org default → 'en'
    next();
  }
}

class I18nService {
  t(key: string, locale: string, variables?: Record<string, unknown>): string;
  // 'task.assigned' → 'You were assigned to "{taskTitle}"'
  // 'task.assigned' → 'Te asignaron a "{taskTitle}"' (es)
}

class FormatService {
  date(value: Date, locale: string, style?: 'short' | 'long'): string;
  number(value: number, locale: string): string;
  currency(value: number, locale: string, currency: string): string;
}

String tables stored as JSON files in the repo (locales/en.json, locales/es.json). Modules register their own keys.
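Variable substitution for the t() examples above can be sketched as follows (single-brace placeholders as shown; unknown variables are left intact rather than dropped):

```typescript
// Minimal interpolation sketch for the t() signature above.
function interpolate(template: string, variables: Record<string, unknown> = {}): string {
  return template.replace(/\{(\w+)\}/g, (_match: string, name: string) =>
    name in variables ? String(variables[name]) : `{${name}}`);
}
```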

Updated Module Dependency Map

Core (always loaded):
  ├── Auth
  ├── Users
  ├── Organizations
  ├── Teams
  ├── Permissions (Zanzibar)
  ├── Entity Links
  ├── Notifications (event-driven)
  ├── Files
  ├── Audit Log
  ├── Search
  ├── Settings & Config        ← NEW
  ├── Comments & Activity Feed ← NEW
  ├── Tags & Labels            ← NEW
  ├── Soft Deletes / Trash     ← NEW
  ├── Pagination / Filtering   ← NEW (middleware)
  ├── Error Handling           ← NEW (global filter)
  ├── Email / Messaging        ← NEW
  ├── Invitations / Onboarding ← NEW
  ├── Feature Flags            ← NEW
  ├── Health Checks            ← NEW
  └── Request Context          ← NEW (middleware)

Tier 2 (most projects):
  ├── Webhooks (outbound)      ← NEW
  ├── Import / Export          ← NEW
  ├── Scheduled Jobs / Cron    ← NEW
  ├── Dashboard Analytics Base ← NEW
  ├── API Keys / External Auth ← NEW
  └── i18n / Localization      ← NEW

Optional (plug in per product):
  ├── Tasks        (depends on: Core)
  ├── Contacts     (depends on: Core)
  ├── Documents    (depends on: Core, Files)
  ├── Notes        (depends on: Core)
  ├── Finance      (depends on: Core, Contacts)
  └── [Custom]     (depends on: Core)

XV. Core System User Flows

Step-by-step walkthroughs for every core system. Each step maps to the exact API call, database write, and event emitted.

Auth

Flow 1: Sign Up

Step 1: User submits registration form
  API:    POST /api/v1/auth/register
  Body:   { email, password, first_name, last_name }
  System: Validate email + password strength (min 12 chars, 1 upper, 1 number, 1 special)
  DB:     INSERT INTO users (password_hash = bcrypt 12 rounds)
          INSERT INTO organizations (auto-create personal org)
          INSERT INTO org_members (role = 'owner')
  Event:  'user.registered' → welcome email
  Returns: { user, tokens: { accessToken (15min JWT), refreshToken (30d httpOnly cookie) } }

Flow 2: Login

Step 1: User submits credentials
  API:    POST /api/v1/auth/login { email, password }
  System: bcrypt.compare → if MFA enabled → { requiresMfa: true, mfaToken }
  DB:     INSERT refresh_token. UPDATE users SET last_login_at, login_count++
  Event:  'user.logged_in' → audit log (IP, UA, geo)
  Notes:  5 failed attempts → 15min lockout. Rate: 10/email/15min.

Flow 3: Token Refresh

API:    POST /api/v1/auth/refresh { refreshToken }
  System: Validate → issue new access token → rotate refresh token (old invalidated)
  Notes:  If revoked token reused → revoke ALL tokens (breach detected)
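The rotation and reuse-detection logic can be sketched as follows. An in-memory Map stands in for the refresh_tokens table, and the names are illustrative, not the real schema:

```typescript
import { randomUUID } from "node:crypto";

// Rotate-on-refresh with reuse detection (sketch).
type TokenRecord = { userId: string; revoked: boolean };
const store = new Map<string, TokenRecord>();

export function issueRefreshToken(userId: string): string {
  const token = randomUUID();
  store.set(token, { userId, revoked: false });
  return token;
}

export function refresh(token: string): string | null {
  const rec = store.get(token);
  if (!rec) return null;
  if (rec.revoked) {
    // Reuse of an already-rotated token = likely theft → revoke everything
    for (const r of store.values()) if (r.userId === rec.userId) r.revoked = true;
    return null;
  }
  rec.revoked = true; // rotate: each refresh token is single-use
  return issueRefreshToken(rec.userId);
}
```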

Flow 4: Logout

API:    POST /api/v1/auth/logout
  System: Revoke refresh token. Blacklist access token in Redis (TTL = remaining life).
  Notes:  /api/v1/auth/logout-all → revoke ALL sessions

Flow 5: Password Reset

Step 1: POST /api/v1/auth/forgot-password { email }
  System: Generate reset token (1hr TTL). Always return 200 (prevent enumeration).
  Event:  'user.password_reset_requested' → email with link

Step 2: POST /api/v1/auth/reset-password { token, newPassword }
  DB:     UPDATE password_hash. DELETE all refresh_tokens (revoke sessions).
  Event:  'user.password_changed' → confirmation email

Flow 6: MFA Enrollment (TOTP)

Step 1: POST /api/v1/auth/mfa/enroll → { qrCodeUrl, secret }
  DB:     Store encrypted TOTP secret (KMS). mfa_enabled = false until verified.

Step 2: POST /api/v1/auth/mfa/verify { code }
  DB:     mfa_enabled = true. Generate 10 hashed backup codes.
  Returns: { backupCodes } (shown once)

Step 3: POST /api/v1/auth/mfa/challenge { mfaToken, code }
  Returns: { accessToken, refreshToken }
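The backup-code handling in Step 2 can be sketched as below. sha256 stands in for bcrypt here to keep the example dependency-free; production code should use a slow hash:

```typescript
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

// Generate 10 codes; only the hashes are stored, the plaintext is shown once.
const hash = (code: string) => createHash("sha256").update(code).digest();

export function generateBackupCodes(count = 10): { plain: string[]; hashed: Buffer[] } {
  const plain = Array.from({ length: count }, () =>
    randomBytes(5).toString("hex"), // 10 hex chars per code
  );
  return { plain, hashed: plain.map(hash) };
}

export function consumeBackupCode(code: string, hashed: Buffer[]): boolean {
  const candidate = hash(code);
  const idx = hashed.findIndex((h) => timingSafeEqual(h, candidate));
  if (idx === -1) return false;
  hashed.splice(idx, 1); // each backup code is single-use
  return true;
}
```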

Flow 7: OAuth / SSO

Step 1: GET /api/v1/auth/oauth/google → 302 to Cognito/Keycloak (with PKCE)
Step 2: GET /api/v1/auth/oauth/callback?code=X&state=Y
  System: Verify state → exchange code → extract user info → match or create user
  DB:     UPSERT users (link oauth_provider). INSERT refresh_token.
  Returns: Redirect with tokens

Users

Profile & Account Management

View profile:   GET /api/v1/users/me → { id, email, name, avatar, timezone, locale }
Update:         PATCH /api/v1/users/me { first_name, timezone, locale }
                Event: 'user.updated' → audit log

Upload avatar:  POST /api/v1/users/me/avatar → { uploadUrl, key }
                Client PUTs to S3 directly → PATCH /api/v1/users/me { avatar_url }

Change password: POST /api/v1/users/me/change-password { currentPassword, newPassword }
                 Revokes all other sessions. Event: 'user.password_changed'

Deactivate:     POST /api/v1/users/me/deactivate { confirmation: 'DEACTIVATE', password }
                Soft-delete. Revoke tokens. GDPR anonymize after 30 days (cron).

Organizations

Org Lifecycle

Create:    POST /api/v1/organizations { name, slug, plan }
           DB: INSERT org + org_members (creator = owner)

Settings:  PATCH /api/v1/organizations/:orgId { name, logo_url, default_timezone }
           PUT /api/v1/settings { scope:'org', namespace:'branding', key, value }

Members:   GET /api/v1/organizations/:orgId/members (paginated)

Remove:    DELETE /api/v1/organizations/:orgId/members/:userId
           Reassign owned tasks/docs. Remove permission tuples.
           Event: 'organization.member_removed' → notify user

Transfer:  POST /api/v1/organizations/:orgId/transfer { newOwnerId, password }
           Old owner → admin. New owner → owner. Event: 'ownership_transferred'

Teams

Team Management

Create:    POST /api/v1/teams { name, description, visibility }
Add:       POST /api/v1/teams/:id/members { user_ids, role }
           DB: INSERT team_members + permission_tuples
           Event: 'team.member_added' → notify each user

Remove:    DELETE /api/v1/teams/:id/members/:userId
           Event: 'team.member_removed'

Assign to entity: POST /api/v1/links
  { source: { type:'team', id }, target: { type:'document', id }, relationship:'assigned' }
  DB: INSERT entity_link + permission_tuples (all members get view)
  Event: 'entity.linked' → notify team

List work: GET /api/v1/teams/:id/linked/tasks
           GET /api/v1/teams/:id/linked/documents

Permissions (Zanzibar)

Permission Flows

Grant:     POST /api/v1/permissions/grant
           { namespace:'document', object_id:'doc_123', relation:'editor', subject:{ type:'user', id:'user_456' } }
           DB: INSERT INTO permission_tuples

Check (runtime — every request):
  Middleware: PermissionEngine.check(userId, 'view', { type:'document', id:'doc_123' })
  Graph walk: direct → via team → via org role → cached in Redis (30s TTL)
  Denied → 403
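The cached check can be sketched as a wrapper around the graph walk. A Map with expiry timestamps stands in for the Redis cache, and `walkGraph` is a placeholder for the direct → team → org resolution:

```typescript
// 30s read-through cache in front of the (expensive) permission graph walk.
type Check = (userId: string, action: string, obj: { type: string; id: string }) => boolean;

export function cachedCheck(walkGraph: Check, ttlMs = 30_000): Check {
  const cache = new Map<string, { value: boolean; expires: number }>();
  return (userId, action, obj) => {
    const key = `${userId}:${action}:${obj.type}:${obj.id}`;
    const hit = cache.get(key);
    if (hit && hit.expires > Date.now()) return hit.value;
    const value = walkGraph(userId, action, obj); // graph walk only on miss
    cache.set(key, { value, expires: Date.now() + ttlMs });
    return value;
  };
}
```

The 30s TTL trades a short window of staleness after a revoke for not walking the graph on every request.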

My access:   GET /api/v1/permissions/my-access?namespace=document
             Returns: [{ object_id, relations: ['viewer','editor'] }]

Custom role: POST /api/v1/roles { name:'project_manager', permissions:['task:*','document:view'] }

Audit:       GET /api/v1/permissions/audit?namespace=document&object_id=doc_123
             Shows: direct grants + inherited (via team/org)

Universal Linking

Link:      POST /api/v1/links
           { source: { type:'contact', id }, target: { type:'document', id }, relationship:'associated' }
           System: verify both exist → permission check → create
           Event: 'entity.linked' → activity on both entities

View:      GET /api/v1/contacts/:id/linked/documents (bidirectional lookup)
           GET /api/v1/contacts/:id/linked/tasks
           Resolves via ModuleLoader → EntitySummary[]

Remove:    DELETE /api/v1/links/:linkId
           Event: 'entity.unlinked' → audit log

Search:    GET /api/v1/links?source_type=deal&source_id=deal_1
           Powers "Related Items" panel on every detail page

Files

Upload → Download → Organize

Upload (presigned):
  Step 1: POST /api/v1/files/upload-url { filename, contentType, size, entity_type?, entity_id? }
          Validate size < 100MB. Generate S3 presigned PUT (15min). Insert file (status='pending').
  Step 2: Client PUTs binary to S3 directly (no server memory pressure)
  Step 3: POST /api/v1/files/:id/confirm → verify S3 HEAD → status='active' → link to entity
          Event: 'file.uploaded' → activity feed

Download:  GET /api/v1/files/:id/download → presigned GET URL (1hr expiry)
List:      GET /api/v1/documents/:id/files
Delete:    DELETE /api/v1/files/:id → soft delete. S3 object retained 30 days.
Folders:   POST /api/v1/folders { name, parent_id? }
           PATCH /api/v1/files/:id { folder_id }
           GET /api/v1/folders/:id/contents → { folders[], files[] }

Audit Log

Automatic Logging & Compliance

Every mutating API call auto-logged (no module code needed):
  DB: INSERT INTO audit_log (actor_id, action, entity_type, entity_id, changes, ip, correlation_id)
  Append-only: no UPDATE or DELETE allowed on audit_log table.

View:    GET /api/v1/audit?entity_type=document&entity_id=doc_123 (entity trail)
         GET /api/v1/audit?actor_id=user_456&from=2025-01-01 (user trail)
         GET /api/v1/audit?page=1&sort=timestamp:desc (global, admin)

Export:  POST /api/v1/audit/export { from, to, format:'csv' } → queued → S3 download
         GET /api/v1/audit/export/:jobId → { status, downloadUrl }

Retention: Configurable per org. Default 2 years → archive to S3 Glacier.
           Monthly cron: 'audit:archiveOld'

Global & Contextual Search

Global (typeahead):
  GET /api/v1/search?q=acme&limit=10
  PostgreSQL full-text: ts_rank + permission filter (only viewable entities)
  Returns: EntitySummary[] grouped by type. Debounced 300ms. Cached 60s.
  Later: migrate to OpenSearch for fuzzy + better ranking.

Filtered: GET /api/v1/search?q=acme&type=contact&filter[status]=active
In-entity: GET /api/v1/tasks/:id/comments?search=budget

Recent:  Tracked automatically. GET /api/v1/search/recent → last 10
Saved:   POST /api/v1/search/saved { name, query, type, filters }
         GET /api/v1/search/saved/:id/run → re-execute with current data

XVI. Baseline Infrastructure Flows

User walkthroughs for key baseline infrastructure systems that ship with every product.

Comments & Activity Feed

Add comment:   POST /api/v1/{entityType}/{entityId}/comments { body }
               Parse @mentions → notify mentioned users
               Event: 'comment.created' → activity feed + watchers notified

Reply:         POST /api/v1/{entityType}/{entityId}/comments { body, parent_id }
               Event: 'comment.replied' → notify thread participants

Edit:          PATCH /api/v1/comments/:id { body } → sets edited_at
Delete:        DELETE /api/v1/comments/:id → soft delete, shows "[deleted]"

Activity feed: GET /api/v1/{entityType}/{entityId}/activity
               Merged: system events (status changes, assignments) + comments
               Sorted chronologically. Powers the "Activity" tab on every entity.

My activity:   GET /api/v1/activity?actor=me → everything you did, cross-entity

@Mentions:     User types @ → GET /api/v1/search?type=user&q=ali (typeahead)
               Stored as: "@[Alice](user:user_789)"
               On save: emit 'user.mentioned' → in-app + email notification
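Extracting mentions from the stored "@[Alice](user:user_789)" format is a small regex pass, sketched here:

```typescript
// Parse stored mentions of the form "@[Name](user:id)".
export interface Mention { name: string; userId: string }

const MENTION_RE = /@\[([^\]]+)\]\(user:([^)]+)\)/g;

export function extractMentions(body: string): Mention[] {
  return [...body.matchAll(MENTION_RE)].map((m) => ({ name: m[1], userId: m[2] }));
}

// Render the storage format back to display text ("@Alice")
export function toPlainText(body: string): string {
  return body.replace(MENTION_RE, "@$1");
}
```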

Tags & Labels

Create tag:   POST /api/v1/tags { name:"urgent", color:"#EF4444", category:"priority" }
              Unique per org.

Apply:        POST /api/v1/tasks/:id/tags { tag_id }
              Event: 'entity.tagged' → activity feed

Bulk apply:   POST /api/v1/tags/:tagId/apply { entities: [{ type, id }, ...] }

Filter:       GET /api/v1/tasks?tags=urgent,frontend
              Works on every entity list (standard ListQuery).

Remove:       DELETE /api/v1/tasks/:id/tags/:tagId
Admin:        GET /api/v1/tags (list all)
              PATCH /api/v1/tags/:id { color, name } (update)
              DELETE /api/v1/tags/:id (cascade removes from entities)

Invitation & Onboarding

Invite:       POST /api/v1/invitations { email, role, team_ids }
              Generate token (7 day expiry). Send email via EmailService.
              Event: 'invitation.sent' → audit log

Accept:       GET /api/v1/invitations/accept?token=X
              If no account → redirect to register (pre-filled email)
              POST /api/v1/auth/register { ..., invitation_token }
              DB: Create user + join org + join teams + permission tuples
              If has account → just join org
              Event: 'invitation.accepted' → notify inviter

Resend:       POST /api/v1/invitations/:id/resend → new token, reset expiry
Revoke:       DELETE /api/v1/invitations/:id → status = 'revoked'

Onboarding:   GET /api/v1/onboarding/status
              Returns: { steps: [{ id, module, required, completed }] }
              Modules register steps. Frontend renders wizard.
              POST /api/v1/onboarding/complete { step:'create_first_team' }
              Wizard dismissed when all required steps done.

Import / Export

Import CSV:
  Step 1: POST /api/v1/import/preview (FormData: file + entityType)
          Parse headers → auto-suggest column mapping → validate 100 rows
          Returns: { headers, suggestedMapping, preview (10 rows), validationErrors, totalRows }

  Step 2: POST /api/v1/import/confirm { previewId, mapping, skipErrors }
          Queue BullMQ job. Returns: { jobId }
          Event: 'import.started'

  Step 3: Worker batch-inserts (100 rows/batch). Tracks progress.
          Event: 'import.completed' → notify: "2,430 of 2,450 imported (20 skipped)"
          Failed rows → downloadable error CSV

  Step 4: GET /api/v1/import/jobs/:id → { status, successCount, errorCount, errorFile }
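The worker's batching in Step 3 can be sketched as below. `insertBatch` is a placeholder for the real DB write (and is synchronous here for brevity; the real worker awaits each batch):

```typescript
// Split parsed rows into 100-row batches and tally results for the
// "2,430 of 2,450 imported" completion notification.
export function chunk<T>(rows: T[], size = 100): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < rows.length; i += size) out.push(rows.slice(i, i + size));
  return out;
}

export function runImport<T>(
  rows: T[],
  insertBatch: (batch: T[]) => number, // returns rows actually inserted
  onProgress: (done: number, total: number) => void,
): { successCount: number; errorCount: number } {
  let successCount = 0;
  let done = 0;
  for (const batch of chunk(rows)) {
    successCount += insertBatch(batch);
    done += batch.length;
    onProgress(done, rows.length); // drives the progress tracking
  }
  return { successCount, errorCount: rows.length - successCount };
}
```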

Export:
  POST /api/v1/export { entityType, format:'csv', query }
  Worker: query → generate file → S3 → presigned URL (1hr)
  GET /api/v1/export/jobs/:id → { status, downloadUrl }

Webhooks (Outbound)

Register:     POST /api/v1/webhooks { url, events:['task.created','contact.updated'] }
              Generate HMAC secret. Returns secret ONCE (admin must save).

Test:         POST /api/v1/webhooks/:id/test
              Send test payload with HMAC signature. Log response.

Delivery:     Event fires → match subscriptions → BullMQ job → POST to URL
              Headers: X-Webhook-Signature (HMAC-SHA256), X-Webhook-Event, X-Webhook-Delivery
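Signing a delivery (and verifying it on the receiving end) can be sketched with Node's crypto module. The header names follow the list above; the payload shape is illustrative:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// HMAC-SHA256 over the raw request body, hex-encoded.
export function sign(secret: string, body: string): string {
  return createHmac("sha256", secret).update(body).digest("hex");
}

// Receiver side: constant-time comparison against the signature header.
export function verify(secret: string, body: string, signature: string): boolean {
  const expected = Buffer.from(sign(secret, body), "hex");
  const given = Buffer.from(signature, "hex");
  return expected.length === given.length && timingSafeEqual(expected, given);
}

export function deliveryHeaders(secret: string, body: string, event: string, deliveryId: string) {
  return {
    "X-Webhook-Signature": sign(secret, body),
    "X-Webhook-Event": event,
    "X-Webhook-Delivery": deliveryId,
  };
}
```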

Retry:        Exponential backoff: 1min → 5min → 30min → 2hr
              After 10 consecutive failures → auto-disable subscription
              Event: 'webhook.disabled' → notify admin

Log:          GET /api/v1/webhooks/:id/deliveries (status, attempts, response codes)
Manual retry: POST /api/v1/webhooks/deliveries/:id/retry

API Keys & External Access

Generate:    POST /api/v1/api-keys { name:'Zapier', scopes:['tasks:read','contacts:write'], expires_at? }
             Key format: pk_live_ + 32 random bytes. Stored as bcrypt hash.
             Returns: { id, key:'pk_live_a1b2...' } ← shown ONCE, never stored plain

List:        GET /api/v1/api-keys → [{ name, prefix:'pk_live_a1b2', scopes, last_used_at }]
             Full key NEVER returned.

Revoke:      DELETE /api/v1/api-keys/:id → revoked_at = NOW(). Immediate effect.

Auth flow:   Authorization: Bearer pk_live_a1b2...
             1. Detect prefix → API key auth (not JWT)
             2. Lookup by prefix → bcrypt.compare
             3. Check: not revoked, not expired
             4. Validate scope: GET /tasks needs 'tasks:read'
             5. Set context: orgId from key, actorId = 'api_key:{id}'
             6. Apply RLS
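Steps 1–5 above can be sketched as one pure function. An in-memory record stands in for the DB row, and sha256 for bcrypt; the names are illustrative:

```typescript
import { createHash } from "node:crypto";

export interface ApiKeyRow {
  id: string; orgId: string; hash: string;
  scopes: string[]; revokedAt?: Date; expiresAt?: Date;
}

export const sha = (s: string) => createHash("sha256").update(s).digest("hex");

export function authenticate(
  header: string,
  requiredScope: string,
  lookupByPrefix: (prefix: string) => ApiKeyRow | undefined,
): { orgId: string; actorId: string } | null {
  const token = header.replace(/^Bearer /, "");
  if (!token.startsWith("pk_live_")) return null;       // 1. prefix → API key auth
  const row = lookupByPrefix(token.slice(0, 12));        // 2. lookup by prefix,
  if (!row || sha(token) !== row.hash) return null;      //    then compare hashes
  if (row.revokedAt || (row.expiresAt && row.expiresAt < new Date())) return null; // 3.
  if (!row.scopes.includes(requiredScope)) return null;  // 4. scope check
  return { orgId: row.orgId, actorId: `api_key:${row.id}` }; // 5. request context
}
```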

Rate limits: Redis counter per key per minute.
             Free: 100/min. Pro: 1000/min. Enterprise: 10000/min.
             Headers: X-RateLimit-Limit, X-RateLimit-Remaining, Retry-After
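A fixed-window sketch of that counter, with a Map standing in for Redis INCR + EXPIRE. The limits mirror the plan figures above:

```typescript
const LIMITS: Record<string, number> = { free: 100, pro: 1000, enterprise: 10000 };
const counters = new Map<string, { count: number; windowStart: number }>();

export function checkRateLimit(keyId: string, plan: string, now = Date.now()) {
  const limit = LIMITS[plan] ?? LIMITS.free;
  const windowStart = Math.floor(now / 60_000) * 60_000; // current minute
  const entry = counters.get(keyId);
  const count = entry && entry.windowStart === windowStart ? entry.count + 1 : 1;
  counters.set(keyId, { count, windowStart });
  const allowed = count <= limit;
  return {
    allowed,
    headers: {
      "X-RateLimit-Limit": String(limit),
      "X-RateLimit-Remaining": String(Math.max(0, limit - count)),
      // Only set when blocked: seconds until the window resets
      ...(allowed ? {} : { "Retry-After": String(Math.ceil((windowStart + 60_000 - now) / 1000)) }),
    },
  };
}
```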

XVII. Frontend Architecture

The frontend mirrors the backend's modular philosophy. A product picks which modules to load — the frontend only renders what the backend serves.

Shared UI Kit / Design System

All UI components live in @platform/ui. Every product uses the same components.

| Component      | Purpose                                            | Used By                      |
|----------------|----------------------------------------------------|------------------------------|
| DataTable      | Sortable, filterable, paginated (wraps ListQuery)  | Every list view              |
| KanbanBoard    | Drag-and-drop column layout                        | Tasks, Deals/Pipeline        |
| Timeline       | Chronological activity feed                        | Task detail, Contact profile |
| RichTextEditor | TipTap with track changes                          | Documents, Notes, Comments   |
| EntityCard     | Renders any EntitySummary as linkable card         | Linked items, search results |
| FileUpload     | Drag-and-drop with presigned URL flow              | Any entity with files        |
| CommandPalette | Cmd+K global search + quick actions                | Global (always available)    |
| SlideOver      | Side panel for entity detail                       | Quick view on any entity     |
| TagInput       | Multi-select tag picker with color pills           | All entity tag fields        |

Module Registration (Frontend)

Each module declares routes, nav items, widgets, and settings pages. The app loads only what's enabled.

// packages/modules/tasks/module.ts
export const tasksModule: FrontendModule = {
  id: 'tasks',
  name: 'Tasks',
  routes: [
    { path: '/tasks', component: () => import('./pages/TasksPage') },
    { path: '/tasks/:id', component: () => import('./pages/TaskDetailPage') },
  ],
  navigation: { label: 'Tasks', icon: 'CheckSquare', path: '/tasks', order: 20 },
  widgets: {
    'task-list': {
      component: () => import('./widgets/TaskListWidget'),
      title: 'Tasks',
      acceptsContext: ['contact', 'document', 'deal', 'team'],
    },
  },
  searchProvider: { type: 'task', icon: 'CheckSquare' },
};

// Module loading — only active modules get routes/nav/widgets:
const enabledModules = await fetch('/api/v1/features').then(r => r.json());
const activeModules = allModules.filter(m => enabledModules[m.id]);

Cross-Module Widgets

Any module can embed UI into any other module's pages without direct imports.

// How it works:
// 1. Contact profile renders <WidgetSlot entityType="contact" entityId="contact_123" />
// 2. WidgetSlot queries registry: "Which widgets accept context='contact'?"
// 3. Finds: TaskListWidget, DocumentListWidget, DealPipelineWidget
// 4. Renders each as a collapsible section (lazy loaded via React.Suspense)
// 5. Each widget fetches its own data via the SDK independently

// Widget examples:
// Contact Profile  →  TaskListWidget, DocumentListWidget, DealPipelineWidget
// Deal Detail      →  TaskListWidget, DocumentListWidget
// Document Detail  →  ContactCard
// Dashboard        →  TaskCountWidget, DealForecastWidget, RevenueWidget
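The registry query in step 2 can be sketched as below. The shapes mirror the `widgets` declaration in the tasks module above:

```typescript
// "Which widgets accept this entity type as context?"
export interface WidgetDef { title: string; acceptsContext: string[] }
export interface FrontendModuleLike { id: string; widgets?: Record<string, WidgetDef> }

export function widgetsFor(modules: FrontendModuleLike[], entityType: string): WidgetDef[] {
  return modules.flatMap((m) =>
    Object.values(m.widgets ?? {}).filter((w) => w.acceptsContext.includes(entityType)),
  );
}
```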

Routing

Next.js App Router with dynamic module route groups. Route protection via edge middleware.

// Route structure:
// app/(auth)/login, register       ← minimal layout, no sidebar
// app/(dashboard)/tasks/*          ← loaded if tasks module enabled
// app/(dashboard)/contacts/*       ← loaded if contacts module enabled
// app/(dashboard)/settings/*       ← always available

// Edge middleware checks:
// 1. Has access token? No → redirect /login
// 2. Module enabled in JWT claims? No → redirect /dashboard
// 3. Navigation auto-generated from activeModules.map(m => m.navigation)
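The checks above can be sketched as a pure decision function, separated from the Next.js middleware plumbing so the logic is testable in isolation. The session shape and always-on route list are assumptions:

```typescript
export interface Session { token?: string; enabledModules: string[] }

export function routeDecision(path: string, session: Session): { redirect?: string } {
  if (!session.token) return { redirect: "/login" };   // 1. no access token
  const moduleId = path.split("/")[1];                  // '/tasks/123' → 'tasks'
  const alwaysOn = ["dashboard", "settings"];
  if (!alwaysOn.includes(moduleId) && !session.enabledModules.includes(moduleId)) {
    return { redirect: "/dashboard" };                  // 2. module not enabled
  }
  return {};                                            // allowed through
}
```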

State Management

Server state: TanStack Query (cache, optimistic updates, invalidation). Client state: Zustand (sidebar, modals, command palette). No Redux.

// Server state — TanStack Query
const { data } = useQuery({ queryKey: ['tasks', query], queryFn: () => api.tasks.list(query) });

// Mutations with optimistic updates:
const createTask = useMutation({
  mutationFn: (data) => api.tasks.create(data),
  onMutate: async (newTask) => {
    await queryClient.cancelQueries({ queryKey: ['tasks'] });
    queryClient.setQueryData(['tasks'], old => ({ ...old, data: [newTask, ...old.data] }));
  },
  onSettled: () => queryClient.invalidateQueries({ queryKey: ['tasks'] }),
});

// Client state — Zustand (sidebar, command palette, slide-over)
const { sidebarCollapsed, toggleSidebar, openSlideOver } = useUIStore();

| Concern              | TanStack Query + Zustand           | Redux Toolkit            |
|----------------------|------------------------------------|--------------------------|
| Server state caching | Built-in (stale-while-revalidate)  | Manual (RTK Query helps) |
| Optimistic updates   | First-class support                | Manual                   |
| Boilerplate          | Near zero                          | Significant              |
| Bundle size          | ~13KB combined                     | ~40KB+                   |
| Module isolation     | Each module has its own hooks      | Global store = coupling  |

Real-Time (WebSocket)

// WebSocket connects on auth → joins user room
// Server pushes events → client invalidates relevant TanStack Query caches

// What updates in real-time:
// notification         → toast + badge count
// task.status_changed  → kanban card moves column
// deal.stage_changed   → pipeline board updates
// document.revised     → revision indicator
// comment.created      → new comment in timeline
// entity.updated       → detail page refreshes

Layout System

┌──────────────────────────────────────────────────────────┐
│ TopBar                                                   │
│  [≡] [Search... Cmd+K]                  [🔔 3] [Avatar] │
├──────┬───────────────────────────────────────────────────┤
│      │                                                   │
│ Side │  Main Content Area                                │
│ bar  │  ┌──────────────────────────────────────────────┐ │
│      │  │  Page Header (title + actions)               │ │
│ Home │  ├──────────────────────────────────────────────┤ │
│ Tasks│  │                                              │ │
│ CRM  │  │  Page Content                                │ │
│ Docs │  │  (DataTable / KanbanBoard / Form / Detail)   │ │
│ $$$  │  │                                              │ │
│      │  └──────────────────────────────────────────────┘ │
│ ---- │                                                   │
│ Sett │  SlideOver ─── Quick view panel (no page nav) ──┐│
│      │                                                  ││
└──────┴──────────────────────────────────────────────────┘│
                                                           │
  Entity detail: 2/3 main content + 1/3 WidgetSlot sidebar │

Every module page follows the same PageLayout pattern (header + actions + content). Entity detail pages use EntityDetailLayout (main tabs + widget sidebar).

XVIII. Standard vs. This Architecture

| Concern           | Standard Approach             | This Architecture                                          |
|-------------------|-------------------------------|------------------------------------------------------------|
| Multi-tenancy     | Shared DB, app-level filtering| Row-Level Security at DB + Zanzibar permission model       |
| Account isolation | One AWS account, IAM policies | Multi-account with SCPs, blast radius isolation            |
| Security          | JWT + middleware              | Zero Trust, 5 layers, Zanzibar RBAC/ABAC                   |
| Database          | Standard PostgreSQL           | Aurora Serverless v2 (3-5x faster, auto-scaling, multi-AZ) |
| Migration         | Manual scripts                | DMS with CDC (zero-downtime from any legacy DB)            |
| Modularity        | Feature flags                 | True module system with deps, migrations, permissions, events |
| Cross-module      | Hardcoded foreign keys        | Universal entity linking (any-to-any)                      |
| Deployment        | Manual setup per project      | One CDK command provisions a full product cell             |
| Reliability       | Alerts on errors              | SLO-driven error budgets (Google SRE model)                |
| Observability     | Logs only                     | Correlated logs + metrics + traces                         |

Key Architecture Decisions & Rationale

| Decision | Why | Alternatives Considered |
|----------|-----|-------------------------|
| Prisma as ORM | Type-safe, auto-migrations, built-in connection pooling. 3M+ weekly npm downloads. Used by Netflix, Vercel, Priceline. | Drizzle (thinner, closer to SQL, less mature ecosystem) |
| NestJS backend | TypeScript full stack, built-in module system matches architecture, strong DI, monorepo-friendly | FastAPI (great perf, but adds Python as second language) |
| ECS Fargate for backend | Persistent connections (Aurora, Redis, WebSockets), no cold starts, connection pooling, predictable cost | Lambda (kills DB with per-invocation connections, 15min timeout, no WS) |
| Field-based change tracking | Generic diff breaks on HTML tags, misidentifies changes. Field-level tracking is reliable and maps to document structure. | diff npm package (explicitly abandoned for contract revisions) |
| TipTap for rich text | ProseMirror-based, extensible via custom marks (track changes), HTML content, React integration | Slate (less mature), Quill (less extensible), CKEditor (heavy) |
| Cheerio for HTML parsing | Lightweight, jQuery-like API for finding data-field elements and injecting highlights. No browser needed. | jsdom (heavier), node-html-parser (smaller API) |
| CDK for IaC | Same TypeScript as app code, first-class AWS support, composable constructs, type-safe | Terraform (better for multi-cloud, but we are AWS-only) |
| Aurora Serverless v2 | 3-5x faster, auto-scales 0.5-128 ACUs, 6-way replicated, read replicas with under 20ms lag, 100% PG compatible | Standard RDS PostgreSQL (cheaper at small scale, no auto-scaling) |
| Vercel for frontend | Best Next.js DX, instant preview deploys per PR, edge SSR, fast builds | Amplify (slower builds), CloudFront+S3 (no SSR) |
| Media Processing excluded | Too niche. Only needed for products with image/video requirements. Not universal. | Was considered for Tier 2 baseline but cut |
