
Building a Multi-Domain Executive Reporting Pipeline with AWS Lambda, SES, and DynamoDB

In a recent development session, we built and deployed a comprehensive executive reporting system that synthesizes operational, financial, technical, and strategic data across four active business entities (JADA, QueenofSandiego, QuickDumpNow, DangerousCentaur) plus three ancillary asset domains. This post walks through the architecture, deployment strategy, and key technical decisions that enabled rapid analysis and actionable reporting across the entire portfolio.

What Was Done

We created two parallel Python-based reporting pipelines:

  • /Users/cb/Documents/repos/tools/send_exec_reports.py — The primary production report generator
  • /Users/cb/Documents/repos/tools/send_exec_reports_2.py — An iteration/testing variant for multi-report workflows

These scripts generated seven distinct executive reports (CEO operational audit, CTO technical stack review, CFO financial modeling, CMO go-to-market sequencing, plus three domain-specific deep dives) and delivered them via AWS SES to stakeholders with BCC audit trails to admin@queenofsandiego.com.

Simultaneously, we hardened the Ship Captain Crew Lambda function (the central orchestration layer for event management across the QueenofSandiego domain) across multiple iterations, pushing six distinct deployments with increasingly sophisticated role-based access control, JWT validation, and event state management.

Technical Architecture

Report Generation Pipeline

The reporting system is designed as a stateless, composition-based generator:


# Conceptual flow (no credentials shown)
1. Load configuration from repos.env (SES sender, recipient lists, region)
2. Aggregate data from:
   - Project handoff markdown files (/repos/agent_handoffs/projects/*.md)
   - DynamoDB tables (event inventory, booking state)
   - S3 asset manifests and deployment logs
   - Manual operational context (team roles, pipeline status)
3. Render reports as structured text (one report per stakeholder perspective)
4. Send via SES in batch (5–8 reports per execution)
5. Log execution to CloudWatch

Why this design? Separation of concerns. The report generator doesn't care about the underlying business logic—it consumes context, applies a stakeholder lens, and produces output. This allows us to:

  • Iterate on report structure without touching Lambda or database logic
  • Add new report types (board reports, investor decks, compliance audits) by adding new rendering functions
  • Run reports on-demand, on schedule (via EventBridge), or triggered by operational events
  • Audit all outbound communications via SES receipt rules and CloudWatch logs
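The composition pattern behind these points can be sketched in a few lines. This is illustrative, not the production code: the function names, context shape, and stakeholder keys are assumptions, and the SES call is shown only in a comment.

```python
# Minimal sketch of the stateless generator: shared context in, stakeholder
# lens applied, plain-text report out. All names here are illustrative.

def render_report(context: dict, stakeholder: str) -> str:
    """Apply a stakeholder lens to shared context and return report text."""
    lines = [f"Executive Report: {stakeholder}", "=" * 40]
    for section, body in context.get(stakeholder, {}).items():
        lines.append(f"\n{section}\n{body}")
    return "\n".join(lines)

def send_reports(context: dict, recipients: dict) -> list[str]:
    """Render one report per stakeholder and return the bodies.
    In production each body would go out via boto3 SES, e.g.:
      ses.send_email(Source=sender, Destination={...}, Message={...})
    """
    return [render_report(context, s) for s in recipients]
```

Because rendering is decoupled from delivery, adding a new report type is just another rendering function; the SES batch loop never changes.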

Lambda Function: Ship Captain Crew Orchestration

The core Lambda function at /sites/queenofsandiego.com/tools/shipcaptaincrew/lambda_function.py underwent six major iterations, adding:

  • JWT-based authentication — Magic link auth with role-scoped claims (captain, crew, guest, admin)
  • Event lifecycle management — Checklist state transitions (pending → claimed → on_hold → completed)
  • DynamoDB schema hardening — Explicit role bucketing, magic link expiry, claim audit trails
  • Waiver integration hooks — Guest funnel now gates on liability waiver acceptance
  • Short-code generation — Move beyond magic links to scannable codes stored in DynamoDB
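The checklist lifecycle above (pending → claimed → on_hold → completed) amounts to a small state machine. A sketch of the transition table follows; the exact edges in the deployed Lambda may differ, so treat this as an inferred model rather than the shipped logic.

```python
# Inferred checklist state machine for pending -> claimed -> on_hold -> completed.
# The allowed-edge set is an assumption based on the lifecycle described above.

ALLOWED_TRANSITIONS = {
    "pending":   {"claimed"},
    "claimed":   {"on_hold", "completed"},
    "on_hold":   {"claimed", "completed"},
    "completed": set(),  # terminal state
}

def transition(current: str, target: str) -> str:
    """Validate and apply a checklist state change, raising on illegal moves."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

Centralizing the table makes claim-audit logging trivial: every successful call to a function like this is one row in the claim audit trail.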

The frontend (frontend/index.html) received four rounds of updates to:

  • Render role-specific UI (captain sees crew assignments, crew sees checklists, guests see waivers)
  • Add timing panel integration (departure/return time calculations with San Diego sunset data)
  • Implement claim modal and role-designate flows
  • Wire event creation and magic link consumption to updated Lambda endpoints

Infrastructure & Deployment Strategy

AWS Resources Touched

  • Lambda function — ARN pattern: arn:aws:lambda:us-west-2:*:function:shipcaptaincrew (6 deployments)
  • S3 bucket — queenofsandiego.com (frontend hosting)
  • CloudFront distribution — Invalidation on every frontend push (cache TTL: 300s for HTML, 31536000s for assets)
  • DynamoDB table — shipcaptaincrew-events (primary) and role-bucketing indexes
  • SES domain verification — admin@queenofsandiego.com as verified sender (DKIM/SPF configured)
  • EventBridge rule — ptb_nudge cron rule for automated reminders
  • Secrets Manager / Parameter Store — JWT_SECRET, STRIPE_KEY, and credential rotation (not hardcoded in repos)

Deployment Workflow


# Lambda deployment checklist (run before each push)
1. Syntax check: python -m py_compile lambda_function.py
2. Verify env vars in Lambda console (no plaintext secrets)
3. Zip function: zip -r lambda.zip lambda_function.py
4. Deploy: aws lambda update-function-code --function-name shipcaptaincrew --zip-file fileb://lambda.zip
5. Invoke test: aws lambda invoke --function-name shipcaptaincrew --payload '{"test": true}' response.json
6. Check CloudWatch logs for errors

# Frontend deployment checklist
1. Validate HTML syntax (W3C validator or linter)
2. Test locally against dev Lambda endpoint
3. Deploy to S3: aws s3 sync ./frontend/ s3://queenofsandiego.com/tools/shipcaptaincrew/
4. Invalidate CloudFront: aws cloudfront create-invalidation --distribution-id [DIST_ID] --paths "/*"
5. Verify cache headers and CORS policies

Each deployment was logged to /repos/agent_handoffs/projects/shipcaptaincrew.md with timestamps, changes, and test results.
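The Lambda half of the checklist above can be wrapped in a single script. This is a sketch, not the pipeline we ran: the `run` wrapper and `DRY_RUN` flag are additions for safe testing, and `DRY_RUN` defaults to 1 so the script prints commands instead of executing them.

```shell
#!/usr/bin/env bash
# Lambda deployment checklist as a script (illustrative). With DRY_RUN=1
# (the default here) each command is echoed rather than executed; set
# DRY_RUN=0 for a real deployment.
set -euo pipefail

FUNCTION_NAME="shipcaptaincrew"

run() {
  if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run python -m py_compile lambda_function.py    # 1. syntax check
run zip -r lambda.zip lambda_function.py       # 3. package
run aws lambda update-function-code \
  --function-name "$FUNCTION_NAME" \
  --zip-file fileb://lambda.zip                # 4. deploy
run aws lambda invoke --function-name "$FUNCTION_NAME" \
  --payload '{"test": true}' response.json     # 5. smoke test
```

Step 2 (verifying env vars in the console) stays manual by design: secrets should never pass through the deploy script.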

Key Technical Decisions

JWT Over Session Cookies

We chose stateless JWT tokens (issued on magic-link click and embedded in the URL) over server-side sessions because:

  • No session store required (Lambda is stateless)
  • Tokens are self-describing (role, user_id, event_id encoded in claims)
  • Easy to revoke via short code rotation in DynamoDB
  • Works across domain boundaries (SPA + API on same domain pattern)
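To make the trade-off concrete, here is a simplified stand-in for the issue/verify flow using only the standard library. A real deployment would use a proper JWT library (e.g. PyJWT) with exp/iat claims and the secret loaded from Secrets Manager; the `SECRET` value and function names here are illustrative.

```python
# Simplified JWT-style token flow (payload.signature) using stdlib HMAC.
# Illustrative only: production code would use a JWT library with expiry
# claims, and SECRET stands in for the value held in Secrets Manager.
import base64
import hashlib
import hmac
import json

SECRET = b"illustrative-secret"  # assumption: real secret comes from Secrets Manager

def issue_token(claims: dict) -> str:
    """Encode claims (role, user_id, event_id) and sign them."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str) -> dict:
    """Check the signature and return the claims, or raise ValueError."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    return json.loads(base64.urlsafe_b64decode(payload))
```

The self-describing claims are what let the Lambda stay stateless: every request carries its own role scope, and revocation reduces to rotating the short code in DynamoDB.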

Role Bucketing in DynamoDB

Instead of querying all events and filtering by role, we create role-specific indexes:


# Schema pattern
PK: EVENT#2024-05-12#USS_YORKTOWN
SK: ROLE#CAPTAIN (or ROLE#CREW#
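The access pattern this key layout enables is fetching one role bucket for an event without a scan. Against a live table that would be a boto3 Query with a PK equality condition plus `begins_with` on SK; the in-memory filter below illustrates the same key condition, with purely illustrative item data.

```python
# In-memory illustration of the role-bucket query: PK equality plus a
# begins_with prefix match on SK. Against DynamoDB this would be a Query
# with boto3's Key conditions. Item data is illustrative.
items = [
    {"PK": "EVENT#2024-05-12#USS_YORKTOWN", "SK": "ROLE#CAPTAIN",  "user": "ava"},
    {"PK": "EVENT#2024-05-12#USS_YORKTOWN", "SK": "ROLE#CREW#001", "user": "ben"},
    {"PK": "EVENT#2024-05-12#USS_YORKTOWN", "SK": "ROLE#CREW#002", "user": "cai"},
]

def query_role(pk: str, sk_prefix: str) -> list[dict]:
    """Equivalent of PK == pk AND begins_with(SK, sk_prefix)."""
    return [i for i in items if i["PK"] == pk and i["SK"].startswith(sk_prefix)]
```

Because the role is encoded in the sort key, "give me all crew for this event" is a single cheap Query rather than a filter over every attendee.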