Building a Multi-Stakeholder Reporting Pipeline: Executive Dashboard Architecture for Ship Captain Crew and Portfolio Companies

Over the past development cycle, we built and deployed a comprehensive executive reporting system designed to surface critical business intelligence across four distinct portfolio entities (JADA, QueenofSanDiego, QuickDumpNow, DangerousCentaur) plus three supporting infrastructure domains. This post walks through the technical architecture, infrastructure decisions, and deployment patterns we used to deliver simultaneous perspective reports to five executive roles.

What We Built

The core deliverable was a batch reporting pipeline that generates five specialized executive reports—each written from a distinct stakeholder perspective—and distributes them via Amazon SES. The reports cover:

  • CEO perspective: Full asset inventory, revenue gaps, equity risks, and 30-day remediation priorities across all portfolio companies
  • CTO perspective: Security audit, infrastructure cost analysis, UX shortfalls, and development cycle maturity assessment
  • CFO perspective: Burn rate modeling, capital deployment framework, monthly revenue targets, and break-even analysis
  • CMO perspective: Channel visibility matrix, blast email readiness analysis, OTA sequencing roadmap, and 30/60/90-day milestones
  • Accounting perspective: Revenue recognition gaps, chart of accounts specification, expense audit, and profitability roadmap through Q1 2027

Technical Architecture

Report Generation Pipeline

The primary implementation lives in two Python scripts:

  • /Users/cb/Documents/repos/tools/send_exec_reports.py — Production report generator
  • /Users/cb/Documents/repos/tools/send_exec_reports_2.py — Secondary implementation variant (likely for A/B testing or multi-format output)

Each script instantiates report objects for all five roles, populates them with data from DynamoDB and CloudWatch metrics, and pipes the formatted output through the SES sender with BCC logging to admin@queenofsandiego.com.
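
A minimal sketch of that loop; build_report is a placeholder for the real aggregation logic, and the subject and body details are illustrative rather than taken from the actual scripts:

import boto3

SENDER = "admin@queenofsandiego.com"
RECIPIENT = "c.b.ladd@gmail.com"
ROLES = ["CEO", "CTO", "CFO", "CMO", "Accounting"]

ses = boto3.client("ses", region_name="us-west-2")

def build_report(role):
    # Placeholder: the real implementation pulls DynamoDB records and
    # CloudWatch metrics and renders the role-specific narrative
    return f"{role} report body"

def send_report(role, body):
    # Every send BCCs the archive inbox for audit logging
    ses.send_email(
        Source=SENDER,
        Destination={"ToAddresses": [RECIPIENT], "BccAddresses": [SENDER]},
        Message={
            "Subject": {"Data": f"{role} Executive Report"},
            "Body": {"Text": {"Data": body}},
        },
    )

for role in ROLES:
    send_report(role, build_report(role))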

Email Infrastructure

We leveraged Amazon SES with the following configuration:

  • Sender: Verified domain identity at admin@queenofsandiego.com (hardcoded, avoiding environment variable sprawl)
  • Recipient: Primary delivery to c.b.ladd@gmail.com with automatic BCC to archive inbox
  • Region: us-west-2 (matching primary infrastructure footprint)
  • Authentication: IAM role with ses:SendEmail and ses:SendRawEmail permissions scoped to verified sender addresses

AWS credentials for the SES client are read from environment variables (stored securely in repos.env), and a pre-flight check confirms that both sender and recipient values are set before attempting delivery.
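
The check itself is a few lines; the variable names below are assumptions, since the post doesn't list the exact keys in repos.env:

import os
import sys

# Illustrative names; the real keys live in repos.env
REQUIRED = ["EXEC_REPORT_SENDER", "EXEC_REPORT_RECIPIENT",
            "AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    sys.exit(f"Aborting send; missing environment variables: {missing}")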

Portfolio Data Sources

Report generation aggregates data from multiple infrastructure layers:

  • DynamoDB tables: Event records, checklist state, role assignments (from Ship Captain Crew Lambda environment)
  • Project handoff documentation: /Users/cb/Documents/repos/agent_handoffs/projects/shipcaptaincrew.md and equivalent files for other domains, parsed for operational metadata
  • CloudWatch metrics: Lambda invocation counts, duration histograms, error rates across all deployed functions
  • S3 inventory: Frontend asset deployment counts, asset aging analysis to identify stale configurations
  • Route53 health checks: Domain availability and failover event log for uptime KPI calculation

Data queries are structured to minimize latency: all DynamoDB scans use projection expressions to fetch only required attributes, and CloudWatch metric queries use 5-minute aggregation windows rather than 1-minute to reduce API throttling risk.
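
Concretely, the two query shapes look roughly like this (table, function, and metric names are illustrative):

import boto3
from datetime import datetime, timedelta, timezone

dynamodb = boto3.client("dynamodb", region_name="us-west-2")
cloudwatch = boto3.client("cloudwatch", region_name="us-west-2")

# The projection expression limits each scanned item to the attributes
# the report actually uses, shrinking the response payload
events = dynamodb.scan(
    TableName="shipcaptaincrew-events",  # illustrative name
    ProjectionExpression="event_id, charter_date, checklist_state",
)["Items"]

# 300-second periods return one fifth the datapoints of 60-second
# periods, keeping metric queries well under API rate limits
now = datetime.now(timezone.utc)
errors = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "shipcaptaincrew-prod"}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
    Period=300,
    Statistics=["Sum"],
)["Datapoints"]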

Ship Captain Crew Lambda Hardening

Concurrently with the reporting work, we made significant updates to the primary Lambda function at:

/Users/cb/Documents/repos/sites/queenofsandiego.com/tools/shipcaptaincrew/lambda_function.py

Key changes:

  • JWT validation: Added explicit token validation on all protected routes; a minimal sketch follows this list. JWT_SECRET is read from Lambda environment variables (never hardcoded or committed to repos).
  • Event creation constraints: Validation of required fields (event_id, charter_date, participant_list) prevents orphaned records in DynamoDB.
  • Role state machine: Implemented explicit claim, designate, and release handlers to prevent concurrent role conflicts. Role transitions are logged to CloudWatch with timestamp and actor identity.
  • Waiver flow integration: Added on_hold status tracking for participants awaiting waiver completion before full event access.
  • Departure/return timing hooks: Integrated sunset time calculation (via external API) to auto-populate charter timing fields and validate against harbor constraints.
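
A minimal sketch of the token check and the claim handler's conditional write, assuming PyJWT and illustrative table and attribute names:

import os
import boto3
import jwt  # PyJWT

JWT_SECRET = os.environ["JWT_SECRET"]  # from Lambda env vars, never hardcoded
table = boto3.resource("dynamodb", region_name="us-west-2").Table("shipcaptaincrew-roles")  # illustrative

def validate_token(headers):
    # Reject missing, expired, or tampered tokens before any handler runs
    token = headers.get("Authorization", "").removeprefix("Bearer ")
    try:
        return jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return None

def claim_role(event_id, role, actor):
    # The conditional write succeeds only if nobody holds the role yet,
    # so two concurrent claims cannot both win
    try:
        table.update_item(
            Key={"event_id": event_id, "role": role},
            UpdateExpression="SET holder = :actor",
            ConditionExpression="attribute_not_exists(holder)",
            ExpressionAttributeValues={":actor": actor},
        )
        # Lambda stdout lands in CloudWatch Logs, covering the
        # timestamped transition log mentioned above
        print(f"role claimed: {event_id}/{role} -> {actor}")
        return True
    except table.meta.client.exceptions.ConditionalCheckFailedException:
        return False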

The Lambda code was syntax-checked before deployment with Python's py_compile module, then zipped and deployed to the prod environment, with a work-log entry documenting the changes.

Frontend Deployment and Distribution

Frontend assets live in /Users/cb/Documents/repos/sites/queenofsandiego.com/tools/shipcaptaincrew/frontend/ with multiple iterations on index.html during this cycle:

  • CloudFront distribution: Configured to serve from S3 bucket with automatic cache invalidation on each deploy
  • Asset versioning: Added query-string-based versioning to CSS/JS includes to bypass aggressive browser caching (one possible mechanism is sketched after this list)
  • Timing panel elements: HTML components for displaying departure/return times, sunset/sunrise, and waiver deadline countdowns
  • Checklist modal: Modal UI for claim, designate, and release role operations with JWT token inclusion in POST payloads
  • Guest page flow: Magic link redemption landing page that accepts short codes, validates against DynamoDB lookup table, and auto-populates user context
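
The post doesn't specify how the asset versioning is implemented; one minimal approach is a pre-deploy rewrite step that stamps each include with a content hash, so the URL changes exactly when the file does:

import hashlib
import pathlib
import re

frontend = pathlib.Path("frontend")
index = frontend / "index.html"
html = index.read_text()

def stamp(match):
    # Hash the referenced asset so the query string changes only
    # when the file contents change
    asset = frontend / match.group(1).lstrip("/")
    digest = hashlib.sha256(asset.read_bytes()).hexdigest()[:8]
    return f"{match.group(1)}?v={digest}"

# Rewrites includes like app.css?v=dev or js/app.js?v=1 in place
html = re.sub(r"([\w./-]+\.(?:css|js))\?v=\w*", stamp, html)
index.write_text(html)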

Each deployment, covering both the Lambda and the frontend, follows this sequence:


# Syntax check
python -m py_compile lambda_function.py

# Zip Lambda
zip -r lambda_deployment.zip lambda_function.py

# Deploy to AWS
aws lambda update-function-code \
  --function-name shipcaptaincrew-prod \
  --zip-file fileb://lambda_deployment.zip \
  --region us-west-2

# Deploy frontend to S3
aws s3 sync frontend/ s3://queenofsandiego-shipcaptaincrew-frontend/ --delete

# Invalidate CloudFront cache
aws cloudfront create-invalidation \
  --distribution-id [DISTRIBUTION_ID] \
  --paths "/*"

Authentication and Magic Links

We implemented JWT-based authentication with magic link onboarding:

  • Magic link generation: Short codes (6-8 alphanumeric) are generated, stored in DynamoDB with TTL of 24 hours, and sent via SES email templates
  • Token redemption: Guests submit the short code on the landing page, which validates it against the DynamoDB lookup table and auto-populates their user context (both halves of the flow are sketched below)
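
A sketch of generation and redemption, with illustrative table and attribute names:

import secrets
import string
import time
import boto3

table = boto3.resource("dynamodb", region_name="us-west-2").Table("shipcaptaincrew-magic-links")  # illustrative
ALPHABET = string.ascii_uppercase + string.digits

def create_magic_link(email):
    # Short code in the 6-8 character range described above; 8 here
    code = "".join(secrets.choice(ALPHABET) for _ in range(8))
    table.put_item(Item={
        "code": code,
        "email": email,
        # DynamoDB TTL attribute: epoch seconds, 24 hours out
        "expires_at": int(time.time()) + 24 * 3600,
    })
    return code  # embedded in the SES email template's link

def redeem_code(code):
    item = table.get_item(Key={"code": code}).get("Item")
    # TTL deletion is lazy, so re-check expiry on read
    if not item or item["expires_at"] < time.time():
        return None
    return item["email"]  # caller issues a JWT for this identity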