Building a Multi-Domain C-Suite Reporting Engine: Infrastructure, Security Hardening, and Executive Visibility at Scale
Over a focused development sprint, we built and deployed a comprehensive executive reporting system that synthesizes operational, financial, technical, and strategic data across four distinct business entities (JADA, QueenofSandiego, QuickDumpNow, DangerousCentaur) plus three ancillary revenue streams. This post details the architecture, infrastructure decisions, and technical implementation that enables non-technical stakeholders to make data-driven decisions while maintaining security and auditability.
What Was Built
The system generates five specialized reports, each tailored to a specific executive role:
- CEO Report: Cross-entity asset inventory, profitability gaps, revenue recognition issues, and a 30-day prioritized action plan
- CTO Report: Stack-by-stack security audit, cost optimization opportunities, UX shortfalls, and dev cycle maturity assessment
- CFO Report: Burn rate modeling, capital deployment framework, break-even analysis, and monthly revenue targets
- CMO Report: Channel-by-channel visibility matrix, OTA sequencing strategy, and 30/60/90-day marketing milestones
- Accounting Report: Chart of accounts audit, revenue recognition policy, expense categorization, and Q1 2027 profitability roadmap
Three additional domain-specific reports were added: 3028 51st St Rental (property management), Expert Yacht Delivery (logistics operations), and DangerousCentaur Client Portfolio (billing gap audit).
Technical Architecture
Email Transport and SES Integration
The report delivery mechanism uses Amazon SES (Simple Email Service) with verified sender addresses stored in environment variables. The core implementation lives in two Python modules:
- /Users/cb/Documents/repos/tools/send_exec_reports.py — primary report generator
- /Users/cb/Documents/repos/tools/send_exec_reports_2.py — secondary variant for failover/batch processing
The system reads credentials from repos.env, which is git-ignored and loaded at runtime. SES sender verification was validated before deployment to prevent bounce/complaint penalties that could impact deliverability.
Why SES? Cost efficiency (first 62K emails/month free tier), tight AWS integration with Lambda, built-in bounce/complaint handling, and no third-party service dependency. We avoided SendGrid or Mailgun to reduce attack surface and external API calls.
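The SES call itself is a thin wrapper around boto3. A minimal sketch of the send path; the environment variable names and region here are illustrative, not taken from the actual repos.env:

```python
import os

def build_report_email(sender: str, recipient: str, subject: str, body: str) -> dict:
    """Assemble the kwargs for ses.send_email from generated report text."""
    return {
        "Source": sender,
        "Destination": {"ToAddresses": [recipient]},
        "Message": {
            "Subject": {"Data": subject, "Charset": "UTF-8"},
            "Body": {"Text": {"Data": body, "Charset": "UTF-8"}},
        },
    }

if __name__ == "__main__":
    import boto3  # available in the Lambda runtime

    # Sender/recipient come from the git-ignored repos.env, loaded into the
    # environment at startup (variable names below are hypothetical)
    kwargs = build_report_email(
        os.environ["SES_SENDER"],
        os.environ["CEO_REPORT_RECIPIENT"],
        "CEO Report",
        "...report body...",
    )
    boto3.client("ses", region_name="us-west-2").send_email(**kwargs)
```

Keeping the message assembly separate from the network call makes the formatting trivially unit-testable without AWS credentials.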
Data Collection and Report Generation
Reports are generated by aggregating data from multiple sources:
- DynamoDB tables: Event data, checklist state, user roles, and booking records across all domains
- Environment variables: Financial targets, KPI baselines, and organizational structure
- Project handoff documents: Stored in /Users/cb/Documents/repos/agent_handoffs/projects/, manually parsed for strategic context
- Infrastructure inventory: S3 bucket configs, Lambda functions, CloudFront distributions, and Route53 zones
Each report is generated as structured text (markdown-flavored) with clear sections, numbered action items, and quantified metrics. The CEO report, for example, lists 8 critical shortfalls with specific examples (e.g., "JADA has zero OTA listings; potential $40K+/month revenue leak"), 9 missing KPIs (bookings conversion rate, customer acquisition cost, churn by segment), and a dated action plan with owner assignments.
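The report shape described above (sections, quantified metrics, numbered action items with owners and due dates) can be sketched as a small renderer. The types and field names here are hypothetical, not the actual generator's API:

```python
from dataclasses import dataclass

@dataclass
class ActionItem:
    text: str
    owner: str
    due: str  # ISO date, e.g. "2025-02-15"

def render_report_section(title: str, metrics: dict, actions: list) -> str:
    """Render one markdown-flavored section: metrics first, then a
    numbered action plan with owner assignments and due dates."""
    lines = [f"## {title}", ""]
    for name, value in metrics.items():
        lines.append(f"- {name}: {value}")
    lines.append("")
    for i, item in enumerate(actions, start=1):
        lines.append(f"{i}. {item.text} (owner: {item.owner}, due: {item.due})")
    return "\n".join(lines)
```

Emitting plain structured text keeps the reports readable in any email client without an HTML templating dependency.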
Infrastructure and Deployment
Primary Domain Stack (QueenofSandiego)
The flagship property charter site at /Users/cb/Documents/repos/sites/queenofsandiego.com/ runs on a serverless architecture:
- API Layer: Lambda function at tools/shipcaptaincrew/lambda_function.py (19 major edits during this sprint) handling event CRUD, checklist management, role claims, waiver processing, and guest authentication
- Frontend: Single-page app at tools/shipcaptaincrew/frontend/index.html (4 major revisions) with timing panels, checklist UI, magic link auth, and role designation flows
- Storage: DynamoDB for transactional data; S3 for frontend assets; CloudFront for CDN + cache invalidation
The Lambda function implements JWT-based authentication (secret stored in AWS Secrets Manager, never in code), magic link email flows for guest access, and direct DynamoDB writes for audit trails. All deployments are zipped and uploaded to AWS Lambda via the AWS CLI.
Security Hardening Performed
The CTO report identified six critical security gaps; remediation work was prioritized:
- Hardcoded Stripe keys removed: Keys now stored in AWS Secrets Manager with Lambda IAM role permissions
- Environment file isolation: repos.env is git-ignored and never committed. SES credentials are read at Lambda startup, not hardcoded
- Unauthenticated endpoints secured: Guest endpoints (waiver page, RSVP) now require either a valid magic link token or a JWT with appropriate claims
- WAF deployment placeholder: Route for AWS WAF on CloudFront distribution documented but not yet enabled (requires cross-team approval)
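Reading the Stripe key at runtime can follow the usual Lambda pattern of caching the Secrets Manager call across warm invocations; the secret field name below is an assumption, not the actual key:

```python
import json
from functools import lru_cache

@lru_cache(maxsize=None)
def get_secret(secret_id: str) -> dict:
    """Fetch a secret once per container; the cache persists across warm
    Lambda invocations, so Secrets Manager is only hit on cold start."""
    import boto3  # imported lazily so the module loads without AWS credentials
    resp = boto3.client("secretsmanager").get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])

def stripe_key(secret: dict) -> str:
    """Pull the Stripe API key out of the secret payload
    ("STRIPE_SECRET_KEY" is a hypothetical field name)."""
    return secret["STRIPE_SECRET_KEY"]
```

The Lambda's IAM role needs `secretsmanager:GetSecretValue` on that specific secret ARN and nothing broader.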
Deployment Process
Lambda and frontend deployments follow a standard pattern:
```bash
# Syntax-check the Python Lambda before packaging
python3 -m py_compile lambda_function.py

# Zip and deploy to AWS Lambda (Python-only bundle; no node_modules)
zip -r lambda_deployment.zip lambda_function.py
aws lambda update-function-code \
  --function-name ShipCaptainCrew \
  --zip-file fileb://lambda_deployment.zip

# Deploy frontend to S3 and invalidate CloudFront
aws s3 sync frontend/ s3://queenofsandiego-cdn/shipcaptaincrew/ --delete
aws cloudfront create-invalidation \
  --distribution-id E1A2B3C4D5E6F7 \
  --paths "/*"
```
Each deployment is logged in the project handoff document (agent_handoffs/projects/shipcaptaincrew.md) with timestamp, changes, and any breaking migrations (e.g., new DynamoDB attributes).
Key Decisions and Trade-offs
- Lambda over containerized services: Lower operational overhead, automatic scaling, and tight CloudWatch integration. Trade-off: harder to test locally; mitigated by pre-deployment syntax checks
- Single Lambda function instead of microservices: Reduced API Gateway cost, simplified auth context (JWT claims available in all handlers). At current scale (~100 events/month), monolithic is faster to iterate on than distributed
- Magic link + JWT hybrid: Magic links for guest onboarding (no account creation friction); JWTs for stateless auth within the app. Tokens are short-lived (1 hour default) to limit token compromise exposure
- DynamoDB over RDS: Event data is accessed by event_id (partition key) and date range (sort key); DynamoDB's query patterns match this perfectly. Schemaless design enables rapid feature iteration (new checklist fields, new attributes) without schema migrations
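That partition-key-plus-date-range access pattern maps directly onto a DynamoDB Query. A sketch of the key-condition shape; the table and attribute names (including the event_date sort key) are illustrative:

```python
def build_event_query(table_name: str, event_id: str, start: str, end: str) -> dict:
    """Build the kwargs for a DynamoDB Query: exact partition-key match
    plus a BETWEEN range on the date sort key."""
    return {
        "TableName": table_name,
        "KeyConditionExpression": "event_id = :eid AND event_date BETWEEN :start AND :end",
        "ExpressionAttributeValues": {
            ":eid": {"S": event_id},
            ":start": {"S": start},
            ":end": {"S": end},
        },
    }

if __name__ == "__main__":
    import boto3  # available in the Lambda runtime
    resp = boto3.client("dynamodb").query(
        **build_event_query("Events", "evt-123", "2025-01-01", "2025-01-31")
    )
```

Because ISO-8601 dates sort lexicographically, string sort keys give correct range queries with no date parsing on the database side.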