Building a Multi-Domain Executive Reporting System: Infrastructure, SES Integration, and Real-Time Dashboard Automation
Over the past development session, I built and deployed a comprehensive executive reporting system that synthesizes operational data across four distinct business entities (JADA, QueenofSanDiego, QuickDumpNow, DangerousCentaur) and feeds structured intelligence directly to leadership via AWS SES. This post covers the technical architecture, infrastructure decisions, and automation patterns that make real-time executive visibility possible without manual overhead.
What Was Built
The core deliverable is a Python-based reporting engine that generates five independent executive reports—each written from a different stakeholder perspective (CEO, CTO, CFO, CMO, Accounting Officer)—and distributes them via AWS SES to multiple recipients simultaneously. A sixth variant (auxiliary domain audit) followed. The system integrates with existing handoff documentation, DynamoDB project metadata, and environment variable configuration to produce actionable, role-specific insights without exposing raw data or sensitive credentials.
Technical Architecture
Report Generation Pipeline
The primary implementation lives in /Users/cb/Documents/repos/tools/send_exec_reports.py. The script follows a data-gathering → templating → SES dispatch pattern:
1. Load environment variables from repos.env (SES credentials, verified sender addresses)
2. Read project handoff documentation (Markdown files in agent_handoffs/projects/)
3. Read infrastructure metadata (Lambda env vars, S3 bucket configs, Route53 zones)
4. Generate five role-specific report bodies using f-string templates
5. Batch-send via boto3 SES client with BCC to admin@queenofsandiego.com
6. Log results and track delivery status
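The steps above can be sketched as a small orchestration skeleton. This is an illustrative outline, not the actual internals of send_exec_reports.py — the function names, registry, and context shape are assumptions:

```python
# Hypothetical report registry: role name -> generator function.
# Each generator returns an HTML string for that stakeholder.
def generate_ceo_report(context):
    return f"<h1>CEO Report</h1><p>Entities tracked: {len(context['entities'])}</p>"

REPORT_GENERATORS = {'CEO': generate_ceo_report}

def gather_context():
    """Collect inputs: env vars, handoff docs, infra metadata (stubbed here)."""
    return {'entities': ['JADA', 'QueenofSanDiego', 'QuickDumpNow', 'DangerousCentaur']}

def run_pipeline(send_fn):
    """Generate each role-specific report and hand it to a send callable."""
    context = gather_context()
    results = {}
    for role, generate in REPORT_GENERATORS.items():
        html = generate(context)
        results[role] = send_fn(role, html)
    return results
```

Injecting the send function keeps the generation logic testable without touching SES.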
Each report is generated as a separate function that constructs narrative-driven HTML content. For example, the CEO report iterates through all four entities and explicitly lists:
- Complete asset inventory (domains, Lambda functions, S3 buckets, DynamoDB tables)
- Eight critical shortfalls (empty pipeline, no revenue tracking, missing billing models, broken funnels)
- Nine missing KPIs (customer acquisition cost, lifetime value, churn rate, booking conversion)
- A prioritized 30-day improvement agenda with estimated impact per action
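List-heavy sections like these lend themselves to plain f-string templating. A minimal sketch of rendering one section to HTML — the helper and the sample shortfall items are illustrative, not the report's actual code or content:

```python
def render_section(title, items):
    """Render a report section as an HTML heading plus bullet list."""
    bullets = ''.join(f'<li>{item}</li>' for item in items)
    return f'<h2>{title}</h2><ul>{bullets}</ul>'

shortfalls = ['Empty pipeline', 'No revenue tracking']
html = render_section('Critical Shortfalls', shortfalls)
```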
The CTO report similarly audits the entire tech stack across domains—examining security posture (hardcoded credentials, plaintext environment files, unauthenticated API endpoints), cost allocation (~$50–84/month AWS spend with ~$25/month optimization opportunities), UX gaps (missing analytics, no availability calendar integration), and dev cycle friction (no CI pipeline, no staging environment, no automated rollback).
SES Integration and Verification
AWS SES requires verified sender addresses. The system uses admin@queenofsandiego.com as the primary sender (pre-verified in the SES console for the us-west-2 region). Environment variables define:
- SES_SENDER_EMAIL: Primary send-from address
- SES_RECIPIENTS: Comma-separated list of primary recipients
- SES_BCC_ADDRESS: Archive address for compliance/audit
- AWS_REGION: us-west-2 (where SES is verified)
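A sketch of consuming these variables — the variable names come from repos.env, but the parsing helper is an assumption about the script's internals:

```python
import os

def parse_recipients(raw):
    """Split a comma-separated recipient string, dropping blanks and whitespace."""
    return [addr.strip() for addr in raw.split(',') if addr.strip()]

sender = os.environ.get('SES_SENDER_EMAIL', 'admin@queenofsandiego.com')
recipients = parse_recipients(os.environ.get('SES_RECIPIENTS', ''))
```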
The boto3 SES client sends HTML-formatted emails with proper headers:
```python
client = boto3.client('ses', region_name='us-west-2')
response = client.send_email(
    Source=sender_email,
    Destination={'ToAddresses': recipients, 'BccAddresses': [bcc_address]},
    Message={
        'Subject': {'Data': subject, 'Charset': 'UTF-8'},
        'Body': {'Html': {'Data': html_body, 'Charset': 'UTF-8'}}
    }
)
```
Sending 5 reports to 1–3 recipients each (with BCC) stays well below SES quotas (default production limits: 50,000 emails per 24 hours and 14 emails/second; sandbox accounts are far more restricted).
Infrastructure and Data Sources
Project Handoff Files
The system reads from /Users/cb/Documents/repos/agent_handoffs/projects/*.md to extract operational context. Key files parsed include:
- shipcaptaincrew.md: Event scheduling, role management, waiver automation, magic link authentication
- Domain-specific handoffs: billing models, revenue streams, user funnel metrics
These are parsed as Markdown, extracting structured sections (## Financials, ## Technical Debt, ## User Funnel) using regex patterns. This approach avoids database queries and keeps reporting logic self-contained.
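Section extraction of that kind can be done with a single regex. A minimal sketch — the section names mirror the headings mentioned above, but the real script's patterns may differ:

```python
import re

def extract_section(markdown, heading):
    """Return the body of a '## heading' section, up to the next '## ' or end of file."""
    pattern = rf'^## {re.escape(heading)}\s*\n(.*?)(?=^## |\Z)'
    match = re.search(pattern, markdown, re.MULTILINE | re.DOTALL)
    return match.group(1).strip() if match else None

doc = "## Financials\nMRR: $0\n\n## Technical Debt\nNo CI pipeline\n"
```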
CloudFront and S3 Deployment Context
The reports reference infrastructure deployed during the session:
- shipcaptaincrew Lambda: Updated 11 times during development; syntax-checked before each deployment
- shipcaptaincrew frontend: Static assets (index.html) deployed to S3 and cached via CloudFront with cache invalidation after each change
- Deployment pattern: Zip Lambda code, upload via the AWS Lambda console or CLI; deploy frontend to S3 with `aws s3 sync` and invalidate the CloudFront distribution
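The same deployment pattern can be scripted with boto3 instead of the CLI. A hedged sketch — the function name, bucket, and distribution ID below are placeholders, not the project's real identifiers, and the invalidation helper is kept pure so it can be reasoned about separately:

```python
import time

def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload CloudFront's create_invalidation expects.
    CallerReference must be unique per request; a timestamp is a common choice."""
    return {
        'Paths': {'Quantity': len(paths), 'Items': list(paths)},
        'CallerReference': str(time.time()),
    }

def deploy(lambda_client, s3_client, cf_client, zip_bytes):
    """Push Lambda code, upload the frontend, then invalidate the CDN cache.
    Resource names here are illustrative placeholders."""
    lambda_client.update_function_code(FunctionName='shipcaptaincrew', ZipFile=zip_bytes)
    s3_client.upload_file('index.html', 'example-frontend-bucket', 'index.html')
    cf_client.create_invalidation(
        DistributionId='EXAMPLEDISTID',
        InvalidationBatch=build_invalidation_batch(['/*']),
    )
```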
Exact resource names are not exposed in the reports themselves but are referenced in internal handoff documentation for traceability.
Key Design Decisions
Why Five Reports?
Five independent reports (CEO, CTO, CFO, CMO, Accounting Officer) serve different decision-makers with different information needs. A monolithic report would bury critical insights in irrelevant detail. For example:
- CEO needs: asset inventory, shortfall prioritization, revenue impact
- CTO needs: security gaps, cost analysis, dev cycle friction, UX gaps
- CFO needs: burn rate, break-even analysis, capital deployment framework
- CMO needs: channel visibility, campaign sequencing, 30/60/90-day milestones
- Accounting Officer needs: chart of accounts, revenue recognition issues, expense audit
A sixth report (auxiliary domains: 3028 51st St Rental, Expert Yacht Delivery, DangerousCentaur Client Portfolio) emerged as a billing-gap audit, validating the pattern.
Why Markdown Handoffs Over Database Queries?
The system parses human-authored Markdown handoff files rather than querying a dedicated database. Benefits:
- Low latency: File reads are faster than database round-trips for small datasets
- Version control: Handoffs live in Git; report inputs are auditable
- Human-friendly: Non-technical stakeholders can edit handoff docs directly
- No schema migration: Markdown structure is flexible and versioned
Tradeoff: scaling to 100+ entities would warrant migrating to DynamoDB with proper indexing.
Why BCC Instead of Separate Send Calls?
Using SES BCC (rather than adding admin@queenofsand