Building a Multi-Stakeholder Intelligence System: Executive Reporting Infrastructure for Portfolio Companies
In a recent development session, we implemented a comprehensive executive reporting system designed to provide stakeholder-specific intelligence across a four-entity portfolio (JADA, QueenofSanDiego, QuickDumpNow, DangerousCentaur) plus three ancillary revenue streams. This post details the technical architecture, decision-making process, and infrastructure patterns that enable automated, role-aligned reporting at scale.
What Was Built
The system generates eight parallel executive reports, each tailored to a specific stakeholder persona with distinct information needs:
- CEO Report — Asset inventory, revenue tracking gaps, equity risk vectors, and 30-day priority roadmap
- CTO Report — Stack audits, security hardening checklist, cost optimization analysis, and infrastructure improvements
- CFO Report — Burn rate modeling, capital deployment framework, break-even analysis, and monthly revenue targets
- CMO Report — Channel visibility matrix, marketing sequencing (OTA/email blast), and 90-day campaign milestones
- Accounting/Finance Officer Report — Chart of accounts, revenue recognition policy, expense audit, and Q1 2027 profitability roadmap
- VP of Operations (3028 51st St Rental) — Occupancy rates, maintenance pipeline, tenant transition risk, and unit economics
- VP of Business Development (Expert Yacht Delivery) — Market positioning, B2B channel development, and service delivery KPIs
- Chief Revenue Officer (Client Portfolio Audit) — Billing completeness across all accounts, receivables aging, and contract gaps
Technical Architecture
The reporting infrastructure is built on two primary Python scripts deployed to the development environment, with SES as the delivery mechanism:
Report Generation Pipeline
Primary scripts:
- `/Users/cb/Documents/repos/tools/send_exec_reports.py` — Main reporting engine
- `/Users/cb/Documents/repos/tools/send_exec_reports_2.py` — Secondary/supplemental reports
Each script reads from a shared configuration source and generates formatted HTML email bodies. The architecture pattern here is template-driven report generation: rather than hardcoding report structure, we maintain a data model that maps stakeholder roles to information categories, then render each into a templated HTML email.
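The role-to-categories mapping can be sketched roughly as follows. The role names come from the report list above, but the section labels, the `REPORT_SECTIONS` structure, and the `render_report` helper are illustrative, not the actual data model inside `send_exec_reports.py`:

```python
# Sketch of the template-driven pattern: map stakeholder roles to information
# categories, then render each into an HTML email body. Section names are
# examples only, condensed from the report list above.
from string import Template

REPORT_SECTIONS = {
    "CEO": ["Asset Inventory", "Revenue Tracking Gaps", "30-Day Priorities"],
    "CTO": ["Stack Audit", "Security Hardening", "Cost Optimization"],
    "CFO": ["Burn Rate", "Break-Even Analysis", "Monthly Revenue Targets"],
}

EMAIL_TEMPLATE = Template(
    "<html><body><h1>$role Report</h1>$sections</body></html>"
)

def render_report(role: str, content: dict[str, str]) -> str:
    """Render one stakeholder's report as an HTML email body."""
    sections = "".join(
        f"<h2>{name}</h2><p>{content.get(name, 'No data this week.')}</p>"
        for name in REPORT_SECTIONS[role]
    )
    return EMAIL_TEMPLATE.substitute(role=role, sections=sections)
```

Adding a ninth stakeholder under this pattern means adding one dictionary entry, not a new script.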
The key design decision was separation of concerns. Rather than one monolithic script that generates all eight reports, we split functionality:
- Primary script handles the four core business entities (JADA, QOS, QDN, DC)
- Secondary script extends to ancillary operations and specialized audits
- Both read configuration from `repos.env` for sender verification and recipient lists
Email Delivery via SES
Amazon SES was chosen for three reasons:
- Cost — At $0.10 per 1,000 emails, 8 reports weekly (~416 emails per year) costs roughly $0.04 annually, effectively free at this volume
- Deliverability — SES maintains reputation with major ISPs when domain/sender are properly verified
- Integration — Already verified sender domain; no additional infrastructure needed
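As a back-of-envelope check on the SES pricing point, the arithmetic works out like this:

```python
# SES cost sanity check at current published pricing of $0.10 per 1,000 emails.
price_per_email = 0.10 / 1000      # dollars per email
emails_per_year = 8 * 52           # 8 reports weekly
annual_cost = emails_per_year * price_per_email
# 416 emails/year at $0.0001 each: negligible at this scale.
```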
The sender address hardcoded in the script is admin@queenofsandiego.com, which is already registered as a verified SES sender. This avoids credential management complexity—the Lambda execution role or local script environment inherits SES permissions via IAM.
Recipient configuration is stored in repos.env as environment variables (e.g., EXEC_REPORT_TO, TECH_REPORT_TO, etc.). This allows rotation of recipient lists without code changes, critical when onboarding new team members.
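A hedged sketch of the delivery step: the function below only assembles the arguments for an SES `send_email` call, so its shape is testable without AWS credentials. The `build_ses_request` helper and the region in the comment are assumptions, not the scripts' actual code:

```python
# Assemble the keyword arguments for boto3's SES send_email call.
# The sender and BCC address are the verified SES sender named in this post.
def build_ses_request(role: str, html_body: str, recipients: list[str],
                      bcc: str = "admin@queenofsandiego.com") -> dict:
    return {
        "Source": "admin@queenofsandiego.com",  # verified SES sender
        "Destination": {"ToAddresses": recipients, "BccAddresses": [bcc]},
        "Message": {
            "Subject": {"Data": f"Weekly {role} Report"},
            "Body": {"Html": {"Data": html_body}},
        },
    }

# With boto3 and IAM credentials in place, sending would then be roughly:
#   boto3.client("ses", region_name="us-west-2").send_email(**request)
# (region is an assumption; it comes from the AWS CLI configuration)
```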
Data Sources and Handoff Integration
The reports pull intelligence from multiple project handoff documents stored in /Users/cb/Documents/repos/agent_handoffs/projects/:
- `shipcaptaincrew.md` — Event pipeline, charter bookings, crew availability, revenue metrics
- Additional project files — Financial tracking, operational status, technical debt inventory
The pattern here is handoff-as-single-source-of-truth. Rather than querying disparate systems (Stripe API, Google Calendar, email inboxes), we maintain human-readable project status documents. The reporting system then parses these for structured insights.
This trades off real-time accuracy for operational simplicity. A production system would query live APIs; this development setup prioritizes speed-to-insight over latency guarantees.
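An illustrative version of that parsing step, assuming the handoff documents use Markdown headings and bullet lists; the `parse_handoff` helper and the section names in the test are hypothetical:

```python
# Pull Markdown section headings and their bullet lines into a dict,
# as a stand-in for the handoff-document parsing described above.
import re

def parse_handoff(markdown: str) -> dict[str, list[str]]:
    sections: dict[str, list[str]] = {}
    current = None
    for line in markdown.splitlines():
        heading = re.match(r"#+\s+(.*)", line)
        if heading:
            current = heading.group(1).strip()
            sections[current] = []
        elif current and line.strip().startswith("- "):
            sections[current].append(line.strip()[2:])
    return sections
```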
Infrastructure and Deployment
The reporting system is deployed to the local development environment for now, with hooks for future AWS Lambda integration:
- Local execution — Scripts run on developer machine via cron or manual trigger
- SES integration — Uses IAM credentials from AWS CLI configuration; no hardcoded keys
- BCC automation — All reports are BCC'd to `admin@queenofsandiego.com` for an audit trail
When scaling to production, the same scripts would be wrapped in a Lambda function with an EventBridge cron trigger (e.g., Sundays at 6 AM UTC), storing report copies in S3 for archival and compliance purposes.
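That production wrapper could look roughly like this; the handler, the `send_fn` hook, and the exact cron expression are assumptions sketched from the description above, not deployed code:

```python
# Sketch of an EventBridge-triggered Lambda entry point for the pipeline.
# EventBridge schedule (assumption): cron(0 6 ? * SUN *) -> Sundays 06:00 UTC.
import json

def lambda_handler(event, context, send_fn=None):
    """Run the report pipeline and return a summary for CloudWatch logs."""
    send_fn = send_fn or (lambda: 8)  # placeholder for the real send routine
    sent = send_fn()
    return {"statusCode": 200, "body": json.dumps({"reports_sent": sent})}
```

Keeping the send routine injectable (`send_fn`) lets the same handler be exercised locally before any AWS deployment.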
Key Technical Decisions
Why Not a Database?
Report data currently lives in Markdown files and environment variables. A traditional BI tool (Looker, Tableau, Metabase) would require ETL infrastructure. The handoff-document pattern is better suited for early-stage portfolios where:
- Data rarely exceeds 1 MB in size
- Update frequency is weekly, not real-time
- Stakeholders value narrative context alongside numbers
Why HTML Email Over PDF/Dashboards?
Email was chosen as the distribution mechanism because:
- Passive delivery — No login required; report lands in inbox
- Searchability — Email archives are indexed; a month-old insight is easier to find with an inbox search than in a cloud dashboard
- Mobile-friendly — HTML email renders on any device; no app installation
- Forwarding friction — Intentional design: sensitive reports should not be easily forwarded externally
Configuration Management
The use of repos.env for sender/recipient lists follows the 12-factor app principle: environment-specific config is externalized from code. This allows the same script to send to different recipients (dev test group vs. production stakeholders) by changing one environment file.
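In practice that pattern reduces to a few lines; `EXEC_REPORT_TO` is named above, while the comma-separated address format and the `recipients_from_env` helper are assumptions:

```python
# 12-factor recipient config: the same code sends to whichever list the
# environment provides. Comma-separated values are an assumed convention.
import os

def recipients_from_env(var: str = "EXEC_REPORT_TO") -> list[str]:
    raw = os.environ.get(var, "")
    return [addr.strip() for addr in raw.split(",") if addr.strip()]
```

Swapping a dev test group for production stakeholders is then a one-line change in `repos.env`, with no code edit.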
What's Next
Short-term improvements:
- Lambda deployment — Wrap scripts in AWS Lambda with EventBridge triggers for automation
- S3 archival — Store HTML report copies in an S3 bucket (e.g., `s3://queenofsandiego-reports/exec/`) for compliance