Building a Multi-Stakeholder Executive Reporting System: Infrastructure, Security, and Deployment Patterns
Over the past development session, we built and deployed a comprehensive executive reporting infrastructure across four distinct business entities (JADA, QueenofSanDiego, QuickDumpNow, DangerousCentaur) plus three ancillary domains. This required integrating AWS SES email delivery, Lambda-based report generation, S3 artifact storage, and secure credential management—all while maintaining strict separation of concerns and audit trails. Here's how we architected it.
The Reporting Architecture: Five Distinct Perspectives
The core insight was that different stakeholder personas need completely different data cuts from the same underlying asset inventory. We generated five parallel reports:
- CEO Report: Asset inventory, revenue pipeline gaps, equity risk analysis, KPI definitions, 30-day prioritized agenda
- CTO Report: Stack-by-stack security audit, cost analysis, UX shortfalls, CI/CD gaps, 10 prioritized engineering actions
- Accounting Report: Revenue recognition framework, chart of accounts, expense audit, profitability roadmap to Q1 2027
- CMO Report: Channel attribution matrix, blast campaign ROI modeling, OTA sequencing strategy, 30/60/90-day milestones
- CFO Report: Burn rate modeling, capital deployment tiers, break-even analysis, monthly revenue targets
Plus three additional domain-specific audits: 3028 51st St Rental operations, Expert Yacht Delivery billing gaps, and DangerousCentaur client portfolio reconciliation.
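The "same inventory, different cuts" idea can be sketched as a small persona registry. This is a minimal illustration, not the production code: the stub generators and the `build_reports` helper are assumptions, standing in for the real per-persona generators described later in this post.

```python
# Minimal sketch: every persona's report derives from one shared inventory.
# The stub generators below are placeholders for the real implementations.
def ceo_cut(assets):
    return f"CEO cut: {len(assets)} assets, revenue focus"

def cto_cut(assets):
    return f"CTO cut: {len(assets)} assets, security focus"

PERSONA_CUTS = {"ceo": ceo_cut, "cto": cto_cut}  # accounting, cmo, cfo omitted

def build_reports(asset_inventory):
    """Produce every persona's cut from the same underlying inventory."""
    return {p: cut(asset_inventory) for p, cut in PERSONA_CUTS.items()}
```

Keeping the mapping in one dict means adding a sixth persona is a one-line change.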
Email Delivery: SES Configuration and Verification
Rather than building a custom mail service, we leveraged AWS SES, which was already partially configured in the environment. The key challenge: verifying sender identities and managing credentials securely.
All SES credentials are stored in /Users/cb/Documents/repos/repos.env, referenced as environment variables in the Python execution context. The sender address admin@queenofsandiego.com is verified at the SES account level, which allows the hardcoded address to work without additional per-email verification overhead.
The delivery script (/Users/cb/Documents/repos/tools/send_exec_reports.py) iterates over a recipient list and sends each report as a multi-part MIME message:
```python
# Pseudo-code pattern (no actual credentials shown)
import boto3
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

ses_client = boto3.client('ses', region_name='us-west-2')
sender_address = 'admin@queenofsandiego.com'  # verified at the SES account level

def send_report(recipient, report_body, report_title):
    msg = MIMEMultipart('alternative')
    msg['Subject'] = report_title
    msg['From'] = sender_address
    msg['To'] = recipient
    msg.attach(MIMEText(report_body, 'plain'))
    response = ses_client.send_raw_email(
        Source=sender_address,
        Destinations=[recipient],
        RawMessage={'Data': msg.as_string()}
    )
    return response['MessageId']
```
This pattern is repeat-safe rather than strictly idempotent: rerunning the script with the same recipient list simply resends every report, with no deduplication. Each delivery generates a CloudWatch log entry and an SES bounce/complaint metric, enabling post-delivery auditing.
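The recipient iteration mentioned above can be sketched as a small driver loop. The error handling and the injected `send_fn` parameter are assumptions added here so one failed recipient doesn't abort the whole run and the loop stays testable without SES:

```python
# Hypothetical driver loop: records a per-recipient outcome instead of
# letting one delivery failure abort the entire batch.
def deliver_all(recipients, report_body, report_title, send_fn):
    """send_fn stands in for send_report; injected so the loop is testable."""
    results = {}
    for recipient in recipients:
        try:
            results[recipient] = send_fn(recipient, report_body, report_title)
        except Exception as exc:  # in production, catch botocore's ClientError
            results[recipient] = f"FAILED: {exc}"
    return results
```

The returned dict doubles as a delivery receipt that can be logged alongside the CloudWatch entries.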
Credential and Environment Variable Strategy
A critical security decision: all SES credentials, database connection strings, and API keys live in repos.env, which is:
- Never committed to version control (added to .gitignore)
- Loaded at runtime via python-dotenv or shell sourcing
- Copied to the Lambda execution environment only as needed via CloudFormation or SAM templates
- Rotated quarterly and on any suspected compromise
When deploying to Lambda, we explicitly list safe environment variable names (e.g., SES_REGION, SENDER_EMAIL) in the function configuration, rather than wholesale dumping of the .env file. This prevents accidental exposure of unrelated secrets.
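The allowlist idea can be sketched in a few lines. The variable names SES_REGION and SENDER_EMAIL come from above; the loader function itself is an assumption, not part of the deployed code:

```python
import os

# Only explicitly named variables are read; nothing else in repos.env
# can leak into the Lambda configuration.
ALLOWED_ENV_VARS = ("SES_REGION", "SENDER_EMAIL")

def load_config(environ=None):
    """Return only the allowlisted variables, failing fast if any is missing."""
    environ = os.environ if environ is None else environ
    missing = [name for name in ALLOWED_ENV_VARS if name not in environ]
    if missing:
        raise RuntimeError("Missing required config: " + ", ".join(missing))
    return {name: environ[name] for name in ALLOWED_ENV_VARS}
```

Failing fast on a missing variable surfaces misconfiguration at cold start rather than mid-delivery.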
Report Generation Logic: Separation of Content from Delivery
To keep the system maintainable, we separated report content generation from delivery mechanics. Each stakeholder report lives as a pure Python function that returns a formatted string:
```python
from datetime import datetime

def generate_ceo_report(asset_inventory, revenue_data, equity_data):
    """
    Returns formatted text report ready for email delivery.
    Pure function—no side effects, no external calls.
    """
    shortfalls = analyze_revenue_pipeline(asset_inventory)
    kpis = derive_missing_kpis(revenue_data)
    agenda = prioritize_30day_actions(shortfalls, kpis)
    return f"""
EXECUTIVE REPORT: CEO PERSPECTIVE
Generated: {datetime.now().isoformat()}

ASSET INVENTORY
{format_inventory(asset_inventory)}

CRITICAL SHORTFALLS ({len(shortfalls)})
{format_shortfalls(shortfalls)}

MISSING KPIs ({len(kpis)})
{format_kpis(kpis)}

30-DAY AGENDA
{format_agenda(agenda)}
"""
```
This approach allows us to:
- Test report content without touching SES (unit testing)
- Preview reports before sending
- Regenerate and resend reports with zero code changes
- Build a report archive in S3 for audit purposes
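Because generators are pure functions, report content can be asserted on directly, with no mocks or AWS stubs. A minimal sketch, using a simplified stand-in generator rather than the real ones:

```python
# Simplified stand-in for a persona generator: pure data in, string out.
def generate_stub_report(shortfalls):
    header = f"CRITICAL SHORTFALLS ({len(shortfalls)})"
    return "\n".join([header, *shortfalls])

def test_report_counts_shortfalls():
    report = generate_stub_report(["no billing on QuickDumpNow",
                                   "no analytics on JADA"])
    assert "CRITICAL SHORTFALLS (2)" in report
    assert "no billing on QuickDumpNow" in report

test_report_counts_shortfalls()
```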
S3 Artifact Storage and CloudFront Distribution
Each report is archived to S3 in a timestamped prefix structure:
```
s3://queenofsandiego-reports/
  2025-01-15/
    ceo-report.txt
    cto-report.txt
    accounting-report.txt
    cmo-report.txt
    cfo-report.txt
    3028-51st-st-rental-audit.txt
    expert-yacht-delivery-audit.txt
    dangerouscentaur-portfolio-audit.txt
```
The S3 bucket policy grants read-only access to a CloudFront distribution (distribution ID: E2ABCD1234EXAMPLE), which caches reports at edge locations and serves them over HTTPS. Each object is tagged with metadata:
- report-type: ceo, cto, accounting, cmo, cfo, or audit
- entity: jada, qos, qdn, dc, rental, delivery, or portfolio
- generated-date: ISO 8601 timestamp
This tagging enables cost allocation reporting and future report analytics (e.g., "which reports are accessed most often?"), while S3 Intelligent-Tiering keeps storage costs low for rarely accessed archives.
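The archival write with that tag set can be sketched as follows. The bucket name and tag keys come from above; the helper function and its signature are assumptions, and note that boto3's put_object takes object tags as a URL-encoded string:

```python
import urllib.parse
from datetime import date

def archive_report(s3_client, report_type, entity, body, today=None):
    """Write one report under a date prefix with the three tags above."""
    today = (today or date.today()).isoformat()
    key = f"{today}/{report_type}-report.txt"
    tags = urllib.parse.urlencode({
        "report-type": report_type,
        "entity": entity,
        "generated-date": today,
    })
    s3_client.put_object(
        Bucket="queenofsandiego-reports",
        Key=key,
        Body=body.encode("utf-8"),
        Tagging=tags,  # boto3 expects URL-encoded key=value pairs here
    )
    return key
```

Returning the key lets the delivery script embed a stable archive link in the CloudWatch log entry for each send.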
Logging, Monitoring, and Audit Trail
Every report generation and delivery event is logged to CloudWatch Logs under the log group /aws/reports/executive-suite. Each log entry includes:
- Timestamp (microsecond precision)
- Report type and entity
- Recipient email address (SHA-256 hashed for privacy)
- SES