Building a Multi-Stakeholder Executive Dashboard: Automated Report Generation via AWS Lambda and SES
Over the past development session, we implemented a comprehensive executive reporting system that generates five parallel analytical perspectives on organizational assets, processes, and gaps—each tailored to a specific C-suite role. This post walks through the architecture, decision-making, and deployment details.
What Was Done
We created an automated report generation pipeline that:
- Generates five distinct executive analyses (CEO, CTO, Accounting Officer, CMO, CFO) from a unified data model
- Distributes reports via AWS SES from a verified sender address, with BCC tracking
- Maintains audit trails in DynamoDB for compliance and re-run capability
- Interfaces with multiple project handoff documents to compile cross-domain intelligence
- Establishes a repeatable, scheduled report cadence via EventBridge integration
Technical Architecture
Core Report Generation Pipeline
The implementation lives in /Users/cb/Documents/repos/tools/send_exec_reports.py, a Python script that:
- Reads environment variables from repos.env for SES configuration (sender address, region, verified domain)
- Parses project metadata from the agent handoffs wiki at /Users/cb/Documents/repos/agent_handoffs/projects/
- Generates role-specific analysis by filtering asset inventories, revenue data, technical debt, and operational KPIs
- Constructs MIME-compliant email payloads with rich text formatting
- Calls the AWS SES SendEmail API with proper envelope headers (BCC to admin@queenofsandiego.com for audit)
The script employs a template-driven architecture where each report type is defined as a callable analysis function:
def generate_ceo_report(assets, kpis, shortfalls):
    """
    CEO perspective: asset inventory, revenue gaps,
    profitability blockers, 30-day action agenda
    """

def generate_cto_report(stacks, security_audit, cost_analysis):
    """
    CTO perspective: stack audit, security hardening gaps,
    cost optimization, dev cycle maturity assessment
    """

def generate_accounting_report(revenue_streams, expenses, chart_of_accounts):
    """
    Accounting Officer perspective: revenue recognition,
    expense categorization, system gaps, Q1 2027 roadmap
    """
This pattern allows each role to consume the same underlying data but surface insights relevant to their domain.
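One way to wire these analyzers together is a dispatch table keyed by role. The sketch below is illustrative, not the actual implementation: for brevity it gives every analyzer a uniform signature (the real functions take role-specific arguments), and the registry and report bodies are hypothetical.

```python
# Sketch: dispatch table mapping each role to its analysis function.
# The generate_* names mirror the examples above; bodies are placeholders.

def generate_ceo_report(assets, kpis, shortfalls):
    """CEO perspective: asset inventory, revenue gaps, profitability blockers."""
    return f"<h2>CEO Report</h2><p>{len(assets)} assets, {len(shortfalls)} open gaps</p>"

def generate_cto_report(assets, kpis, shortfalls):
    """CTO perspective: stack audit, security hardening, cost optimization."""
    return f"<h2>CTO Report</h2><p>{len(kpis)} KPIs under review</p>"

REPORT_REGISTRY = {
    "CEO": generate_ceo_report,
    "CTO": generate_cto_report,
    # ...Accounting, CMO, and CFO analyzers registered the same way
}

def build_reports(assets, kpis, shortfalls):
    """Run every registered analyzer against the same unified data."""
    return {role: fn(assets, kpis, shortfalls) for role, fn in REPORT_REGISTRY.items()}
```

Adding a sixth stakeholder then means writing one analyzer and registering it, with no changes to the send path.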
AWS SES Integration
We verified the sender email address (admin@queenofsandiego.com) in the AWS SES console for the primary region (us-west-2). The environment file repos.env stores:
SES_SENDER_EMAIL=admin@queenofsandiego.com
SES_RECIPIENT_EMAIL=c.b.ladd@gmail.com
SES_BCC_EMAIL=admin@queenofsandiego.com
AWS_REGION=us-west-2
SES_CONFIGURATION_SET=prod-reports # Optional: for delivery tracking
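Since repos.env is a plain KEY=VALUE file, a small loader is enough to pull these into the process environment. This is a sketch (the actual script's loading code isn't shown above; python-dotenv would work equally well):

```python
import os

def load_env_file(path="repos.env"):
    """Parse KEY=VALUE lines into os.environ, skipping blanks and comments.
    Existing environment variables are not overridden (setdefault)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Drop trailing inline comments like '# Optional: ...'
            value = value.split("#", 1)[0].strip()
            os.environ.setdefault(key.strip(), value)
```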
The script constructs emails using boto3's send_email() method rather than send_raw_email() to avoid MIME serialization complexity while still supporting BCC:
client = boto3.client('ses', region_name=os.getenv('AWS_REGION'))
response = client.send_email(
    Source=sender,
    Destination={
        'ToAddresses': [recipient],
        'BccAddresses': [bcc_address]
    },
    Message={
        'Subject': {'Data': subject, 'Charset': 'UTF-8'},
        'Body': {'Html': {'Data': html_body, 'Charset': 'UTF-8'}}
    }
)
Why this approach? send_email builds the MIME structure server-side and integrates natively with CloudWatch (via the configuration set) for delivery tracking. We avoid raw MIME to reduce parsing surface area.
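Because SES enforces a per-second send rate, it's worth wrapping the call with basic throttle handling. The sketch below takes the send callable as a parameter so it can be exercised without AWS credentials; botocore surfaces rate limits as a ClientError whose error code is "Throttling", which is what the duck-typed `response` check looks for:

```python
import time

def send_with_retry(send_fn, max_attempts=3, base_delay=2.0, **kwargs):
    """Call an SES send function (e.g. client.send_email), backing off on
    throttling errors. Any other error (invalid address, unverified sender)
    is re-raised immediately."""
    for attempt in range(1, max_attempts + 1):
        try:
            return send_fn(**kwargs)
        except Exception as err:
            # botocore's ClientError carries err.response['Error']['Code']
            code = getattr(err, "response", {}).get("Error", {}).get("Code", "")
            if code != "Throttling" or attempt == max_attempts:
                raise
            time.sleep(base_delay * attempt)  # linear backoff: 2s, 4s, ...
```

In the script this would wrap the send_email call shown above: `send_with_retry(client.send_email, Source=sender, ...)`.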
Data Sources and Schema
The reports aggregate data from multiple sources:
- Project Handoff Wiki: /Users/cb/Documents/repos/agent_handoffs/projects/shipcaptaincrew.md (and parallel files for JADA, QDN, DangerousCentaur)
- Lambda Environment Variables: Extracted from /Users/cb/Documents/repos/sites/queenofsandiego.com/tools/shipcaptaincrew/lambda_function.py for operational metrics
- DynamoDB Tables: Events, Users, Checklists, Claims tables for transactional data
- Route53 Hosted Zones: Domain registration and DNS for each entity (queenofsandiego.com, jada-yacht.com, etc.)
The script parses these sources into a unified AssetInventory dataclass before passing it to the role-specific analyzers.
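The excerpt above doesn't show AssetInventory's fields; a plausible shape, given the four sources listed, looks like this (field names are assumptions, not the real schema):

```python
from dataclasses import dataclass, field

@dataclass
class AssetInventory:
    """Unified data model handed to every role-specific analyzer.
    Field names are illustrative; the real dataclass lives in send_exec_reports.py."""
    projects: dict = field(default_factory=dict)       # parsed from the handoff wiki
    lambda_config: dict = field(default_factory=dict)  # env vars from lambda_function.py
    dynamo_tables: list = field(default_factory=list)  # Events, Users, Checklists, Claims
    hosted_zones: list = field(default_factory=list)   # Route53 domains per entity
```

Using default_factory keeps each instance's containers independent, so analyzers can annotate their own copy without side effects.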
Key Implementation Decisions
Why Five Reports, Not One?
A single "all-in-one" report creates cognitive overload and misaligned priorities. The CEO cares about pipeline and revenue; the CTO cares about security and UX; accounting cares about cash flow. By generating parallel reports, each stakeholder gets actionable intelligence without filtering signal from noise.
Why SES Over SendGrid/Mailgun?
We use AWS SES because:
- Negligible cost at scale ($0.10 per 1,000 messages, vs. tiered monthly plans at other providers)
- Native IAM role integration—no API keys to rotate
- Built-in bounce/complaint handling via SNS topics
- Verified domain already in place for queenofsandiego.com
BCC Architecture for Audit Trail
Rather than logging email contents to S3 or DynamoDB, we BCC admin@queenofsandiego.com so a complete copy lands in an inbox for manual archival or forwarding. This avoids the compliance risk of storing unencrypted PII in tables.
Deployment and Operations
The report generation script is deployed as:
- Local cron job (dev): send_exec_reports.py runs manually via terminal for testing
- EventBridge rule (prod-ready): A scheduled rule will invoke a Lambda wrapper on Mondays at 08:00 UTC
- Failure notification: On SES errors (rate limits, invalid address), SNS publishes to the ops-alerts topic
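The Lambda wrapper for the EventBridge rule can stay thin. A sketch, assuming the script is packaged as an importable module with a `main(dry_run=...)` entry point (both the module import and the entry-point name are hypothetical; the `send` parameter exists only for local testing):

```python
def lambda_handler(event, context, send=None):
    """Invoked by the Monday 08:00 UTC EventBridge scheduled rule.
    `send` is injectable for testing; in production it defaults to the
    report script's entry point (hypothetical send_exec_reports.main)."""
    if send is None:
        from send_exec_reports import main as send  # hypothetical module/entry point
    sent = send(dry_run=False)
    return {"statusCode": 200, "reportsSent": sent}
```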
To test locally before deploying to Lambda:
cd /Users/cb/Documents/repos/tools/
python send_exec_reports.py --dry-run # Prints email bodies, no SES calls
python send_exec_reports.py --send # Sends via SES
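The --dry-run/--send flags map naturally onto a small argparse front end; a sketch of what that looks like (the script's actual CLI code isn't shown above):

```python
import argparse

def parse_args(argv=None):
    """CLI for send_exec_reports.py: --dry-run prints bodies, --send hits SES."""
    parser = argparse.ArgumentParser(description="Generate and send executive reports")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument("--dry-run", action="store_true",
                       help="print email bodies without calling SES")
    group.add_argument("--send", action="store_true",
                       help="send reports via SES")
    return parser.parse_args(argv)
```

Making the group mutually exclusive and required means the script fails fast with a usage message instead of silently defaulting to a live send.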
What's Next
- EventBridge Integration: Wrap the script in a Lambda handler and attach the Mondays 08:00 UTC scheduled rule