Multi-Domain Event Orchestration: Automating Charter Booking Infrastructure Across S3, CloudFront, and Lambda
What We Built
This session established a full event-to-infrastructure pipeline for a Boatsetter charter booking. A single charter request triggered four parallel infrastructure operations:
- Created a JADA Internal Calendar entry (DynamoDB-backed)
- Provisioned a ShipCaptainCrew (SCC) event with automatic crew notifications via magic links
- Generated a guest-facing HTML page served from queenofsandiego.com/g/ with time-aware photo upload capability
- Created a crew-facing checklist dashboard accessible through SCC's event deep-linking
The challenge: these systems span three separate AWS accounts, two S3 buckets (sailjada.com and queenofsandiego.com), two CloudFront distributions, and multiple Lambda functions with different authentication models. We needed to bridge them without exposing service credentials to the browser.
Architecture: Cross-Domain Event Flow
Entry Point: The Boatsetter booking data (amount $840.75, 3-hour charter, May 30) landed in a memory file during development. Rather than manual entry, we automated three simultaneous writes:
JADA Calendar → DynamoDB (sailjada.com account)
SCC Event → DynamoDB + SNS (ShipCaptainCrew account)
Guest Page → S3 + CloudFront invalidation (queenofsandiego.com account)
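The three writes above are independent, so they can run concurrently. A minimal sketch of that fan-out using a thread pool; the three task functions are hypothetical stand-ins (in production each would call its real service: the dashboard Lambda, the SCC API Gateway endpoint, and the S3 upload):

```python
import concurrent.futures

# Hypothetical stand-ins for the three real writes; each would call its
# service's API in production rather than returning a stub record.
def create_calendar_entry(booking):
    return {"system": "jada_calendar", "date": booking["date"]}

def create_scc_event(booking):
    return {"system": "scc", "event_name": f"Boatsetter Charter - {booking['date']}"}

def publish_guest_page(booking):
    return {"system": "guest_page", "url": f"queenofsandiego.com/g/{booking['booking_id']}"}

def orchestrate(booking):
    """Dispatch all three writes concurrently and collect the results in order."""
    tasks = [create_calendar_entry, create_scc_event, publish_guest_page]
    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(task, booking) for task in tasks]
        # .result() re-raises any exception, so a failed write is never silent.
        return [f.result() for f in futures]

results = orchestrate({"booking_id": "XHQGMDH", "date": "2024-05-30"})
```

Collecting every `result()` is the important part of the pattern: a partial failure (say, the SCC write) surfaces immediately instead of leaving the calendar and guest page silently out of sync.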
Why separate accounts? Ship Captain Crew is a multi-tenant SaaS platform. Isolating it from the personal charter business prevents credential sprawl and simplifies audit trails. The guest-facing site lives under the personal domain, reducing mental overhead for booking recipients.
Technical Implementation
Calendar Entry Creation
The JADA Internal Calendar is backed by a DynamoDB table in the sailjada.com account. Creating an entry requires the X-Dashboard-Token header, which a CloudFront Function injects before requests reach the dashboard Lambda.
POST /calendar/events (via dashboard Lambda endpoint)
Headers: X-Dashboard-Token: [service key hash]
Body: {
"date": "2024-05-30",
"type": "boatsetter_charter",
"guest_count": capacity,
"notes": "Charter details"
}
The dashboard Lambda (source in /Users/cb/.claude/projects/memory/) validates the token by hashing the incoming service key and comparing it against the Lambda environment variable SERVICE_KEY_HASH. This allows calendar writes without embedding plaintext credentials.
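The hash-and-compare check described above can be sketched in a few lines. This is an illustrative reconstruction, not the Lambda's actual source; SHA-256 is an assumed hash algorithm, and `compare_digest` is used to avoid timing side channels:

```python
import hashlib
import hmac

def validate_service_key(incoming_key: str, stored_hash: str) -> bool:
    """Hash the incoming service key and compare it against the hash
    stored in the Lambda environment (SERVICE_KEY_HASH)."""
    digest = hashlib.sha256(incoming_key.encode()).hexdigest()
    # Constant-time comparison prevents timing attacks on the hash check.
    return hmac.compare_digest(digest, stored_hash)

# Simulating the value that would live in the Lambda environment.
SERVICE_KEY_HASH = hashlib.sha256(b"correct-key").hexdigest()
```

The point of storing only the hash is that a leaked environment dump reveals nothing usable: the plaintext key exists only on the caller's side.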
ShipCaptainCrew Event with Crew Notifications
SCC event creation presented the most complex authentication challenge. The SCC Lambda runs in a different AWS account and uses service key authentication. CloudFront strips custom headers by default, breaking auth when requests route through the distribution.
Solution: Direct API Gateway URL
Rather than routing through CloudFront (which strips the Authorization header), we call the SCC Lambda's API Gateway endpoint directly, bypassing CloudFront's header filtering:
POST https://[scc-apigw-id].execute-api.us-west-2.amazonaws.com/events
Headers: Authorization: [service key]
Body: {
"event_name": "Boatsetter Charter - May 30",
"start_time": "2024-05-30T10:00:00Z",
"duration_hours": 3,
"crew_needed": ["captain", "crew_1", "crew_2"],
"notes": "Guest count: X, Revenue: $840.75"
}
The SCC Lambda's hash_password function hashes the incoming service key, validates it against the Lambda environment's SERVICE_KEY_HASH, and if valid, creates the event in DynamoDB. Event creation triggers an SNS topic, which fans out to SQS queues for each crew member. Each crew member receives an email with a magic link—a signed, time-bound JWT that grants access to the event without requiring login.
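The magic-link mechanism can be sketched with a stdlib-only HS256 JWT. This is a hedged reconstruction: the claim names (`event_id`, `sub`, `exp`) and the 24-hour TTL are assumptions, not SCC's actual token layout:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(text: str) -> bytes:
    # Restore the padding stripped during encoding.
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def mint_magic_link(event_id: str, crew_member: str, secret: bytes, ttl: int = 86400) -> str:
    """Sign a time-bound HS256 JWT granting access to one event."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({
        "event_id": event_id,
        "sub": crew_member,
        "exp": int(time.time()) + ttl,
    }).encode())
    sig = _b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_magic_link(token: str, secret: bytes):
    """Return the claims if the signature is valid and unexpired, else None."""
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    expected = _b64url(hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None
    claims = json.loads(_b64url_decode(payload))
    if claims.get("exp", 0) < time.time():
        return None
    return claims
```

Because the token is self-contained and signed, the crew-facing page can validate it without a login flow or a session store, which is exactly what makes the email link work on a phone at the dock.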
Critical discovery: The SCC Lambda environment was missing the SERVICE_KEY_HASH variable, causing auth failures. We retrieved all Lambda environment variables from the AWS Secrets Manager reference in the deployment configuration, added the hash, and redeployed. This is a pattern worth documenting: SCC events will only auto-notify crew if the Lambda is properly configured with the service key hash.
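A cheap guard against this class of failure is a pre-deploy check that the Lambda's environment carries every required variable. A minimal sketch (the required-variable list is assumed from this incident; a real check would pull the env dict via `get_function_configuration`):

```python
REQUIRED_VARS = ("SERVICE_KEY_HASH",)

def missing_env_vars(env: dict, required=REQUIRED_VARS) -> list:
    """Return the required variables that are absent or empty in a
    Lambda environment dict, so deployment can fail fast."""
    return [name for name in required if not env.get(name)]
```

Running this in CI turns a silent auth failure (crew never notified) into a loud deploy-time error.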
Guest-Facing Page: Multi-Domain Asset Strategy
We initially created the guest page at /tmp/jada-guest-xhqgmdh.html and uploaded it to the sailjada.com S3 bucket. However, this created a brand confusion problem: guests expect their charter details to come from the personal domain, not a boat-business domain.
Decision: Move to queenofsandiego.com
The queenofsandiego.com CloudFront distribution includes a custom CF Function that rewrites paths. Requests matching the pattern /g/* are rewritten to fetch the corresponding .html object from S3. This allows clean URLs without exposing the underlying bucket structure:
Request: queenofsandiego.com/g/XHQGMDH
CF Function rewrites to: queenofsandiego.com/XHQGMDH.html
S3 serves: /XHQGMDH.html from queenofsandiego.com bucket
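CloudFront Functions are written in JavaScript; the Python sketch below only models the rewrite rule shown above so the mapping logic can be tested in isolation. The booking-code character set (`A-Z0-9`) is an assumption based on the example code:

```python
import re

# Assumed format for guest booking codes: uppercase letters and digits.
GUEST_PATH = re.compile(r"^/g/([A-Z0-9]+)/?$")

def rewrite_uri(uri: str) -> str:
    """Map /g/<CODE> to /<CODE>.html; pass every other path through unchanged."""
    match = GUEST_PATH.match(uri)
    if match:
        return f"/{match.group(1)}.html"
    return uri
```

The pass-through branch matters: the same distribution serves the rest of the site, so the function must leave non-guest paths untouched.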
The guest page includes:
- Time-aware photo upload: The page calls /g/presign on the SCC Lambda, which validates the booking ID and returns a pre-signed S3 URL. This allows guests to upload charter photos directly without touching our servers.
- Minimal styling: No external dependencies, just HTML5 + inline CSS for load speed and offline resilience.
- Crew confirmation flow: A simple form that posts confirmation back to SCC, which updates the event and notifies the captain.
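The "time-aware" part of the upload flow can be sketched as a window check the presign endpoint would run before issuing a URL. The 48-hour post-charter grace period is an assumed policy, not taken from the real Lambda:

```python
from datetime import datetime, timedelta, timezone

def upload_window_open(charter_start, duration_hours, now=None, grace_hours=48):
    """True when photo uploads should be allowed: from charter start
    until grace_hours after the charter ends (assumed policy)."""
    now = now or datetime.now(timezone.utc)
    window_end = charter_start + timedelta(hours=duration_hours + grace_hours)
    return charter_start <= now <= window_end
```

Gating the pre-signed URL this way means a leaked guest link stops working a couple of days after the charter, without any revocation machinery.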
Crew-Facing Checklist Dashboard
Rather than build a separate crew application, we leveraged SCC's existing event routing. The SCC Lambda has a frontend entrypoint that accepts a deep-link parameter:
GET /events/:event_id?magic_link=[jwt]
The SCC frontend HTML validates the JWT against the embedded SCC secret, and if valid, renders the full event checklist (fuel checks, guest count confirmation, safety briefing boxes, post-charter photo upload). Crew receive this link via email after event creation.
Infrastructure Decisions & Trade-offs
Why CloudFront Functions over Lambda@Edge for path rewriting? CloudFront Functions execute at edge locations with sub-millisecond latency, while Lambda@Edge can incur cold starts. For frequent requests (every guest page load), the performance difference is measurable. Functions also cost less ($0.10 per million invocations vs. $0.60 per million requests for Lambda@Edge, which additionally bills for duration).
Why direct API Gateway URLs instead of CloudFront-proxied APIs? By default, CloudFront forwards only the headers you explicitly configure; custom headers like Authorization are dropped unless allow-listed. Rather than manage an allow-list, we call the API Gateway URL directly from the backend. This is fine because crew notifications happen server-side; browsers never touch SCC APIs.
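The difference is visible if you build the request yourself: calling the API Gateway URL directly means no CDN sits in the path to drop the Authorization header. A sketch that constructs (without sending) such a request; the hostname and key are placeholders:

```python
import urllib.request

# Hypothetical API Gateway endpoint; the real ID is account-specific.
req = urllib.request.Request(
    "https://abc123.execute-api.us-west-2.amazonaws.com/events",
    data=b'{"event_name": "Boatsetter Charter - May 30"}',
    headers={
        "Authorization": "service-key-placeholder",  # survives: no CDN in the path
        "Content-Type": "application/json",
    },
    method="POST",
)
```

Had this request been addressed to the CloudFront distribution instead, the Authorization header would be stripped before reaching the origin unless explicitly allow-listed.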
Why SNS → SQS → Email for crew notifications? This creates a reliable, per-crew-member queue: SNS fans the event out once, each SQS queue buffers the notification independently, and a failed email send can be retried from the queue without re-publishing the event.