
Multi-Domain Event Infrastructure: Orchestrating Boatsetter Charter Bookings Across Three Microservices

When a charter is booked through Boatsetter, it needs to trigger a coordinated cascade across three separate systems: an internal calendar, a crew notification system, and guest-facing documentation. This post covers the infrastructure patterns and architectural decisions that made this workflow possible, and why we chose to split responsibilities across multiple domains rather than consolidate.

The Architecture Pattern: Service-Oriented with Cross-Domain Coordination

The booking flow involves four distinct cloud resources:

  • ShipCaptainCrew (SCC) Lambda — handles crew notifications and event persistence via DynamoDB
  • JADA Internal Calendar Lambda — maintains operational calendar entries (separate from guest-facing)
  • Queen of San Diego CloudFront + S3 — serves guest-facing charter pages at /g/{booking-id}
  • Sail JADA S3 bucket — legacy asset storage, gradually deprecating in favor of Queen of San Diego

Rather than building a single monolithic Lambda that does everything, we leveraged existing service boundaries. SCC already had crew notification logic; JADA already had calendar infrastructure. We built thin orchestration logic—essentially HTTP clients that POST to each service's endpoint—rather than refactoring shared code into a common library.

Why this pattern? Each service owns its data model (DynamoDB for SCC, Calendar API for JADA, S3 objects for guest pages). Coupling them tightly would require schema negotiations and deployment coordination. Loose coupling via HTTP endpoints means we can iterate on each service independently.
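The orchestration layer really is that thin: build a JSON POST per service and send it. A minimal sketch of the pattern (the endpoint URL, payload fields, and header value below are illustrative placeholders, not the real ones):

```python
import json
import urllib.request

# Hypothetical internal endpoint; the real URL comes from environment variables.
SCC_URL = "https://scc.example.internal/events"

def post_json(url, payload, headers=None):
    """Build a JSON POST request for one downstream service."""
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    for name, value in (headers or {}).items():
        req.add_header(name, value)
    return req  # the caller sends it with urllib.request.urlopen(req)

booking = {"booking_id": "XHQGMDH", "charter_date": "2024-06-01"}
scc_req = post_json(SCC_URL, booking, {"X-Service-Key": "service-key-placeholder"})
```

Because each call is just an HTTP request, swapping a service's endpoint or auth header is a one-line change with no shared-library redeploy.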

Cross-Domain Routing and CloudFront Configuration

The trickiest part wasn't the logic—it was making HTTP calls between services when CloudFront sits in front of them.

When we call the SCC Lambda through its public CloudFront distribution (api.sailjada.com), CloudFront strips custom headers. For internal service-to-service calls, we can't use the public CloudFront URL; we need the raw API Gateway endpoint.

For SCC, we found the raw API Gateway invoke URL by querying API Gateway directly:


# Find the raw API Gateway endpoint (bypasses CloudFront header stripping)
aws apigateway get-rest-apis --region us-west-2
aws apigateway get-stages --rest-api-id {REST_API_ID} --region us-west-2
# Direct invocation URL: https://{api-id}.execute-api.us-west-2.amazonaws.com/prod/...

This endpoint is internal-only (not in Route53, not advertised), but it allows our orchestration Lambda to send authentication headers that CloudFront would normally strip.
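Assembling the direct invoke URL from the API ID, region, and stage is mechanical. A small helper (the ID below is a placeholder):

```python
def invoke_url(api_id, region, stage, path):
    """Assemble the raw execute-api URL that bypasses CloudFront."""
    return f"https://{api_id}.execute-api.{region}.amazonaws.com/{stage}/{path.lstrip('/')}"

url = invoke_url("abc123", "us-west-2", "prod", "/events")
```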

Authentication Across Service Boundaries

SCC uses a hashed service key stored in Lambda environment variables. The JADA Calendar Lambda expects a different auth token (the X-Dashboard-Token header). Queen of San Diego guest pages are public—no auth needed.

Rather than creating a unified identity system, we decided:

  • SCC calls: Use the service key, hash it client-side using the same function as the SCC Lambda, include in X-Service-Key header
  • JADA calls: Use a separate dashboard token, include in X-Dashboard-Token header
  • S3 uploads: Use IAM role attached to the orchestration Lambda—S3 URLs are signed automatically

This is intentionally not unified. If we later need to revoke SCC access, we only rotate the SCC service key, not a master token that would affect calendar operations.
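In code, each service gets its own small header builder, which keeps the per-service credentials from bleeding into each other. A sketch, assuming SCC's client-side hash is SHA-256 (use whatever function SCC actually applies server-side):

```python
import hashlib

def scc_headers(service_key):
    # Assumption: SCC compares a SHA-256 hex digest of the service key.
    return {"X-Service-Key": hashlib.sha256(service_key.encode("utf-8")).hexdigest()}

def jada_headers(dashboard_token):
    # JADA expects the token verbatim, not hashed.
    return {"X-Dashboard-Token": dashboard_token}
```

S3 uploads need no builder at all: the orchestration Lambda's IAM role signs those requests automatically.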

The Guest Page Infrastructure: Path Rewriting and Flat-File Hosting

Queen of San Diego uses a CloudFront Function (not a Lambda@Edge function) to rewrite URLs. When a request comes to /g/XHQGMDH, the function rewrites it to /g/XHQGMDH.html before fetching from S3.

This is important: the actual S3 object is stored as /g/XHQGMDH.html (a flat file), but the public URL is clean: /g/XHQGMDH. The CloudFront Function handles the translation.
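The actual CloudFront Function is written in JavaScript; the rewrite logic it implements looks like this (shown in Python for illustration):

```python
def rewrite_guest_path(uri):
    """Viewer-request rewrite: map clean /g/{id} URLs to the flat .html S3 object."""
    last_segment = uri.rsplit("/", 1)[-1]
    if uri.startswith("/g/") and "." not in last_segment:
        return uri + ".html"
    return uri  # already has an extension, or not a guest page
```

Requests that already carry an extension pass through untouched, so direct links to the `.html` object keep working.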

We upload the guest page like this:


# Build the HTML guest page locally
# Upload to S3 with the correct key
aws s3 cp jada-guest-{booking-id}.html s3://queenofsandiego.com/g/{booking-id}.html --content-type text/html

# Invalidate CloudFront cache to force immediate serve
aws cloudfront create-invalidation --distribution-id {DISTRIBUTION_ID} --paths '/g/{booking-id}.html'

The distribution ID for Queen of San Diego is stored in AWS Secrets Manager (not in this post, obviously). We retrieve it at runtime rather than hardcoding it.

Crew Notification: Why We Use SCC's Auto-Email, Not SES Directly

When we POST to the SCC Lambda to create an event, SCC automatically sends crew invitations with magic link tokens. We could have built this ourselves using SES, but SCC's crew already understands the notification flow—they click the magic link, authenticate via token, and see their crew-specific dashboard with checklists and shift details.

If we sent duplicate notifications via SES, or notifications from a different sender, it would create confusion. By leveraging SCC's existing crew communication, we ensure the experience is cohesive.

Data Scrubbing: Revenue Fields in Crew Events

Internal notes on SCC events contain revenue calculations (the owner's net, crew costs, port fees). We don't want crew to see this data; it's sensitive financial information between ownership and the operations team.

Rather than adding application logic to SCC to hide certain fields from crew, we directly update the DynamoDB item after event creation:


# Read the event from DynamoDB
# Remove Revenue and Captain Fee from the notes field
# Write the item back
# Crew-facing dashboard queries DynamoDB and sees sanitized data

This is a pragmatic approach when modifying application code isn't feasible in the moment. In a future refactor, we'd add a crew_visible_notes field separate from internal_notes.
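The scrubbing step itself is a pure transformation on the notes field. A sketch (the DynamoDB read and write around it are omitted; the field labels follow the examples above):

```python
SENSITIVE_PREFIXES = ("Revenue", "Captain Fee")

def scrub_notes(notes):
    """Drop note lines carrying revenue data before crew can see them."""
    kept = [line for line in notes.splitlines()
            if not line.strip().startswith(SENSITIVE_PREFIXES)]
    return "\n".join(kept)
```

Wrapping the sensitive labels in a tuple keeps the future `crew_visible_notes` refactor cheap: the filter becomes the serializer for that field.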

Calendar Entry Timing: Avoiding Double-Booking

JADA Internal Calendar entries need to be created before SCC events, because SCC events may trigger reminders that reference the calendar. If a crew member asks "what time?" they should be able to find the entry in the internal calendar immediately.

We POST to JADA first with the charter date, duration, and booking ID. JADA returns a calendar entry ID. We then POST to SCC with that calendar ID referenced in the event notes.
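The ordering constraint is easy to encode: the SCC payload cannot be built until the JADA response arrives. A sketch with the two HTTP calls injected as callables, so the sequencing is testable without the network (payload field names are illustrative):

```python
def orchestrate(post_jada, post_scc, booking):
    """Calendar first, then SCC: the SCC event notes reference the calendar entry ID."""
    calendar = post_jada({
        "date": booking["date"],
        "duration_hours": booking["duration_hours"],
        "booking_id": booking["booking_id"],
    })
    event = post_scc({
        "booking_id": booking["booking_id"],
        "notes": f"Calendar entry: {calendar['id']}",
    })
    return {"calendar_id": calendar["id"], "event_id": event["id"]}
```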

What's Next: Consolidation and Observability

This orchestration is working, but it's spread across two Lambda functions (calendar and SCC), three different auth mechanisms, and manual S3 uploads. The next phase:

  • Unified orchestration Lambda: Move all four operations (calendar, SCC event, guest page, email) into a single function with structured error handling and retry logic
  • Structured logging: Each call should log operation name, target service, response status, and latency
  • Boatsetter webhook handler: Automate the trigger—when