
Multi-Service Event Pipeline: Coordinating Calendar, Guest Pages, and Crew Notifications Across Lambda, S3, and DynamoDB

Last week I orchestrated a complex booking workflow for a 3-hour charter that required coordinating six distinct systems: Boatsetter booking intake, DynamoDB event storage, Lambda-based APIs, S3-hosted guest pages, CloudFront distribution, and automated crew notifications. This post details the architecture decisions, infrastructure wiring, and the debugging work required when CloudFront headers get stripped unexpectedly.

The Problem: A Fragmented Booking Workflow

When a charter books through Boatsetter, the information exists in exactly one place: Boatsetter's dashboard. Our internal systems—crew schedules, guest communications, checklists—live elsewhere across three separate domains and databases. The challenge: create a unified event that:

  • Appears in JADA Internal Calendar (date/time, crew assignments, notes)
  • Triggers crew notifications with magic links (via ShipCaptainCrew Lambda)
  • Generates a guest-facing page at /g/BOOKINGID (photo uploads, payment confirmation)
  • Provides crew with a checklist and day-of details

All of this needed to happen through automated scripts, with minimal manual intervention.

Architecture Overview: Six Systems, One Event

The solution required touching two S3 buckets, two CloudFront distributions, one DynamoDB table, and two Lambda functions:

  • sailjada.com S3 bucket — Originally hosted the guest page (later migrated)
  • queenofsandiego.com S3 bucket — Final destination for guest-facing pages
  • ShipCaptainCrew DynamoDB table — Event storage with crew auto-notification triggers
  • JADA Calendar Lambda — Internal calendar entry creation with token auth
  • SCC Lambda — Event creation, crew notification, and DynamoDB writes
  • CloudFront distributions — Two separate distributions with different function rewrites
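
At a high level, the automation reduces to one glue script per booking. A sketch of the flow in Python (every function name here is a hypothetical stand-in for a step detailed below):

def create_calendar_entry(booking): ...   # Step 1: JADA calendar Lambda
def create_scc_event(booking): ...        # Step 2: SCC event + DynamoDB write
def deploy_guest_page(booking): ...       # Step 3: guest page upload to S3

def process_boatsetter_booking(booking):
    """Hypothetical glue script: one Boatsetter booking in, all systems updated."""
    create_calendar_entry(booking)
    create_scc_event(booking)    # SCC's own trigger handles Step 4 (crew magic links)
    deploy_guest_page(booking)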

Technical Implementation

Step 1: Calendar Entry Creation

The JADA Internal Calendar runs on a separate Lambda endpoint protected by X-Dashboard-Token header authentication. The command to create a calendar entry:

curl -X POST https://dashboard-api.example.com/calendar \
  -H "X-Dashboard-Token: [TOKEN]" \
  -H "Content-Type: application/json" \
  -d '{
    "date": "2024-05-30",
    "event_name": "Boatsetter Charter - 3 hours",
    "crew": ["crew_id_1", "crew_id_2"],
    "captain": "captain_id_1"
  }'

Key decision: Dashboard Lambda uses a simple header-based token rather than OAuth or JWT. This was fine for internal infrastructure but required careful token management in scripts.
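
One way scripts can manage that token without hardcoding it is to fetch it at call time from AWS Secrets Manager. A minimal sketch, assuming a hypothetical secret named jada/dashboard-token:

import boto3
import requests

def get_dashboard_token(secret_name="jada/dashboard-token"):
    # Secret name is an assumption; the point is keeping the token out of the script
    client = boto3.client("secretsmanager")
    return client.get_secret_value(SecretId=secret_name)["SecretString"]

def create_calendar_entry(event: dict) -> dict:
    # Same POST as the curl command above, with the token injected at runtime
    resp = requests.post(
        "https://dashboard-api.example.com/calendar",
        headers={"X-Dashboard-Token": get_dashboard_token()},
        json=event,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()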

Step 2: ShipCaptainCrew Event Creation and Crew Notification

ShipCaptainCrew (SCC) handles crew management, scheduling, and event notifications. When an event is created via the SCC API, a Lambda trigger automatically sends crew notifications with magic links. The SCC Lambda's source lives at /tmp/scc-lambda-src/lambda_function.py, and the function is exposed via API Gateway.

The initial attempt hit a critical issue: CloudFront was stripping the X-Service-Key header. The sailjada.com CloudFront distribution has a function that rewrites paths for guest pages (converting /g/BOOKINGID to /g/BOOKINGID.html), but it was also configured to strip custom headers as a security measure.

Solution: Hit the API Gateway endpoint directly, bypassing CloudFront. The actual endpoint format:

curl -X POST https://api-gateway-endpoint.execute-api.region.amazonaws.com/prod/events \
  -H "X-Service-Key: [SERVICE_KEY]" \
  -H "Content-Type: application/json" \
  -d '{
    "event_date": "2024-05-30",
    "event_duration_hours": 3,
    "guest_name": "Guest Name",
    "crew_ids": ["crew_1", "crew_2"],
    "captain_id": "captain_1",
    "notes": "3 hour charter via Boatsetter"
  }'

The SCC Lambda validates the service key by hashing it with SHA-256 and comparing against the SERVICE_KEY_HASH environment variable. Code location: lambda_function.py, function hash_password().
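
A sketch of what that validation plausibly looks like (hash_password and SERVICE_KEY_HASH come from the actual Lambda; the surrounding glue is assumed):

import hashlib
import hmac
import os

def hash_password(value: str) -> str:
    # SHA-256 hex digest of the supplied service key
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

def is_authorized(headers: dict) -> bool:
    # Compare the hashed X-Service-Key header to the SERVICE_KEY_HASH env var;
    # compare_digest keeps the string comparison constant-time
    supplied = headers.get("x-service-key", "")
    expected = os.environ.get("SERVICE_KEY_HASH", "")
    return hmac.compare_digest(hash_password(supplied), expected)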

Step 3: Guest Page Generation and S3 Deployment

Guest pages are static HTML files uploaded to S3 with a CloudFront distribution in front. The page filename follows a specific convention: /g/BOOKINGID.html for queenofsandiego.com.

The queenofsandiego.com CloudFront distribution has a CloudFront Function attached (running on viewer-request, since it rewrites the incoming URI) that handles path rewriting:

// Simplified CloudFront function: rewrite /g/BOOKINGID to /g/BOOKINGID.html
function handler(event) {
  var request = event.request;
  // Only rewrite extensionless guest-page paths; leave .html requests alone
  if (request.uri.startsWith('/g/') && !request.uri.endsWith('.html')) {
    request.uri = request.uri + '.html';
  }
  return request;
}

This means uploading to S3 as a flat .html file (e.g., XHQGMDH.html in the /g/ folder) allows requests to /g/XHQGMDH to resolve correctly without the user seeing the .html extension.
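
Deployment is then a single object upload. A boto3 sketch (the bucket name matching the domain is an assumption, following the usual S3 website-hosting convention):

import boto3

s3 = boto3.client("s3")

def deploy_guest_page(booking_id: str, html: str, bucket: str = "queenofsandiego.com"):
    # Flat .html object under the g/ prefix, so the CloudFront rewrite resolves /g/<ID>
    s3.put_object(
        Bucket=bucket,
        Key=f"g/{booking_id}.html",
        Body=html.encode("utf-8"),
        ContentType="text/html",
    )

deploy_guest_page("XHQGMDH", "<html>...</html>")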

The guest page includes:

  • Photo upload endpoint (presigned S3 URLs generated by SCC Lambda at /g/{booking_id}/photo; sketched just after this list)
  • Time-aware upload validation (prevents uploads outside the charter window)
  • Payment confirmation display
  • Crew information and safety briefing links
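
The first two items combine naturally in the Lambda: it hands out a presigned URL only while the charter window is open. A sketch (bucket name, key layout, and the one-hour grace period are assumptions):

import time
import boto3

s3 = boto3.client("s3")

def photo_upload_url(booking_id: str, charter_start: float, charter_end: float) -> str:
    # Time-aware validation: refuse URL generation outside the charter window
    now = time.time()
    if not (charter_start - 3600 <= now <= charter_end + 3600):
        raise PermissionError("uploads are only open around the charter window")
    return s3.generate_presigned_url(
        "put_object",
        Params={
            "Bucket": "queenofsandiego.com",  # assumed bucket
            "Key": f"g/{booking_id}/photos/{int(now)}.jpg",
            "ContentType": "image/jpeg",
        },
        ExpiresIn=900,  # each upload link expires after 15 minutes
    )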

Why separate S3 buckets? sailjada.com is the main site; queenofsandiego.com is a separate domain dedicated to guest-facing pages. Separating them allows different CloudFront behaviors, cache strategies, and CDN invalidation patterns without affecting the primary site.

Step 4: Crew Notification and Confirmation

When the SCC event is created, a Lambda trigger fires automatically. This trigger reads the crew IDs from the event record, generates magic links (JWT tokens with a 48-hour expiration), and sends emails via SES.

The magic link format: https://queenofsandiego.com/crew/confirm?token=[JWT]
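
A sketch of how the trigger could mint those links with PyJWT and send them through SES (the signing secret, sender address, and claim names are assumptions; the 48-hour expiration and URL format are from the actual system):

import os
import time
import boto3
import jwt  # PyJWT

ses = boto3.client("ses")

def send_magic_link(crew_email: str, crew_id: str, event_id: str) -> None:
    token = jwt.encode(
        {"crew_id": crew_id, "event_id": event_id,
         "exp": int(time.time()) + 48 * 3600},  # 48-hour expiration
        os.environ["MAGIC_LINK_SECRET"],  # assumed env var name
        algorithm="HS256",
    )
    ses.send_email(
        Source="crew@sailjada.com",  # assumed sender address
        Destination={"ToAddresses": [crew_email]},
        Message={
            "Subject": {"Data": "Confirm your availability"},
            "Body": {"Text": {"Data":
                f"Confirm here: https://queenofsandiego.com/crew/confirm?token={token}"}},
        },
    )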

No separate crew page needed to be created: the SCC Lambda already generates a crew-facing URL that includes a checklist, safety briefing, weather and tidal information, and a "Confirm Availability" button.

Key Infrastructure Decisions

Why Direct API Gateway vs. CloudFront?

CloudFront provides caching and DDoS protection, but by default it forwards only a limited set of headers to the origin; custom headers like X-Service-Key never arrive unless a cache or origin request policy explicitly whitelists them. For internal APIs that authenticate via custom headers, hitting the API Gateway endpoint directly (and rate-limiting at the API Gateway level) is faster and avoids header-forwarding surprises. For external APIs, CloudFront remains the correct choice.
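
The forwarding behavior is also configurable: if keeping CloudFront in front is preferred, an origin request policy can whitelist the header. A boto3 sketch (policy name and the non-header settings are illustrative):

import boto3

cloudfront = boto3.client("cloudfront")

# Whitelist X-Service-Key so CloudFront passes it through to the API origin
cloudfront.create_origin_request_policy(
    OriginRequestPolicyConfig={
        "Name": "forward-service-key",
        "Comment": "Forward X-Service-Key to the API origin",
        "HeadersConfig": {
            "HeaderBehavior": "whitelist",
            "Headers": {"Quantity": 1, "Items": ["X-Service-Key"]},
        },
        "CookiesConfig": {"CookieBehavior": "none"},
        "QueryStringsConfig": {"QueryStringBehavior": "all"},
    }
)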

Why Two S3 Buckets for Different Domains?