
Building HELM: An Interactive Operations Graph for JADA's Multi-Platform Booking Ecosystem

During a recent development session, I built HELM — a real-time, self-contained visualization dashboard that maps the entire JADA booking and operations ecosystem. HELM is a single-page HTML application deployed to helm.queenofsandiego.com, designed to give engineers and operators a bird's-eye view of how money, bookings, and operational workflows flow through multiple booking platforms, email campaigns, referral partners, and crew dispatch systems.

What Problem Does HELM Solve?

JADA's operation spans multiple revenue streams (Viator, Boatsetter, GetMyBoat, direct bookings), multiple campaign channels (email, web search, referral codes), and complex operational workflows (crew dispatch, scheduling, health monitoring). Understanding the full system required jumping between dashboards, Google Sheets, Lambda functions, and Google Apps Script (GAS) projects. HELM consolidates this into a single, explorable force-directed graph where each system component is a node, and relationships are edges showing data flow and dependencies.

Architecture: Why a Single HTML File?

I chose to build HELM as a single, self-contained HTML file rather than a multi-file SPA for several reasons:

  • Zero build step: No webpack, no npm install, no bundling. The file is immediately deployable and debuggable.
  • Rapid iteration: Editing and re-deploying is a single command. During this session, I pushed 20+ iterations to production with cache invalidation.
  • Few moving parts: All styling, data, and application logic are embedded in the one file; the only external fetch is the vis-network library from a CDN.
  • Vis-network library: I chose vis-network (a mature, physics-based graph visualization library) via CDN because it handles force-directed layout, zoom/pan, and node clustering without heavy dependencies.

The file at /Users/cb/Documents/repos/sites/helm/index.html contains embedded CSS, JavaScript, and inline data structures defining all nodes, edges, and interactive behaviors.
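As a rough sketch of how the embedded data feeds vis-network (the node/edge records and options here are illustrative, not the actual HELM source):

```javascript
// Node and edge records are defined inline in the HTML file.
// (Illustrative data; the real file defines the full JADA graph.)
const nodes = [
  { id: 'viator', label: 'Viator', group: 'revenue' },
  { id: 'crewDispatch', label: 'Crew Dispatch', group: 'internal' },
];
const edges = [
  { from: 'viator', to: 'crewDispatch', arrows: 'to' },
];

// In the browser, vis-network (loaded from its CDN script tag)
// renders this as a force-directed graph:
//   const network = new vis.Network(
//     document.getElementById('graph'),
//     { nodes: new vis.DataSet(nodes), edges: new vis.DataSet(edges) },
//     { physics: { solver: 'forceAtlas2Based' } }
//   );
```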

Technical Implementation Details

Node Data Structure

HELM defines three categories of nodes:

  • Revenue platforms: Viator, Boatsetter, GetMyBoat, direct bookings. Live nodes are styled in navy/gold; greyed-out nodes represent known platforms not yet integrated.
  • Campaign channels: Email campaigns, organic search, referral partner codes, direct traffic.
  • Internal systems: Each internal dashboard (progress tracker, ops tracker, expense tracker, crew dispatch) is a node. These nodes are queried via Google Apps Script (GAS) functions to report real-time health status.

Each node has metadata including:

id, label, type, color, size, title, gasFunctionName, drilldownPanel

The gasFunctionName field maps nodes to actual GAS functions (e.g., getExpenseTrackerHealth(), getCrewDispatchStatus()) that are called via HTTP POST to a deployed Google Apps Script endpoint.
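A node record with these fields, and the probe call it drives, might look like the following (field values and the endpoint URL are hypothetical placeholders):

```javascript
// One node record as used by the graph and the health prober.
const crewDispatchNode = {
  id: 'crew-dispatch',
  label: 'Crew Dispatch',
  type: 'internal',
  color: '#1f3a5f',          // navy styling for live nodes
  size: 24,
  title: 'Crew dispatch scheduling system',
  gasFunctionName: 'getCrewDispatchStatus',
  drilldownPanel: 'crew-dispatch-panel',
};

// Calls the node's GAS function via the deployed Apps Script web app.
// `gasEndpoint` stands in for the real /exec URL.
async function probeNode(node, gasEndpoint) {
  const res = await fetch(gasEndpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ fn: node.gasFunctionName }),
  });
  return res.json();
}
```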

Health Diagnostics

I integrated live health probing. Every 30 seconds, HELM polls a central GAS endpoint, which in turn calls functions in the crew dispatch, ops, and expense tracker spreadsheets. Node colors update in real time:

  • Green: System healthy, recent data, no critical issues.
  • Yellow: Degraded; slow responses or stale data.
  • Red: Critical failure; API down or data unavailable.

A system health bar at the top of the page aggregates all node statuses, giving operators an instant overview before drilling into details.
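One plausible shape for the polling loop and the status-to-color mapping (a sketch; the real endpoint, payload shape, and variable names may differ):

```javascript
// Map a reported health status onto the node colors described above.
function statusToColor(status) {
  switch (status) {
    case 'healthy':  return 'green';
    case 'degraded': return 'yellow';
    default:         return 'red';   // critical failure or unknown
  }
}

// Every 30 seconds, poll the central GAS endpoint and recolor nodes.
// `nodesDataSet` is the vis.DataSet backing the graph; `gasEndpoint`
// is a placeholder for the deployed Apps Script URL.
function startHealthPolling(nodesDataSet, gasEndpoint) {
  return setInterval(async () => {
    const res = await fetch(gasEndpoint, { method: 'POST' });
    const statuses = await res.json(); // e.g. { 'crew-dispatch': 'healthy', ... }
    for (const [id, status] of Object.entries(statuses)) {
      nodesDataSet.update({ id, color: statusToColor(status) });
    }
  }, 30_000);
}
```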

Drill-Down Detail Panel

Clicking any node opens a right-side panel showing:

  • Node metadata and real-time health status.
  • Related GAS function signatures extracted from source files.
  • Links to source dashboards (e.g., https://docs.google.com/spreadsheets/d/{SHEET_ID}).
  • Raw response data from the last health check, formatted as JSON.

This eliminates context-switching: you can investigate a slow crew dispatch system without opening a separate tab.
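The panel contents can be assembled from the node record and the last probe result, roughly like this (helper and field names are illustrative):

```javascript
// Build the inner HTML for the right-side drill-down panel.
function buildPanelHtml(node, lastHealthResponse) {
  return [
    `<h2>${node.label}</h2>`,
    `<p>Status: ${node.healthStatus || 'unknown'}</p>`,
    `<p>GAS function: <code>${node.gasFunctionName}</code></p>`,
    `<pre>${JSON.stringify(lastHealthResponse, null, 2)}</pre>`,
  ].join('\n');
}

// Wired to vis-network's click event, e.g.:
//   network.on('click', (params) => {
//     const node = nodesDataSet.get(params.nodes[0]);
//     panelEl.innerHTML = buildPanelHtml(node, lastResponses[node.id]);
//     panelEl.classList.add('open');
//   });
```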

Infrastructure: S3 + CloudFront + Route53

S3 Bucket Creation

I created a dedicated S3 bucket for HELM:

Bucket name: helm.queenofsandiego.com
Region: us-west-2
Versioning: Enabled

The bucket is private (no public ACL). Access is granted exclusively via a CloudFront Origin Access Control (OAC), preventing direct S3 URL access.
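The OAC pattern corresponds to a bucket policy along these lines (ACCOUNT_ID and DISTRIBUTION_ID are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCloudFrontOAC",
    "Effect": "Allow",
    "Principal": { "Service": "cloudfront.amazonaws.com" },
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::helm.queenofsandiego.com/*",
    "Condition": {
      "StringEquals": {
        "AWS:SourceArn": "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID"
      }
    }
  }]
}
```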

CloudFront Distribution

A CloudFront distribution was created with:

  • Origin: S3 bucket helm.queenofsandiego.com
  • OAC: Restricts S3 access to CloudFront only.
  • Cache behavior: TTL set to 60 seconds for index.html to allow rapid iteration, and 86400 seconds for static assets.
  • SSL/TLS: ACM certificate for helm.queenofsandiego.com (issued and managed by existing certificate infrastructure).
  • Custom domain: Distribution CNAME mapped to helm.queenofsandiego.com.

DNS (Route53)

Added an ALIAS record in the queenofsandiego.com hosted zone:

Record name: helm.queenofsandiego.com
Type: A (Alias)
Target: CloudFront distribution domain name
Evaluate target health: No

Deployment Workflow

I created a repeatable deployment pipeline (leveraging existing tooling in /Users/cb/Documents/repos/tools/):

# Upload to S3
aws s3 cp /Users/cb/Documents/repos/sites/helm/index.html \
  s3://helm.queenofsandiego.com/index.html \
  --profile [profile-name] \
  --content-type "text/html; charset=utf-8"

# Invalidate CloudFront cache
aws cloudfront create-invalidation \
  --distribution-id [DISTRIBUTION_ID] \
  --paths "/*" \
  --profile [profile-name]

Because index.html was edited 20+ times during development, cache invalidation became critical. I added a small polling script to wait for CloudFront deployment before smoke testing, preventing false negatives from stale cached versions.

Key Technical Decisions

Password Protection

HELM contains sensitive operational data (crew locations, booking pipelines, expense details). I implemented client-side password protection using SHA-256 hashing:

// Password is hashed client-side; the expected hash is embedded in the HTML
const requiredHash = "..."; // SHA-256 hash of the admin password (hex)
const digestBuffer = await crypto.subtle.digest('SHA-256',
  new TextEncoder().encode(userInput));
// Convert the ArrayBuffer digest to a lowercase hex string for comparison
const inputHash = Array.from(new Uint8Array(digestBuffer))
  .map((b) => b.toString(16).padStart(2, '0'))
  .join('');
if (inputHash === requiredHash) {