Building HELM: A Self-Contained Operations Graph Visualization for Complex Service Networks
We recently deployed HELM—an interactive, force-directed graph visualization of JADA's entire operations network—to helm.queenofsandiego.com. This post covers the architecture, infrastructure decisions, and implementation details of a single-file HTML application that renders dozens of interconnected systems and provides real-time health diagnostics.
What Was Built
HELM is a browser-based operations dashboard that visualizes the complete flow of JADA's business: from marketing channels (email campaigns, web search, referral codes) through booking platforms (Viator, Boatsetter, GetMyBoat) into financial systems, and finally into crew dispatch and operations management. The graph is force-directed, rendered with the vis-network library: nodes repel and attract each other under a physics simulation, producing an organic, explorable layout.
Key features:
- Multi-layer node system: Marketing acquisition channels, booking platforms, dashboards, financial flows, and operations nodes
- Live health probes: Nodes glow red or green based on real-time system status checks
- Drill-down detail panel: Click any node to see associated GAS functions, endpoints, or operational details
- Greyed-out future nodes: Platforms like additional booking channels are visualized but marked as inactive
- Dark navy/gold theme: Professional aesthetic designed to be presentation-ready for stakeholders
File Structure and Organization
The entire application lives in a single self-contained HTML file:
/Users/cb/Documents/repos/sites/helm/index.html
This decision—consolidating CSS, JavaScript, and HTML into one file—was intentional. For a visualization tool like this, a single file eliminates deployment complexity, reduces HTTP requests, and makes the artifact portable. The file contains inline styles, the vis-network library loaded from a CDN, and approximately 3,500 lines of custom JavaScript implementing the graph logic, health probes, and detail panel.
Technical Architecture
Node Data Structure
The application defines a central NODE_DATA object mapping system names to node metadata:
const NODE_DATA = {
  "email_campaigns": { label: "Email Campaigns", category: "acquisition" },
  "web_search": { label: "Web Search / Organic", category: "acquisition" },
  "viator": { label: "Viator", category: "booking", active: true },
  "boatsetter": { label: "Boatsetter", category: "booking", active: true },
  "gas_crewdispatch": { label: "GAS Crew Dispatch", category: "operations", functions: [...] },
  ...
}
Each node stores its display label, category (for styling), activation status, and—for operational nodes—an array of associated GAS function signatures extracted from the Google Apps Script codebase.
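To make the shape concrete, here is a sketch of how a map like NODE_DATA could be flattened into the node array that vis-network consumes. The category-to-color palette and the helper name are illustrative assumptions, not HELM's actual values:

```javascript
// Hypothetical category palette (not HELM's exact colors).
const CATEGORY_COLORS = {
  acquisition: "#c9a227", // gold
  booking: "#4a90d9",
  operations: "#3fae6a",
};

// Convert the NODE_DATA map into vis-network's flat node array.
function buildVisNodes(nodeData) {
  return Object.entries(nodeData).map(([id, meta]) => ({
    id,
    label: meta.label,
    color: CATEGORY_COLORS[meta.category] || "#888888",
    // Inactive/future platforms render greyed out.
    opacity: meta.active === false ? 0.3 : 1.0,
  }));
}

const nodes = buildVisNodes({
  viator: { label: "Viator", category: "booking", active: true },
  getmyboat: { label: "GetMyBoat", category: "booking", active: false },
});
// The second node comes back with reduced opacity (a greyed-out future node).
```

An array in this shape can be handed directly to `new vis.DataSet(nodes)` when constructing the network.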
Edge (Connection) Mapping
Edges represent data or control flow between systems. For example:
edges: [
  { from: "email_campaigns", to: "booking_platform", label: "lead" },
  { from: "viator", to: "payment_processor", label: "revenue" },
  { from: "crm", to: "gas_crewdispatch", label: "crew_availability" }
]
This edge list creates the visual flow graph. The label on each edge describes the type of data or transaction flowing between nodes.
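Because node IDs and edge endpoints live in two hand-maintained lists, a dangling reference is an easy mistake. A small sanity check like the following could catch it at build time; the helper is an illustrative sketch, not something HELM necessarily includes:

```javascript
// Return every edge whose endpoint is not a key in the node map.
function findDanglingEdges(nodeData, edges) {
  return edges.filter(
    (e) => !(e.from in nodeData) || !(e.to in nodeData)
  );
}

const bad = findDanglingEdges(
  { viator: {}, payment_processor: {} },
  [
    { from: "viator", to: "payment_processor", label: "revenue" },
    { from: "crm", to: "viator", label: "sync" }, // "crm" is undefined here
  ]
);
// bad contains only the edge referencing the undefined "crm" node.
```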
Health Probing System
Each operational node can define a probe function that returns a health status. The application polls these asynchronously:
function probeNodeHealth(nodeId) {
  // Probes might check:
  // - GAS script last execution timestamp
  // - Google Sheet row counts (for data validation)
  // - API endpoint availability
  // - Error log counts
  return { healthy: true, lastCheck: Date.now() };
}
Nodes render with a glowing border: green for healthy, red for degraded. A system health bar at the top aggregates all probe results into a single percentage score.
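The aggregation step can be sketched as follows. The function names and hex colors are illustrative; only the probe result shape (`{ healthy, lastCheck }`) comes from the snippet above:

```javascript
// Collapse per-node probe results into the health bar's percentage score.
function aggregateHealth(results) {
  const values = Object.values(results);
  if (values.length === 0) return 100; // no probes defined: treat as healthy
  const healthy = values.filter((r) => r.healthy).length;
  return Math.round((healthy / values.length) * 100);
}

// Pick the glow color for a single node's border.
function glowColor(result) {
  return result.healthy ? "#2ecc71" : "#e74c3c"; // green / red
}

const score = aggregateHealth({
  gas_crewdispatch: { healthy: true, lastCheck: Date.now() },
  crm: { healthy: false, lastCheck: Date.now() },
});
// score === 50
```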
Drill-Down Detail Panel
Clicking a node opens a detail panel showing:
- Function signatures from associated GAS files (extracted at build time)
- Data flow inputs and outputs
- Last probe result and timestamp
- Related dashboards or endpoints
This was implemented by pre-computing function signatures from key GAS files during development and embedding them in the node metadata.
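A minimal sketch of the rendering side, assuming the metadata shape shown earlier; the function name and output format are illustrative, not HELM's actual panel markup:

```javascript
// Render one node's pre-computed metadata and latest probe result
// into the detail panel's text content.
function renderDetail(nodeId, meta, probe) {
  const lines = [meta.label, `Category: ${meta.category}`];
  if (meta.functions && meta.functions.length) {
    lines.push("GAS functions:");
    meta.functions.forEach((fn) => lines.push(`  ${fn}`));
  }
  if (probe) {
    lines.push(`Health: ${probe.healthy ? "OK" : "DEGRADED"}`);
    lines.push(`Last check: ${new Date(probe.lastCheck).toISOString()}`);
  }
  return lines.join("\n");
}
```

In the app, a handler like vis-network's `network.on("click", params => ...)` would supply the clicked node's id, and this string (or equivalent HTML) would populate the panel.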
Infrastructure: S3, CloudFront, and DNS
S3 Bucket Creation
We created a dedicated S3 bucket for HELM:
aws s3api create-bucket \
  --bucket helm.queenofsandiego.com \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
The bucket name mirrors the intended subdomain, making DNS and CloudFront configuration straightforward.
CloudFront Distribution
Rather than expose the S3 bucket publicly, we created a CloudFront distribution with an Origin Access Control (OAC):
aws cloudfront create-distribution \
  --distribution-config file://helm-cf-config.json
Key configuration points:
- Origin: S3 bucket endpoint with OAC (not public ACLs)
- Default Root Object: index.html
- Caching: Short TTL (300 seconds) for index.html to allow quick updates; longer TTL for static assets
- Viewer Protocol Policy: Redirect HTTP to HTTPS
The CloudFront distribution ID was stored for cache invalidation during deployments.
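A deploy then reduces to two commands. This is a sketch; the `DISTRIBUTION_ID` variable and the cache-control value are illustrative placeholders, not the exact deployment script:

```shell
# Upload the single-file app, matching the short TTL configured in CloudFront.
aws s3 cp index.html s3://helm.queenofsandiego.com/index.html \
  --content-type "text/html" \
  --cache-control "max-age=300"

# Invalidate the cached index.html so the update is visible immediately.
aws cloudfront create-invalidation \
  --distribution-id "$DISTRIBUTION_ID" \
  --paths "/index.html"
```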
S3 Bucket Policy
We set a bucket policy allowing CloudFront's OAC principal to read objects:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "cloudfront.amazonaws.com"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::helm.queenofsandiego.com/*",
      "Condition": {
        "StringEquals": {
          "AWS:SourceArn": "arn:aws:cloudfront::ACCOUNT_ID:distribution/DISTRIBUTION_ID"
        }
      }
    }
  ]
}
Route53 DNS
We created an ALIAS record pointing helm.queenofsandiego.com to the CloudFront distribution:
aws route53 change-resource-record-sets \
  --hosted-zone-id ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "helm.queenofsandiego.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z...",
          "DNSName": "d123.cloudfront.net",
"EvaluateTarg