Deploying a Multi-Site Infrastructure: CloudFront + S3 Routing, Port Sheet Automation, and Receipt Management

This session involved coordinating deployments across two separate domains—quickdumpnow.com and queenofsandiego.com—while automating a port sheet logging system for charter operations. The work required careful management of S3 object keys, CloudFront cache invalidation, and Google Apps Script integration to handle both static content deployment and dynamic data ingestion.

Problem Statement

Two distinct requirements emerged:

  • quickdumpnow.com: The /books endpoint was returning the homepage instead of a dedicated receipt/ledger page for the trailer rental business.
  • queenofsandiego.com: The port sheet logging system needed to handle new charter entries (specifically a $1,845.72 payment) and maintain a structured multi-tab spreadsheet for port operations.

Part 1: Fixing CloudFront + S3 Routing for /books

Root Cause Analysis

The /books path was resolving to the homepage due to a CloudFront custom error response configuration. When S3 returned a 404 (because books/index.html hadn't been deployed), CloudFront's error page rule redirected the request to /index.html. This is a common pattern for single-page applications, but it breaks content-specific routes that should have their own dedicated pages.
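The failure mode can be reproduced from the distribution's custom error responses. Below is a sketch of the relevant fragment as boto3's `get_distribution_config` would report it — the values are illustrative assumptions, not pulled from the actual account — plus a tiny check showing why a 404 on a missing key surfaced as the homepage:

```python
# Illustrative shape of a CloudFront SPA-fallback error config; the TTL and
# quantity values are assumptions, not the real distribution's settings.
spa_error_config = {
    "CustomErrorResponses": {
        "Quantity": 1,
        "Items": [
            {
                "ErrorCode": 404,                   # S3 404s on a missing key...
                "ResponsePagePath": "/index.html",  # ...CloudFront serves the homepage
                "ResponseCode": "200",              # ...and masks the miss with a 200
                "ErrorCachingMinTTL": 10,
            }
        ],
    }
}

def resolves_to_homepage(error_config: dict, origin_status: int) -> bool:
    """True if this origin status triggers the SPA homepage fallback."""
    for item in error_config["CustomErrorResponses"]["Items"]:
        if item["ErrorCode"] == origin_status:
            return item["ResponsePagePath"] == "/index.html"
    return False

# Before books/index.html existed, S3 answered 404, so /books fell through:
print(resolves_to_homepage(spa_error_config, 404))  # True
```

Deploying the missing object changes the origin response from 404 to 200, so the error rule never fires.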

Solution: Multi-Key S3 Deployment Strategy

Rather than fighting the CloudFront configuration, we deployed the books page to two S3 keys:

  • s3://quickdumpnow.com/books/index.html — Traditional nested structure for clarity
  • s3://quickdumpnow.com/books — Bare key matching the exact path CloudFront forwards to the origin as the object key

This dual-key approach ensures CloudFront can find a matching S3 object regardless of how the request is routed. The second key is critical: when a user requests https://quickdumpnow.com/books, CloudFront looks for an object literally named books (no trailing slash), not a directory with an index.html file.
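This lookup rule can be sketched as a one-line resolver. Against an S3 REST origin, CloudFront performs no index-document substitution — the URL path minus the leading slash is the key, verbatim. The function below illustrates that rule (it is not CloudFront code):

```python
def s3_key_for(request_path: str) -> str:
    """S3 key CloudFront requests for a URL path against a REST origin:
    the path minus the leading slash, with no index-document magic."""
    return request_path.lstrip("/")

# Each URL form maps to its own object, hence the dual-key upload:
print(s3_key_for("/books"))             # 'books'
print(s3_key_for("/books/index.html"))  # 'books/index.html'
```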

Deployment commands executed:

# Verify local file structure
ls -la /Users/cb/Documents/repos/sites/quickdumpnow.com/books/

# Upload both keys to S3
aws s3 cp books/index.html s3://quickdumpnow.com/books/index.html --content-type "text/html"
aws s3 cp books/index.html s3://quickdumpnow.com/books --content-type "text/html"

# Invalidate CloudFront cache for both paths
aws cloudfront create-invalidation --distribution-id [DIST-ID] --paths "/books" "/books/*"

robots.txt Configuration

The robots.txt was updated to block search engine indexing of the /books endpoint. Since this is an internal ledger page (not a public product listing), indexing is unnecessary and could surface business-sensitive receipt data.

User-agent: *
Disallow: /books
Disallow: /admin

This was uploaded to s3://quickdumpnow.com/robots.txt and also invalidated in CloudFront to ensure search crawlers receive the updated directives immediately.

Why This Matters

CloudFront's custom error responses are powerful for SPA fallback patterns, but they introduce routing ambiguity. By ensuring S3 contains an object at the exact key CloudFront requests, we avoid the error-page redirect entirely. The dual-key strategy covers both URL forms: a request for /books resolves to the bare books object, while /books/index.html resolves to the nested key. Because both objects were uploaded with identical content and Content-Type, either path serves the same page with the same headers.

Part 2: Port Sheet Automation via Google Apps Script

Architecture Overview

The port sheet system is a Google Sheets workbook with multiple monthly tabs (e.g., March 2026, April 2026) backed by a Google Apps Script project (PortSheetReporter.gs). The script provides:

  • Sheet tab enumeration and validation
  • Row-level data append with timestamp and formula inheritance
  • Drive file access via the Google Sheets and Drive APIs
  • Local development mirroring to test format changes before production deployment
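The row-append path can be illustrated in Python (the production code is Apps Script; this is a behavioral sketch, and build_row is a hypothetical helper, not the deployed source):

```python
from datetime import datetime, timezone

def build_row(entry: dict, columns: list) -> list:
    """Order a charter entry's fields to match the sheet's header row and
    stamp the ingestion time. Hypothetical mirror of what the Apps Script
    appender assembles before writing a row."""
    row = [entry.get(col, "") for col in columns]
    row.append(datetime.now(timezone.utc).isoformat())  # trailing timestamp column
    return row

columns = ["Month", "Date", "Captain", "Charter Amount"]
entry = {"Month": "April 2026", "Captain": "Joseph Zurek", "Charter Amount": 1845.72}
row = build_row(entry, columns)
print(row[:4])  # ['April 2026', '', 'Joseph Zurek', 1845.72]
```

Missing fields become empty cells rather than shifting later columns, which is what keeps the tab's structure stable across entries.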

Challenge: Schema Mismatch

The port sheet uses a specific tabular format with columns like Month, Date, Captain, Charter Amount, and other operational fields. The initial attempt to append the Zurek charter entry ($1,845.72) failed because:

  • Row indices were incorrect (using zero-indexed Python against one-indexed Google Sheets)
  • Sheet IDs needed to be resolved dynamically rather than hard-coded
  • The entry needed to respect existing row height and column width formatting
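The first failure mode is worth pinning down: Google Sheets' A1 notation and row numbers are one-based, while Python iteration is zero-based. A small converter (illustrative, not part of the deployed script) makes the off-by-one explicit:

```python
def to_a1(row_idx: int, col_idx: int) -> str:
    """Convert zero-based (row, col) as used in Python loops to one-based
    A1 notation as used by Google Sheets."""
    col = ""
    col_idx += 1  # shift to one-based before base-26 letter conversion
    while col_idx:
        col_idx, rem = divmod(col_idx - 1, 26)
        col = chr(ord("A") + rem) + col
    return f"{col}{row_idx + 1}"

print(to_a1(0, 0))   # A1  -- Python's (0, 0) is Sheets' first cell
print(to_a1(4, 27))  # AB5
```

Forgetting the shift writes every entry one row above (or one column left of) its intended cell, which is exactly the corruption the initial append produced.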

Solution: XLS Reverse-Engineering

To understand the exact format, we downloaded the original Excel template and analyzed its structure using Python's xlrd library:

import xlrd
workbook = xlrd.open_workbook('Port_Sheet_JADA_Template.xls')

# Inspect all sheets
for sheet_name in workbook.sheet_names():
    sheet = workbook.sheet_by_name(sheet_name)
    print(f"Sheet: {sheet_name}")
    print(f"Dimensions: {sheet.nrows} rows x {sheet.ncols} cols")
    for row_idx in range(min(5, sheet.nrows)):
        print(sheet.row_values(row_idx))

This revealed the exact column ordering, header format, and data types. We then built an updated .xlsx using openpyxl, preserving March data and adding an April 2026 tab with the Zurek entry in the correct structure:

from openpyxl import Workbook
from openpyxl.utils import get_column_letter

wb = Workbook()
# Remove default sheet
wb.remove(wb.active)

# Create April tab with headers and Zurek entry
april_sheet = wb.create_sheet("April 2026")
headers = ["Month", "Date", "Captain", "Charter Type", "Charter Amount", ...]
april_sheet.append(headers)

# Append entry with correct data types
april_sheet.append([
    "April 2026",
    "2026-04-XX",  # Date format
    "Joseph Zurek",
    "Charter",
    1845.72,
    ...
])

Google Sheets Sync

Once the local .xlsx was validated, it was uploaded back to the same Drive file ID using the Google Drive API. The Apps Script project reads from this file and can now reliably append future entries without schema conflicts.
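The sync step is a standard Drive API media update: writing new bytes to an existing file ID preserves the ID, so the Apps Script binding keeps pointing at the same document. A hedged sketch — the file ID and path are placeholders, and upload_xlsx assumes an authenticated google-api-python-client Drive v3 service:

```python
XLSX_MIME = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"

def update_request_params(file_id: str, local_path: str) -> dict:
    """Parameters for drive.files().update(): same file ID in, same file ID
    out, so downstream readers keep working after the sync."""
    return {"fileId": file_id, "media_path": local_path, "mimetype": XLSX_MIME}

def upload_xlsx(service, file_id: str, local_path: str):
    """Assumes an authenticated googleapiclient Drive v3 service object."""
    from googleapiclient.http import MediaFileUpload  # google-api-python-client
    media = MediaFileUpload(local_path, mimetype=XLSX_MIME)
    return service.files().update(fileId=file_id, media_body=media).execute()

params = update_request_params("FILE_ID_PLACEHOLDER", "port_sheet.xlsx")
print(params["fileId"])  # FILE_ID_PLACEHOLDER
```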

Why This Two-Stage Approach?

Direct Google Sheets API calls for formatting (row heights, merged cells, number formatting) are verbose and fragile. By working with the Excel format locally, we leverage Excel's native tools to understand structure, then sync the result back. This is especially useful when reverse-engineering legacy