Deploying a Receipt Management System for quickdumpnow.com and Automating Port Sheet Generation for JADA

This session tackled two distinct infrastructure challenges: standing up a receipt management landing page for a trailer rental business, and hardening the automated port sheet generation pipeline for JADA sailing operations. The first hinged on CDN caching behavior; the second on OAuth token refresh cycles and cross-service orchestration.

The quickdumpnow.com Books Page Deployment

Problem: The /books endpoint on quickdumpnow.com was returning the homepage instead of a dedicated receipt management page, despite the HTML file existing locally.

Root Cause Analysis: The CloudFront distribution had a custom error response that redirected all 404s to the origin's homepage, and this was masking the real issue: the S3 object for the books page didn't exist at the expected key path. When a request came in for /books, CloudFront checked S3, found nothing, and triggered the custom error response, ultimately serving the homepage.

Solution Architecture:

  • Local file structure: /Users/cb/Documents/repos/sites/quickdumpnow.com/books/index.html contained the receipt management landing page HTML
  • S3 deployment strategy: Upload to both books/index.html and the bare books key to support clean URL routing. This ensures requests to /books (without trailing slash) resolve correctly
  • robots.txt exclusion: Updated /Users/cb/Documents/repos/sites/quickdumpnow.com/robots.txt to block crawlers from the receipt area, preventing indexing of business-sensitive documents
  • CloudFront invalidation: Issued invalidation patterns for both /books and /books/* to clear the CDN cache

Deployment Commands (Redacted):

# Verify S3 bucket structure
aws s3 ls s3://quickdumpnow-site-bucket/books/

# Upload the index file to both keys
aws s3 cp books/index.html s3://quickdumpnow-site-bucket/books/index.html --content-type text/html
aws s3 cp books/index.html s3://quickdumpnow-site-bucket/books --content-type text/html

# Create CloudFront invalidation
aws cloudfront create-invalidation --distribution-id [DIST_ID] \
  --paths "/books" "/books/*" "/robots.txt"

Why This Approach: Double-keying the object (both the bare books key and books/index.html) handles the ambiguity in how browsers request clean URLs. CloudFront's origin path configuration doesn't automatically resolve index files in subdirectories the way a traditional web server does, so explicit uploads to both keys eliminate routing edge cases. The robots.txt update prevents search engines from crawling receipt data, reducing SEO pollution and keeping the receipt area semi-private.

JADA Port Sheet Automation: OAuth Token Refresh and Entry Submission

Problem: The automated port sheet generation system, built around Google Apps Script and Python tooling, required a functioning Google Calendar API connection to fetch sailing event data. The existing OAuth refresh token had expired or become invalid, breaking the data pipeline.

Architecture Overview:

  • Primary script: /Users/cb/Documents/repos/tools/jada_port_sheet.py reads from the JADA Port Log 2026 spreadsheet (stored as Excel .xlsx, the format openpyxl supports), fetches calendar events via the Google Calendar API, and generates formatted port sheets
  • Apps Script component: /Users/cb/Documents/repos/sites/queenofsandiego.com/PortSheetReporter.gs handles server-side email delivery of port sheets to stakeholders
  • Calendar integration: A separate OAuth token file (outside the repo) stores the Google Calendar refresh token, scoped to read calendar events and manage calendar data
  • Credential management: Two distinct OAuth clients exist — one for Google Drive (file manipulation) and one for Google Calendar (event access)
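The calendar fetch at the heart of this pipeline can be sketched as below. This is a hedged illustration, not the actual contents of jada_port_sheet.py: the function names, the calendar_id parameter, and the date-extraction helper are all assumptions layered on the standard Google API client.

```python
def fetch_events(creds, calendar_id: str, time_min: str, time_max: str) -> list:
    """Fetch sailing events in a date window via the Google Calendar API."""
    from googleapiclient.discovery import build  # lazy import: third-party dependency
    service = build("calendar", "v3", credentials=creds)
    resp = service.events().list(
        calendarId=calendar_id,
        timeMin=time_min,    # RFC 3339, e.g. "2026-04-01T00:00:00Z"
        timeMax=time_max,
        singleEvents=True,   # expand recurring sailings into individual events
        orderBy="startTime",
    ).execute()
    return resp.get("items", [])

def event_date(event: dict) -> str:
    """All-day events carry 'date'; timed events carry an RFC 3339 'dateTime'."""
    start = event["start"]
    return start.get("date") or start["dateTime"][:10]
```

Handling both the date and dateTime shapes matters here: all-day calendar entries and timed charters come back in different formats, and a port sheet row needs a plain calendar date either way.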

Token Refresh Solution:

Created /Users/cb/Documents/repos/tools/reauth_jada_calendar.py to handle OAuth token refresh without user interaction in production environments. The script:

  • Inspects existing credential files to verify client ID consistency between Drive and Calendar OAuth clients
  • Attempts to refresh the existing token using the stored refresh_token
  • Falls back to the OAuth authorization code flow if the refresh fails (requires manual browser login once per deployment)
  • Persists the new access and refresh tokens to the credential file
  • Tests the refreshed token by making a live Calendar API call before considering the authentication successful
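The refresh-with-fallback logic above can be sketched roughly as follows. The file names (gcal_token.json, client_secret.json), the helper names, and the exact credential fields are illustrative assumptions; the real token file lives outside the repo.

```python
import json
from pathlib import Path

SCOPES = ["https://www.googleapis.com/auth/calendar"]

def can_refresh(creds: dict) -> bool:
    """A stored refresh_token plus client credentials is enough to try a silent refresh."""
    return all(creds.get(k) for k in ("refresh_token", "client_id", "client_secret"))

def interactive_flow() -> dict:
    """One-time browser-based authorization code flow (needs a human once per deployment)."""
    from google_auth_oauthlib.flow import InstalledAppFlow  # lazy third-party import
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    c = flow.run_local_server(port=8765)  # matches the registered localhost redirect URI
    return {"access_token": c.token, "refresh_token": c.refresh_token,
            "client_id": c.client_id, "client_secret": c.client_secret}

def reauth(creds_path: Path) -> dict:
    creds = json.loads(creds_path.read_text())
    if can_refresh(creds):
        from google.oauth2.credentials import Credentials  # lazy third-party imports
        from google.auth.transport.requests import Request
        c = Credentials(
            token=creds.get("access_token"),
            refresh_token=creds["refresh_token"],
            client_id=creds["client_id"],
            client_secret=creds["client_secret"],
            token_uri="https://oauth2.googleapis.com/token",
            scopes=SCOPES,
        )
        try:
            c.refresh(Request())          # silent refresh, no browser needed
            creds["access_token"] = c.token
        except Exception:
            creds = interactive_flow()    # refresh token revoked/expired: fall back
    else:
        creds = interactive_flow()
    creds_path.write_text(json.dumps(creds))  # persist new tokens
    return creds
```

A live Calendar API call against the refreshed token (as the script does) is still the only real proof of success; a refresh can return tokens that a downstream scope check later rejects.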

Credential Structure: Both OAuth clients store credentials in JSON format with fields for client_id, client_secret, access_token, refresh_token, and expiration timestamps. The Calendar client must have http://localhost:8765/ as an authorized redirect URI to support local testing (Google's loopback redirects use plain http, not https).

Port Sheet Entry Submission:

Once OAuth was functional, the system added a charter entry for a vessel that generated $1,845.72 in revenue:

  • Spreadsheet structure: JADA Port Log 2026 has tabs per month (e.g., "March", "April"). Each tab contains columns for vessel name, charter date, amount, and notes
  • Data transformation: The Python script reads the Excel template structure using openpyxl, preserves existing month data, and appends new charter entries in the correct row format
  • File upload: The updated spreadsheet is uploaded back to the same Google Drive file ID, overwriting the previous version
  • Email delivery: The Apps Script reads the updated sheet, formats the port sheet as HTML, and sends it to Maria, Edwin, and Caro via the PortSheetReporter
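The append step can be sketched as below, assuming the column order described above (vessel, date, amount, notes); the helper names are illustrative, not the actual functions in jada_port_sheet.py.

```python
from datetime import date

def charter_row(vessel: str, charter_date: date, amount: float, notes: str = "Charter"):
    """Format one charter entry in the Port Log column order."""
    return (vessel, charter_date.isoformat(), round(amount, 2), notes)

def append_charter(xlsx_path: str, month_tab: str, row) -> None:
    from openpyxl import load_workbook   # lazy import: third-party dependency
    wb = load_workbook(xlsx_path)        # loads all tabs; existing data is preserved
    ws = wb[month_tab]                   # e.g. "April"
    ws.append(row)                       # appends after the sheet's current max row
    wb.save(xlsx_path)
```

Loading with load_workbook and calling ws.append keeps existing month data intact, which is exactly the preservation requirement: only the target tab grows by one row.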

Example Port Sheet Entry (Structure):

Month: April
Year: 2026

| Vessel Name     | Date       | Amount    | Notes        |
|-----------------|------------|-----------|--------------|
| Joseph Zurek    | 2026-04-15 | $1,845.72 | Charter      |

Infrastructure and Lambda Integration

The port sheet pipeline is triggered by AWS Lambda, which needs the current Google Calendar refresh token to execute. The deployment process:

  • Stored the refreshed token in environment variables (NO hardcoded secrets in code)
  • Updated the Lambda function environment with GCAL_REFRESH_TOKEN via AWS CLI
  • Verified the token works by running a dry-run of the port sheet script before pushing to production
  • Configured CloudWatch logs to capture any failures during automated runs
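The environment update step can be sketched with boto3 as below; the function name is a placeholder assumption. One subtlety worth encoding: update_function_configuration replaces the entire Variables map, so the existing variables must be merged in rather than clobbered.

```python
def merged_env(existing: dict, token: str) -> dict:
    """Merge the new token into the existing variable map instead of replacing it."""
    env = dict(existing)
    env["GCAL_REFRESH_TOKEN"] = token
    return env

def set_refresh_token(function_name: str, token: str) -> None:
    import boto3  # lazy import: third-party dependency
    client = boto3.client("lambda")
    cfg = client.get_function_configuration(FunctionName=function_name)
    current = cfg.get("Environment", {}).get("Variables", {})
    client.update_function_configuration(
        FunctionName=function_name,  # e.g. "jada-port-sheet" (placeholder)
        Environment={"Variables": merged_env(current, token)},
    )
```

Reading the current configuration first is what makes this safe to run repeatedly during token rotations.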

Key Technical Decisions

S3 Double-Keying: Rather than relying on CloudFront origin path rewrite rules, uploading to both books and books/index.html is more explicit and easier to debug. It trades minimal storage overhead for maximum reliability.

Spreadsheet Format Preservation: The Excel .xlsx file uses row heights and column widths that conv