Deploying a Receipt Management System for quickdumpnow.com and Automating Port Sheet Reporting

This session involved two parallel infrastructure projects: deploying a receipts page for a trailer rental business and automating port sheet generation for a charter operation. Both required careful coordination of static site hosting, content delivery, and API integration patterns.

Project 1: Receipt Management Portal at quickdumpnow.com/books

The quickdumpnow.com domain needed a dedicated landing page for customers to submit trailer rental receipts. The requirement was simple but the deployment path revealed important lessons about CloudFront caching behavior and S3 directory structure.

File Structure and Deployment

The local repository structure mirrors the production deployment:

/Users/cb/Documents/repos/sites/quickdumpnow.com/
├── books/
│   └── index.html          # Receipt submission form
└── robots.txt              # SEO and crawler directives

The books/index.html file implements a simple receipt upload interface. However, deploying this required uploading to two S3 keys to support both URL patterns:

  • s3://quickdumpnow.com/books/index.html — explicit file reference
  • s3://quickdumpnow.com/books — directory-style "pretty URL"

Why both? With an S3 REST API origin (as opposed to the S3 website endpoint), CloudFront performs no index-document resolution below the root: a request for /books must match an object key exactly, or S3 returns an error and the distribution's 404 handling takes over. Uploading the same HTML to both keys lets /books resolve directly without ever hitting that error path.
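
The dual-key logic can be sketched as a small helper. The upload itself is shown only as a comment (a boto3 put_object call with an explicit ContentType, since S3 cannot infer one for the extensionless key); keys_for_pretty_url is a hypothetical name, not part of the actual deploy script:

```python
import mimetypes

def keys_for_pretty_url(key: str) -> list[tuple[str, str]]:
    """Return (s3_key, content_type) pairs so that both the explicit
    file URL and the extensionless "pretty" URL resolve at the origin."""
    ctype = mimetypes.guess_type(key)[0] or "binary/octet-stream"
    pairs = [(key, ctype)]
    if key.endswith("/index.html"):
        # The extensionless key carries no file extension, so S3 cannot
        # guess a Content-Type; it must be set explicitly to text/html
        # or browsers may download the page instead of rendering it.
        pairs.append((key[: -len("/index.html")], ctype))
    return pairs

# For each pair, the deploy step would call something like:
#   s3.put_object(Bucket="quickdumpnow.com", Key=s3_key,
#                 Body=html_bytes, ContentType=content_type)
```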

CloudFront and Custom Error Responses

The CloudFront distribution for quickdumpnow.com was configured with a custom error response that sends every 404 to the homepage. This is a common pattern for single-page applications but problematic for multi-page sites: a missing object silently shows the homepage instead of surfacing an error. The distribution had:

  • Origin: S3 bucket quickdumpnow.com
  • Custom error response: 404 → / (homepage redirect)
  • TTL: 3600 seconds for successful responses

After uploading the books page, an immediate CloudFront cache invalidation was necessary:

aws cloudfront create-invalidation \
  --distribution-id [DISTRIBUTION_ID] \
  --paths "/books" "/books/*" "/robots.txt"

The invalidation propagated within 30-60 seconds across CloudFront edge locations.

robots.txt Configuration

The robots.txt was updated to ask crawlers not to index the receipts page. Note that robots.txt is advisory only: it keeps well-behaved search engines out of the index, but it is not an access control, so the page itself must still avoid exposing sensitive data:

User-agent: *
Disallow: /books/

Project 2: Automated Port Sheet Generation and Email Distribution

The second project automated the generation and distribution of port sheets for charter operations. This required integrating Google Sheets, Google Apps Script, Google Calendar API, and Lambda functions across multiple credential scopes.

Architecture Overview

The system consists of several components:

  • Source of Truth: JADA Port Log 2026 — a Google Sheet tracking all charter operations
  • Aggregation: PortSheetReporter.gs — Google Apps Script that reads the port log and formats monthly port sheets
  • Python Automation: jada_port_sheet.py and reauth_jada_calendar.py — scripts handling spreadsheet manipulation and API authentication
  • Deployment: AWS Lambda executing scheduled port sheet generation

Google Sheets Integration

The JADA Port Log 2026 sheet maintains multiple tabs for different months (March, April, etc.). Each tab follows a consistent template structure with columns for:

  • Charter date
  • Captain/operator name
  • Vessel name
  • Port information
  • Revenue and expenses

The Python script jada_port_sheet.py reads from this sheet using the Google Sheets API, processes entries, and formats them into an Excel workbook matching a legacy template structure. The challenge was preserving the original Excel format (including row heights and column widths) while injecting new charter data.
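
The read-and-normalize step can be sketched as follows. The column names and the helper are illustrative (the real script's internals were not shown); the raw values would come from a Sheets API call such as spreadsheets().values().get(spreadsheetId=..., range="March!A2:E").execute()["values"]:

```python
# Column order assumed from the tab template described above.
COLUMNS = ["charter_date", "captain", "vessel", "port", "revenue"]

def rows_to_entries(values: list[list[str]]) -> list[dict[str, str]]:
    """Convert the raw 2-D value list from the Sheets API into dicts,
    padding short rows (the API omits trailing empty cells)."""
    entries = []
    for row in values:
        padded = row + [""] * (len(COLUMNS) - len(row))
        entries.append(dict(zip(COLUMNS, padded)))
    return entries
```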

Excel Format Preservation

The port sheet template was maintained in Excel format. To programmatically update it while preserving formatting, we used the openpyxl library (which reads and writes only .xlsx workbooks; a legacy binary .xls file must be converted to .xlsx first):

pip install openpyxl

The script downloads the existing port sheet from Google Drive, parses its structure with openpyxl, injects new charter entries (like the Joseph Zurek entry valued at $1,845.72), and reuploads the modified file to the same Drive file ID.
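
A minimal sketch of the preserve-and-inject step, using an in-memory workbook in place of the Drive download (the Drive transfer, file IDs, and the injected row's details are illustrative):

```python
import io
from openpyxl import Workbook, load_workbook

# Stand-in for the template downloaded from Drive: one tab with
# explicit row/column dimensions, as in the real port sheet.
wb = Workbook()
ws = wb.active
ws.title = "March"
ws.append(["Charter date", "Captain", "Vessel", "Port", "Revenue"])
ws.column_dimensions["A"].width = 18.0
ws.row_dimensions[1].height = 24.0
buf = io.BytesIO()
wb.save(buf)

# Re-open the saved bytes and inject a new charter entry. openpyxl
# round-trips the dimension settings, which is what preserves the
# legacy layout when the file is re-uploaded to the same Drive ID.
buf.seek(0)
wb2 = load_workbook(buf)
ws2 = wb2["March"]
ws2.append(["2026-03-14", "Joseph Zurek", "JADA", "", 1845.72])
```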

OAuth Token Management

The system required valid OAuth2 refresh tokens for both Google Drive (reading/writing sheets) and Google Calendar (scheduling operations). These credentials have different scopes:

  • Drive scope: https://www.googleapis.com/auth/drive — read/write access to spreadsheets
  • Calendar scope: https://www.googleapis.com/auth/calendar — read event details for scheduling

Refresh tokens do not last indefinitely: Google revokes them after extended inactivity, and OAuth apps still in "testing" status receive refresh tokens that expire after seven days. A new script, reauth_jada_calendar.py, implements a local OAuth2 flow to mint fresh ones:

python reauth_jada_calendar.py
# Launches local web server on port 8765
# Opens browser for user consent
# Captures authorization code and exchanges for new tokens

The script handles the complete flow:

  • Starts local HTTP server listening on localhost:8765
  • Constructs OAuth authorization URL with required scopes
  • Captures the authorization code from the redirect
  • Exchanges code for access and refresh tokens
  • Saves new tokens to local credential file
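
The URL-construction step above can be sketched with the standard library alone (the client ID and redirect URI are placeholders; in practice a helper library such as google-auth-oauthlib wraps this entire flow):

```python
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_auth_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build the consent URL the script opens in the browser."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),
        "access_type": "offline",  # required to receive a refresh token
        "prompt": "consent",       # forces Google to re-issue one
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = build_auth_url(
    "[CLIENT_ID]",
    "http://localhost:8765/",
    ["https://www.googleapis.com/auth/drive",
     "https://www.googleapis.com/auth/calendar"],
)
```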

Port Management and Process Handling

During development, port 8765 was still held by an earlier run of the script that had not shut down cleanly, so the local server could not bind. Resolution required:

lsof -i :8765                    # Identify the process holding the port
kill [PID]                       # Terminate it (use kill -9 only if it ignores SIGTERM)
python reauth_jada_calendar.py   # Restart successfully
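
A quick pre-flight check for this failure mode can be written in a few lines of Python (a sketch; the actual script simply failed on bind):

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if the port can be bound, i.e. no stale process holds it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Before launching the local OAuth server:
#   if not port_is_free(8765):
#       raise SystemExit("port 8765 is busy -- run lsof -i :8765")
```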

AWS Lambda Deployment

The new refresh token was pushed to AWS Lambda environment variables:

aws lambda update-function-configuration \
  --function-name [FUNCTION_NAME] \
  --environment "Variables={GCAL_REFRESH_TOKEN=[NEW_TOKEN]}"
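
One caveat with this command: the Variables map passed to update-function-configuration replaces the function's entire environment, so any variables not listed are dropped. A safe update merges into the existing map first. A sketch, with the boto3 round trip shown as comments (merged_env_vars is a hypothetical helper):

```python
def merged_env_vars(current: dict, updates: dict) -> dict:
    """Merge new variables into the existing environment map, so the
    update does not silently drop unrelated settings."""
    merged = dict(current)
    merged.update(updates)
    return merged

# With boto3, the round trip looks like:
#   cfg = lam.get_function_configuration(FunctionName="[FUNCTION_NAME]")
#   current = cfg.get("Environment", {}).get("Variables", {})
#   lam.update_function_configuration(
#       FunctionName="[FUNCTION_NAME]",
#       Environment={"Variables": merged_env_vars(
#           current, {"GCAL_REFRESH_TOKEN": "[NEW_TOKEN]"})})
```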

The Lambda function jada_port_sheet.py executes on an EventBridge (formerly CloudWatch Events) schedule (typically monthly), generates the port sheet, and triggers email distribution to stakeholders (Edwin, Maria, Caro).

Key Decisions and Rationale

  • Dual S3 key upload: Ensures both /books and /books/ URLs work despite CloudFront's custom error responses