Deploying a Receipt Management Interface for Trailer Rental Operations: CloudFront + S3 Architecture
This post documents the deployment of a new receipt management system for a trailer rental business, built on top of existing static site infrastructure. The work involved creating a dedicated receipt upload interface at quickdumpnow.com/books, managing CloudFront caching behavior, and automating a Google Sheets-based port/charter logging system.
What Was Done
- Created and deployed a new receipt management page at /books on quickdumpnow.com
- Configured S3 and CloudFront to serve the new page with the correct content type and caching behavior
- Updated robots.txt to block search engine indexing of the receipt directory
- Automated charter entry logging into a Google Sheets port sheet, with openpyxl for Excel file sync
- Resolved CloudFront 404 behavior caused by the default error page configuration
Technical Details: S3 and CloudFront Configuration
The Problem: The books page existed in the local repository at /Users/cb/Documents/repos/sites/quickdumpnow.com/books/index.html, but accessing https://quickdumpnow.com/books returned the homepage instead of the receipt management interface. The CloudFront distribution was configured with a custom error response that redirects all 404s to the root homepage—a common pattern for single-page applications, but problematic for multi-page static sites.
The Solution: Deploy the page to S3 with multiple key variations to ensure CloudFront finds it regardless of how the URL is constructed.
S3 Upload Strategy
We uploaded books/index.html to the quickdumpnow S3 bucket using two distinct S3 keys:
s3://quickdumpnow-site/books/index.html
s3://quickdumpnow-site/books
The dual-key approach handles both the "pretty URL" request (/books) and the explicit file request (/books/index.html). S3 has no true directory semantics; a CloudFront Function or Lambda@Edge viewer-request rewrite could map /books to /books/index.html at the edge, but the simplest reliable approach is uploading identical content to both keys.
Each upload set:
- Content-Type: text/html; charset=utf-8
- Cache-Control: max-age=3600 (1-hour browser cache, respecting the CDN TTL)
- Gzip compression enabled (CloudFront compresses responses from S3 automatically when configured to)
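The dual-key upload can be sketched as a small plan builder. The actual transfer would go through boto3's put_object; here the plan is plain data so the key and header logic is easy to inspect. The bucket, keys, and headers mirror the deployment described above, and the helper name is ours, not part of the real deploy script.

```python
def build_upload_plan(bucket="quickdumpnow-site",
                      local_path="books/index.html"):
    """Return one put_object-style call spec per S3 key."""
    common = {
        "Bucket": bucket,
        "ContentType": "text/html; charset=utf-8",
        "CacheControl": "max-age=3600",  # 1-hour browser cache
    }
    # Identical content under both keys: the "pretty" /books request
    # and the explicit /books/index.html request.
    return [dict(common, Key=key) for key in ("books/index.html", "books")]

# Feeding each spec to boto3 would look roughly like:
#   s3 = boto3.client("s3")
#   for spec in build_upload_plan():
#       s3.put_object(Body=html_bytes, **spec)
```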
CloudFront Invalidation
After upload, we invalidated the CloudFront distribution cache:
aws cloudfront create-invalidation \
--distribution-id [DIST_ID] \
--paths "/books" "/books/*" "/robots.txt"
The invalidation request targeted three paths:
- /books: the directory-style request
- /books/*: any nested paths (future-proofing for additional receipt categories)
- /robots.txt: updated crawl directives (see below)
CloudFront invalidations typically propagate across all edge locations within 30–60 seconds. We verified deployment by checking the live URL after waiting for the invalidation status to reach Completed.
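Waiting for Completed can be automated instead of checked by hand. A minimal sketch, assuming a boto3-style CloudFront client (get_invalidation returns the invalidation's Status); the client is passed in so the loop can be exercised with a stub, and the IDs are placeholders:

```python
import time

def wait_for_invalidation(client, dist_id, inv_id,
                          poll_seconds=5, timeout=600):
    """Poll get_invalidation until its status is 'Completed'."""
    deadline = time.monotonic() + timeout
    status = "Unknown"
    while time.monotonic() < deadline:
        resp = client.get_invalidation(DistributionId=dist_id, Id=inv_id)
        status = resp["Invalidation"]["Status"]
        if status == "Completed":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError(f"invalidation {inv_id} still {status!r}")
```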
robots.txt Configuration
The receipts directory should not be indexed by search engines, both for privacy (client financial data) and to avoid exposing internal business operations. We added the directive:
User-agent: *
Disallow: /books/
This blocks all crawlers from the /books/ directory while allowing the rest of the site to be discoverable. The updated robots.txt was uploaded to S3 and invalidated through CloudFront.
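The directive can be sanity-checked locally with the standard library's robots.txt parser before uploading; the URLs below are the live site from this post, and "Googlebot" stands in for any crawler (it falls through to the * group):

```python
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /books/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# /books/ is blocked for all crawlers; the rest of the site stays open.
print(rp.can_fetch("Googlebot", "https://quickdumpnow.com/books/index.html"))  # False
print(rp.can_fetch("Googlebot", "https://quickdumpnow.com/"))                  # True
```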
Automated Charter Logging: Google Sheets Integration
Context: The business tracks charter bookings in a Google Sheets port sheet. Manual entry is error-prone. A recent charter generated $1,845.72 in revenue, and we needed to automate logging to reduce data entry overhead.
Google Apps Script Implementation
We modified /Users/cb/Documents/repos/sites/queenofsandiego.com/PortSheetReporter.gs to provide a reporting interface backed by the Apps Script API.
The workflow:
- User provides charter details (name, amount, date)
- Apps Script authenticates to the Google Sheets API
- Script appends a row to the Port Log sheet with formatted entries
- A secondary Python script syncs the sheet data to an Excel workbook for offline access
Python Port Sheet Sync Tool
We created /Users/cb/Documents/repos/tools/jada_port_sheet.py, which went through multiple iterations:
# Pseudocode workflow
1. Authenticate to Google Drive API
2. Download JADA Port Log 2026 sheet (Excel format)
3. Read existing month tabs using openpyxl
4. Validate and append new charter entries
5. Preserve existing data (March 2026 entries)
6. Create April 2026 tab with new charter entry
7. Upload updated workbook back to Drive
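Steps 3–6 hinge on picking the right tab without clobbering existing months. A minimal sketch of that decision, assuming the workbook uses "Month YYYY" tab names plus a template sheet; the helper is hypothetical, not the actual jada_port_sheet.py code:

```python
def resolve_month_tab(sheet_names, target="April 2026"):
    """Return (tab_name, needs_creation) for the target month.

    Existing month tabs (e.g. "March 2026") are left untouched; a new
    tab is created only when the target month is missing. The template
    sheet is never selected as a data tab.
    """
    data_tabs = [n for n in sheet_names if n.lower() != "template"]
    return target, target not in data_tabs

# With openpyxl, needs_creation=True would translate to:
#   ws = wb.create_sheet(title=tab_name)
```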
Why openpyxl? The port sheet is maintained in Excel format for offline accessibility and to support complex formatting (merged cells, conditional formatting) that Google Sheets can't replicate 1:1. The openpyxl library provides native Python support for reading/writing .xlsx files while preserving formatting and sheet structure.
Installation required the --break-system-packages flag because the system Python is marked as externally managed (PEP 668):
pip install --break-system-packages openpyxl
The tool reads the existing workbook structure, identifies the correct sheet tab (accounting for template vs. data sheets), and appends the charter entry in the expected format:
- Column A: Date (formatted as M/D/YYYY)
- Column B: Name (e.g., "Joseph Zurek")
- Column C: Amount (currency format, e.g., "$1,845.72")
- Column D: Notes (charter type, details)
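The four-column layout can be sketched as a small formatter. The formatting rules are inferred from the examples in this post; the date and the "Day charter" note are placeholders, and the function name is ours:

```python
from datetime import date

def format_charter_row(when, name, amount, notes):
    """Build one Port Log row in the column order above."""
    return [
        f"{when.month}/{when.day}/{when.year}",  # Column A: M/D/YYYY, no leading zeros
        name,                                    # Column B: client name
        f"${amount:,.2f}",                       # Column C: currency, thousands separator
        notes,                                   # Column D: charter type / details
    ]

row = format_charter_row(date(2026, 4, 12), "Joseph Zurek", 1845.72, "Day charter")
# → ["4/12/2026", "Joseph Zurek", "$1,845.72", "Day charter"]
```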
Sheet ID and Tab Resolution
Early iterations failed because we were appending to the wrong sheet tab. Google Sheets API requires explicit sheetId integers, which don't match the human-readable tab names. We resolved this by:
- Listing all tabs in the spreadsheet via spreadsheets().get()
- Filtering for the Port Log tab by name
- Extracting the numeric sheetId property
- Using that ID in subsequent append operations
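The resolution steps above can be sketched against the Sheets API v4 response shape (spreadsheets().get() returns a "sheets" list whose entries carry a "properties" object with "sheetId" and "title"); the metadata fragment and its sheetId values are illustrative:

```python
def find_sheet_id(spreadsheet_meta, title):
    """Return the numeric sheetId of the tab with the given title."""
    for sheet in spreadsheet_meta.get("sheets", []):
        props = sheet["properties"]
        if props["title"] == title:
            return props["sheetId"]
    raise KeyError(f"no tab named {title!r}")

# Example fragment shaped like a spreadsheets().get() response:
meta = {
    "sheets": [
        {"properties": {"sheetId": 0, "title": "Template"}},
        {"properties": {"sheetId": 812345678, "title": "Port Log"}},
    ]
}
sheet_id = find_sheet_id(meta, "Port Log")
```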
The correct sheet structure includes a Month label cell and formatted entry rows, which the Python script reads and preserves.