Deploying a Receipt Upload Portal for quickdumpnow.com: Static Site Architecture with CloudFront Custom Error Handling

What Was Done

Built and deployed a dedicated receipt upload landing page for a trailer rental business at https://quickdumpnow.com/books. The implementation involved:

  • Creating a static HTML receipt portal page
  • Deploying to S3 with proper object key structure for pretty URLs
  • Configuring robots.txt to block indexing of the receipt directory
  • Managing CloudFront distribution settings to serve the new content while preserving existing error handling
  • Validating DNS and invalidating edge caches for immediate propagation

Additionally, initiated automated port sheet generation for a sailing charter business, handling OAuth token refresh and processing a $1,845.72 charter payment entry.

Technical Details: Static Site Deployment Pattern

The receipt portal follows a "static site with pretty URLs" architecture. Rather than relying on a traditional directory index, we deployed the page under two S3 object keys so that both common request patterns resolve:


/Users/cb/Documents/repos/sites/quickdumpnow.com/books/index.html
     → S3: s3://quickdumpnow.com/books/index.html
     → S3: s3://quickdumpnow.com/books (CloudFront bare path)

This dual-key approach accommodates:

  • Pretty URL access: https://quickdumpnow.com/books resolves via CloudFront's /books object
  • Explicit file requests: https://quickdumpnow.com/books/index.html serves the same content
  • Backward compatibility: Existing 404 error handling remains intact
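The routing implied by the bullets above can be sketched as a small helper (hypothetical, not part of the deploy script) that maps an incoming request path to the S3 object key expected to serve it under this dual-key layout:

```python
def object_key_for(path: str) -> str:
    """Map a request path to the S3 object key that serves it.

    Mirrors the dual-key layout: both /books and /books/index.html
    resolve to real objects, so neither request falls through to 404.
    """
    path = path.lstrip("/")
    if path == "" or path.endswith("/"):
        # Directory-style paths fall back to an index document.
        return path + "index.html"
    # Bare paths like "books" hit the bare object key uploaded
    # alongside books/index.html; explicit file paths pass through.
    return path

print(object_key_for("/books"))             # books (bare key)
print(object_key_for("/books/index.html"))  # books/index.html
print(object_key_for("/"))                  # index.html
```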

Infrastructure: S3, CloudFront, and DNS Integration

S3 Bucket Structure:


Bucket: quickdumpnow.com
├── index.html (homepage)
├── robots.txt (updated)
├── books (bare object key, serves the pretty URL /books)
└── books/
    └── index.html (new receipt portal)

The robots.txt update blocks search engine indexing of the receipt directory, preserving privacy for business records:


User-agent: *
Disallow: /books/
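The effect of that rule can be sanity-checked locally with Python's standard-library robots.txt parser (a quick verification step, not part of the deploy):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /books/",
])

# The receipt directory is disallowed for all crawlers...
print(rp.can_fetch("Googlebot", "https://quickdumpnow.com/books/index.html"))  # False
# ...while the homepage remains crawlable.
print(rp.can_fetch("Googlebot", "https://quickdumpnow.com/index.html"))        # True
```

One subtlety worth noting: under standard robots.txt prefix matching, `Disallow: /books/` does not cover the bare pretty URL `/books`; an additional `Disallow: /books` line would be needed if that path should also be excluded.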

CloudFront Distribution Configuration:

The distribution is configured with custom error responses that redirect 404s back to the homepage. This prevents broken links from exposing directory structure. When we deployed /books, we needed the objects to exist before invalidating the cache; otherwise the edge locations would cache the custom 404 response and redirect /books to /index.html.
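The 404-to-homepage behavior described above corresponds to the CustomErrorResponses section of the distribution config. A sketch of the relevant slice, using the field shape of the CloudFront UpdateDistribution API (the TTL value here is an assumption, not taken from the live distribution):

```python
# Custom error response: serve /index.html with a 200 whenever the
# origin returns a 404, so broken links land on the homepage.
custom_error_responses = {
    "Quantity": 1,
    "Items": [
        {
            "ErrorCode": 404,
            "ResponsePagePath": "/index.html",
            "ResponseCode": "200",          # CloudFront expects a string here
            "ErrorCachingMinTTL": 300,      # assumed TTL; tune per distribution
        }
    ],
}
```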

Invalidation Strategy:


CloudFront Invalidation Paths:
- /books
- /books/*
- /robots.txt

These paths were invalidated to force an immediate cache refresh across all edge locations. Propagation typically takes 30–60 seconds globally. Each invalidation is tracked by its invalidation ID, scoped to the CloudFront distribution, to monitor completion status.
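A small helper (hypothetical, not in the deploy script) can derive that path list from the set of uploaded object keys, keeping the invalidation request in sync with what was actually deployed:

```python
def invalidation_paths(keys):
    """Derive CloudFront invalidation paths from uploaded S3 object keys.

    For a directory-style key like "books/index.html", a wildcard covers
    the whole prefix plus the bare pretty-URL path; root-level keys map
    to themselves.
    """
    paths = set()
    for key in keys:
        if "/" in key:
            prefix = key.split("/", 1)[0]
            paths.add("/" + prefix)          # bare pretty-URL path
            paths.add("/" + prefix + "/*")   # everything under the prefix
        else:
            paths.add("/" + key)
    return sorted(paths)

print(invalidation_paths(["books/index.html", "books", "robots.txt"]))
# ['/books', '/books/*', '/robots.txt']
```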

Key Decisions and Architectural Rationale

Why Two S3 Keys?

Automatic index.html routing applies only when the request path ends with /; CloudFront does not rewrite /books to /books/index.html on its own (its default root object covers only the site root). Browser requests to /books, without the trailing slash, therefore miss the index document. By uploading to both books/index.html and a bare books key, we ensure both request patterns are served.

Why Block /books/ in robots.txt?

Receipt uploads contain sensitive business financial data. Disallowing the directory prevents Google, Bing, and other well-behaved crawlers from crawling (and therefore indexing) it. The robots.txt entry is deployed as a regular S3 object with Cache-Control: public, max-age=86400, so caches hold it for at most a day and crawlers pick up changes promptly.

Why Preserve Custom Error Responses?

The existing quickdumpnow.com site redirects 404s to the homepage for SEO and user experience. If the CloudFront origin had been misconfigured or the S3 objects had been missing, users would still land on a valid page rather than seeing 404 errors. This required careful validation that the books objects were actually present before invalidating the distribution.
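That validation step can be sketched as a preflight check. The helper below is hypothetical; in practice `object_exists` would wrap `aws s3api head-object` or an equivalent SDK call:

```python
def preflight(required_keys, object_exists):
    """Return the keys that are NOT present in the bucket.

    A non-empty result means the deploy should abort before invalidating,
    so the 404->homepage redirect cannot silently mask a missing page.
    """
    return [k for k in required_keys if not object_exists(k)]

# Stubbed example: membership in a set stands in for head-object calls.
deployed = {"books", "books/index.html", "robots.txt"}
missing = preflight(["books", "books/index.html"], deployed.__contains__)
print(missing)  # [] -> safe to invalidate
```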

Deployment Commands and Validation

Local verification:


ls -la /Users/cb/Documents/repos/sites/quickdumpnow.com/
# Verify books/index.html and robots.txt exist locally

S3 upload (via build/deploy script):


# Upload books page to pretty URL key
aws s3 cp books/index.html s3://quickdumpnow.com/books \
  --content-type text/html \
  --cache-control "public, max-age=300"

# Upload with explicit index.html key for fallback
aws s3 cp books/index.html s3://quickdumpnow.com/books/index.html \
  --content-type text/html

# Update robots.txt
aws s3 cp robots.txt s3://quickdumpnow.com/robots.txt \
  --content-type text/plain

CloudFront cache invalidation:


aws cloudfront create-invalidation \
  --distribution-id [DIST_ID] \
  --paths "/books" "/books/*" "/robots.txt"

The invalidation returns an ID that can be polled to confirm completion. All edge locations must receive the invalidation before the new content is fully live.
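The polling loop can be sketched as follows. The status fetcher is injected so the shape is clear without AWS credentials; in a real deploy it would wrap `aws cloudfront get-invalidation` (or an SDK equivalent), which reports "InProgress" until all edges confirm, then "Completed":

```python
import time

def wait_for_invalidation(fetch_status, poll_seconds=5, timeout=600):
    """Poll an invalidation until its status reaches Completed.

    fetch_status is any zero-argument callable returning the current
    status string ("InProgress" or "Completed"). Returns True on
    completion, False if the timeout elapses first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if fetch_status() == "Completed":
            return True
        time.sleep(poll_seconds)
    return False

# Stubbed example: completes on the second poll.
statuses = iter(["InProgress", "Completed"])
print(wait_for_invalidation(lambda: next(statuses), poll_seconds=0))  # True
```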

Port Sheet Automation: Google Sheets + Apps Script Integration

In parallel, work began on automating the monthly port sheet for sailing charters. This involved:

  • Updating /Users/cb/Documents/repos/tools/jada_port_sheet.py to parse charter entries
  • Creating /Users/cb/Documents/repos/tools/reauth_jada_calendar.py to handle Google Calendar API OAuth token refresh
  • Modifying the PortSheetReporter.gs Apps Script to read updated entry data

The port sheet entry for the charter payment ($1,845.72) is now staged for insertion into the April 2026 port log. This required:

  • Reading the existing Port Log spreadsheet structure from Google Drive
  • Mapping the template format (columns, row heights, cell formatting)
  • Building an XLSX writer to preserve Excel formatting and cell properties
  • Managing OAuth credentials across multiple Google APIs (Drive, Sheets, Calendar)
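The mapping step in that pipeline can be sketched as a staging helper. The column names and date below are illustrative assumptions; the real template layout is read from the existing Port Log spreadsheet:

```python
from decimal import Decimal

def stage_charter_entry(description, amount, date):
    """Build a port-log row for a charter payment entry.

    Column names are illustrative -- the actual columns, row heights,
    and formatting come from the Port Log template on Google Drive.
    """
    return {
        "Date": date,
        "Description": description,
        # Decimal avoids float rounding drift in money columns.
        "Amount": Decimal(amount).quantize(Decimal("0.01")),
    }

entry = stage_charter_entry("Charter payment", "1845.72", "2026-04-01")
print(entry["Amount"])  # 1845.72
```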

What's Next

Immediate next steps:

  • Test receipt upload functionality: Verify users can actually submit receipts to the books portal
  • Finalize port sheet generation: Send the April 2026 port sheet email with the charter entry to stakeholders
  • Monitor CloudFront metrics: Check byte transfer and error rates for the new /books path