Deploying a Receipt Management System for Trailer Rentals: S3 + CloudFront Static Site Pattern
Overview
This post covers the deployment of a new receipt management landing page for a trailer rental business at quickdumpnow.com/books. The work involved updating the CloudFront-fronted S3 static site infrastructure, handling pretty URL routing, and managing search engine crawling behavior—all while maintaining an existing site structure.
What Was Done
We deployed a dedicated receipt upload interface for the trailer rental business by:
- Creating and updating an HTML landing page at `/Users/cb/Documents/repos/sites/quickdumpnow.com/books/index.html`
- Uploading the page to S3 under both `books/index.html` and a bare `books/` key to support pretty URLs
- Updating `robots.txt` to explicitly block this path from search engine indexing
- Invalidating the CloudFront distribution cache for the `/books` paths
- Verifying the CloudFront custom error response configuration
Technical Architecture
S3 Static Site Structure
The quickdumpnow.com site is hosted as a static website in S3. The repository structure mirrors the S3 bucket layout:
/Users/cb/Documents/repos/sites/quickdumpnow.com/
├── index.html (homepage)
├── robots.txt
└── books/
└── index.html (receipt management page)
S3 is configured for static website hosting, which means:
- Index documents are automatically served for directory requests
- Requests to `/books` are routed to `books/index.html`
- A 404 error response redirects to the homepage (configured in the CloudFront distribution)
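The routing rules above can be sketched as a small resolver. This is a simplified model of how a website endpoint maps request paths to object keys, not AWS code; `resolve_key` is a hypothetical helper:

```python
def resolve_key(request_path: str, index_document: str = "index.html") -> str:
    """Model how an S3 website endpoint maps a request path to an object key.

    Simplified: trailing-slash directory requests get the index document
    appended; other paths map directly to a key (leading slash stripped).
    """
    path = request_path.lstrip("/")
    if path == "" or path.endswith("/"):
        return path + index_document
    return path

# Directory requests resolve to the index document:
print(resolve_key("/"))        # index.html
print(resolve_key("/books/"))  # books/index.html
# A bare path maps straight to an object key:
print(resolve_key("/books"))   # books
```

The last case is why the bare `books/` key matters: without it, `/books` doesn't resolve to the index document directly.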
CloudFront Distribution Configuration
The CloudFront distribution sits in front of the S3 bucket origin, providing:
- Edge caching for performance and reduced S3 API costs
- Custom error responses that redirect 404s to the homepage (this is why the old `/books` page returned the homepage before deployment)
- Pretty URL support by routing directory requests to index documents
When we tested the page before deployment, CloudFront returned the homepage because the S3 object didn't exist yet and the custom 404 handler was active. After uploading the files and invalidating the cache, CloudFront began serving the new receipt page.
Deployment Process
File Upload Strategy
We uploaded the receipt page to two S3 keys to ensure both URL formats work:
- `s3://quickdumpnow.com/books/index.html` — serves when requesting `/books/index.html`
- `s3://quickdumpnow.com/books/` — serves as the directory index when requesting `/books` or `/books/`
This dual-key approach covers both URL forms: S3's website endpoint resolves the index document for trailing-slash directory requests like `/books/`, but a bare `/books` request isn't mapped to `books/index.html` the same way. Uploading the page content to the bare `books/` key as well ensures the prettier URL serves the page directly.
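The dual-key upload can be expressed as a small planning function. This is a sketch; `upload_keys_for` is a hypothetical helper, and the actual deployment used the AWS tooling rather than this code:

```python
def upload_keys_for(page_key: str, index_document: str = "index.html") -> list[str]:
    """Return the S3 keys a directory index page should be uploaded under.

    For books/index.html this yields both the explicit key and the bare
    directory key, so /books and /books/index.html both serve the page.
    """
    keys = [page_key]
    if page_key.endswith("/" + index_document):
        keys.append(page_key[: -len(index_document)])  # e.g. "books/"
    return keys

print(upload_keys_for("books/index.html"))
# ['books/index.html', 'books/']
```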
Search Engine Visibility Control
We updated robots.txt to block crawler access to the /books path:
User-agent: *
Disallow: /books
Rationale: The receipt management page is internal tooling, not public-facing content. Blocking it from indexing prevents search engines from crawling it and potentially diluting SEO signals. It also cuts unnecessary bot traffic through CloudFront.
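The effect of these two lines can be verified locally with Python's stdlib robots parser. This is just a sanity check of the rules, not part of the deployment:

```python
from urllib.robotparser import RobotFileParser

# The robots.txt rules deployed above, parsed with the stdlib parser.
ROBOTS_TXT = """\
User-agent: *
Disallow: /books
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# /books and everything under it is blocked for all crawlers...
print(parser.can_fetch("*", "/books"))             # False
print(parser.can_fetch("*", "/books/index.html"))  # False
# ...while the rest of the site remains crawlable.
print(parser.can_fetch("*", "/"))                  # True
```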
Cache Invalidation
After uploading, we invalidated the CloudFront distribution cache for:
- `/books`
- `/books/`
- `/books/index.html`
- `/robots.txt`
CloudFront invalidations propagate to all edge locations globally, typically completing within a few minutes. This ensures users see the new page promptly rather than hitting stale cached content.
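The invalidation list follows a simple pattern: every URL form the page is reachable under, plus any other files changed in the same deploy. A sketch with a hypothetical helper (the real invalidation was issued against the distribution with the AWS tooling):

```python
def invalidation_paths(page_dir: str, extra: list[str] = ()) -> list[str]:
    """Build CloudFront invalidation paths for a directory index page.

    Covers the bare path, trailing-slash path, and explicit index.html,
    plus any extra files (e.g. /robots.txt) changed in the same deploy.
    """
    base = "/" + page_dir.strip("/")
    return [base, base + "/", base + "/index.html", *extra]

print(invalidation_paths("books", ["/robots.txt"]))
# ['/books', '/books/', '/books/index.html', '/robots.txt']
```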
Infrastructure Details
S3 Bucket Configuration
The S3 bucket is configured with:
- Static website hosting enabled on the bucket (index document: `index.html`)
- CloudFront configured with the S3 static website endpoint as its origin
- No public ACLs — the bucket is not publicly readable; access only flows through CloudFront
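One common way to restrict a website-endpoint origin to CloudFront traffic is a bucket policy keyed on a secret custom header that CloudFront injects. This is an assumption about how such a setup might look, not a description of this deployment's actual policy; `"SECRET-VALUE"` is a placeholder:

```python
import json

# Hypothetical policy: allow GetObject only when the request carries a
# secret header value that the CloudFront origin config injects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::quickdumpnow.com/*",
        "Condition": {"StringEquals": {"aws:Referer": "SECRET-VALUE"}},
    }],
}
print(json.dumps(policy, indent=2))
```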
CloudFront Custom Error Response
The distribution has a custom error response configured:
- Error code: 404
- Response page path: `/index.html`
- HTTP response code: 200
This configuration masks 404 errors by silently serving the homepage, which is useful for catch-all behavior but requires careful testing when deploying new pages. Before the /books/index.html object existed in S3, requests to /books would 404, and CloudFront would respond with the homepage instead—masking the missing page.
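The masking behavior can be modeled in a few lines. This is a simplified model of the edge behavior, not AWS code:

```python
def serve(existing_keys: set[str], key: str,
          error_page: str = "index.html") -> tuple[int, str]:
    """Model CloudFront's custom error response over an S3 origin.

    A missing object never surfaces as a 404: the configured error page
    is returned with a 200 status instead.
    """
    if key in existing_keys:
        return 200, key
    return 200, error_page  # 404 rewritten to 200 + homepage

# Before the deploy, books/index.html didn't exist, so /books "worked"
# but silently served the homepage:
print(serve({"index.html"}, "books/index.html"))
# (200, 'index.html')

# After the deploy, the real page is served:
print(serve({"index.html", "books/index.html"}, "books/index.html"))
# (200, 'books/index.html')
```

Because the status is always 200, a simple status-code check can't catch a missing page; testing has to compare response bodies.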
Key Decisions and Rationale
Why Block /books in robots.txt?
The receipt page is internal administrative tooling, not customer-facing content. Blocking it prevents:
- Search engines indexing non-public pages
- Wasting crawler budget on internal tools
- Unintended exposure if the page URL leaks publicly
Why Upload to Two S3 Keys?
S3's static website hosting resolves index documents only for trailing-slash directory requests; it doesn't map a bare path like `/books` to `books/index.html` the way a traditional web server does. By uploading to both `books/index.html` and `books/`, we ensure the page is accessible via both `/books` and `/books/index.html`, which also leaves room for future URL changes or redirects.
Why Invalidate CloudFront?
CloudFront caches content at edge locations globally. Without invalidation, old cached content would persist until the TTL expires (potentially hours). Invalidation forces immediate cache purge, ensuring the new page is live globally within minutes.
Related Work: Automated Port Sheet Generation
During this session, we also advanced the automated port sheet system for the sailing business. A new charter entry (Joseph Zurek, $1,845.72) was added to the JADA Port Log spreadsheet. The infrastructure for this includes:
- Google Apps Script functions in `PortSheetReporter.gs` for reading/writing port data
- Python tooling in `jada_port_sheet.py` and `reauth_jada_calendar.py`