Deploying a Receipt Management System to quickdumpnow.com/books: S3, CloudFront, and Custom Error Handling
This post documents the deployment of a receipt management interface for a trailer rental business at https://quickdumpnow.com/books. The work involved creating a new static page, configuring S3 object routing, managing CloudFront cache invalidation, and debugging custom error response configurations that were interfering with the deployment.
What Was Done
We deployed a new books/receipts management page to the quickdumpnow.com domain by:
- Creating a new receipt management interface at /Users/cb/Documents/repos/sites/quickdumpnow.com/books/index.html
- Uploading the page to S3 with dual-key routing for pretty URL support
- Blocking the receipts area from search engines via robots.txt
- Invalidating CloudFront distribution cache to serve updated content
- Investigating and resolving a CloudFront custom error response configuration that was rewriting origin 404s into a 200 homepage response
Technical Details: S3 Object Routing and Pretty URLs
The initial deployment attempt uploaded books/index.html to S3, but browser requests to /books were returning the homepage instead of the new page. This happened because CloudFront's origin (the S3 bucket) didn't have an object at the /books path—only at /books/index.html.
To support pretty URLs without requiring S3 static website hosting (which would conflict with our existing CloudFront distribution configuration), we uploaded the content to two S3 keys:
- books/index.html: the actual content file
- books: a duplicate of the same content, treated as a directory object
This approach allows CloudFront to serve the page whether the request comes in as /books or /books/, without needing to modify origin request behavior at the distribution level.
Why this approach? Adding origin request lambda functions or modifying CloudFront behaviors adds operational complexity. The dual-key upload is a simpler, stateless solution that works with standard S3 GET requests.
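The dual-key upload can be sketched as follows. This is illustrative, not the actual deployment script: the bucket name is a placeholder, and `s3` is assumed to be a boto3-style S3 client.

```python
# Sketch of the dual-key upload for pretty URLs. The bucket name and
# local path are placeholders, not the real configuration.

def pretty_url_keys(pretty_path: str) -> list[str]:
    """Return the S3 keys needed to serve a page at a pretty URL.

    For "/books" this yields "books/index.html" (the canonical file) and
    "books" (a duplicate object, so a bare /books request also resolves).
    """
    path = pretty_path.strip("/")
    return [f"{path}/index.html", path]

def upload_pretty(s3, bucket: str, local_file: str, pretty_path: str) -> None:
    """Upload the same content under both keys via a boto3-style client.

    Content-Type is set explicitly because S3 cannot infer text/html
    from the extension-less duplicate key.
    """
    for key in pretty_url_keys(pretty_path):
        s3.upload_file(local_file, bucket, key,
                       ExtraArgs={"ContentType": "text/html"})
```

Setting the Content-Type on the extension-less key matters: without it, S3 serves the duplicate as binary/octet-stream and browsers may download it instead of rendering it.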
Infrastructure: CloudFront Custom Error Responses
During testing, we discovered that CloudFront was returning the homepage (a 200 response) for requests to /books, even after the S3 objects were in place. Investigation of the CloudFront distribution configuration revealed custom error responses configured at the distribution level:
- Error Code: 404
- Response Page Path: /index.html
- HTTP Response Code: 200
This configuration intercepts all 404 responses from the origin and returns the homepage instead. While useful for single-page applications, it prevented proper page delivery during the deployment window when S3 objects were still propagating.
The fix was to ensure the S3 objects were in place first, then immediately invalidate the cache so edge nodes would fetch the new objects rather than serving cached 404-to-homepage responses. Once the CloudFront edge nodes had fresh objects, they served the books page correctly.
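For reference, the configuration above looks roughly like this in the shape boto3's `get_distribution_config` returns (values here are illustrative), along with a small check that flags the 404-masking behavior:

```python
# The custom error response described above, in the CustomErrorResponses
# shape used by the CloudFront API (TTL value is a placeholder).
CUSTOM_ERROR_RESPONSES = {
    "Quantity": 1,
    "Items": [{
        "ErrorCode": 404,
        "ResponsePagePath": "/index.html",
        "ResponseCode": "200",     # the CloudFront API encodes this as a string
        "ErrorCachingMinTTL": 10,  # placeholder TTL
    }],
}

def masks_404(config: dict) -> bool:
    """True if the distribution rewrites origin 404s into 200 responses,
    which is what hid the missing /books object during this deployment."""
    return any(item["ErrorCode"] == 404 and item["ResponseCode"] == "200"
               for item in config.get("Items", []))
```

A check like `masks_404` is worth running before debugging any "wrong page served" symptom on a distribution, since this configuration makes every missing object look like a successful homepage response.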
Robots.txt Configuration
We updated /Users/cb/Documents/repos/sites/quickdumpnow.com/robots.txt to block search engine indexing of the receipts area:
User-agent: *
Disallow: /books
Disallow: /books/
This keeps the receipts management page out of search results. Note that robots.txt only discourages crawling; it is not an access control, so it complements rather than replaces authentication for business financial data. The file was deployed alongside the books page and invalidated in CloudFront.
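The rules above can be sanity-checked locally with the standard library's robots.txt parser before deploying:

```python
# Quick local check that the robots.txt rules block the receipts area,
# using the standard library's robots.txt parser.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /books
Disallow: /books/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# /books and anything beneath it should be blocked; the root stays crawlable.
assert not rp.can_fetch("*", "https://quickdumpnow.com/books")
assert not rp.can_fetch("*", "https://quickdumpnow.com/books/receipt-1")
assert rp.can_fetch("*", "https://quickdumpnow.com/")
```

Because `Disallow` rules are prefix matches, `Disallow: /books` alone already covers `/books/` and deeper paths; the explicit trailing-slash rule is redundant but harmless.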
CloudFront Cache Invalidation
Two separate invalidation requests were required to ensure proper cache clearing:
- First invalidation: /books and /robots.txt
- Second invalidation: /books* (wildcard) to catch all books-related paths
CloudFront invalidations typically propagate across all edge nodes within 30–60 seconds. We verified deployment success by checking that https://quickdumpnow.com/books returned the receipt management page rather than the homepage.
Why two invalidations? The first covered the specific paths we deployed. The second used a wildcard pattern to ensure any edge case paths (e.g., /books/ with trailing slash, or caching inconsistencies) were cleared. This belt-and-suspenders approach reduced the risk of stale content being served from some edge locations.
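The two invalidation passes can be sketched like this, using the request shape boto3's CloudFront `create_invalidation` expects. The distribution ID is a placeholder, and `cloudfront` is assumed to be a boto3-style client:

```python
# Sketch of the two invalidation passes described above.
import time

DISTRIBUTION_ID = "EXXXXXXXXXXXXX"  # placeholder, not the real distribution

def invalidation_batch(paths: list[str], caller_reference: str) -> dict:
    """Build the InvalidationBatch payload for create_invalidation."""
    return {
        "Paths": {"Quantity": len(paths), "Items": paths},
        "CallerReference": caller_reference,  # must be unique per request
    }

def invalidate(cloudfront, paths: list[str]) -> None:
    """Issue one invalidation via a boto3-style CloudFront client."""
    cloudfront.create_invalidation(
        DistributionId=DISTRIBUTION_ID,
        InvalidationBatch=invalidation_batch(paths, str(time.time())),
    )

# First pass: the specific objects we deployed.
#   invalidate(cf, ["/books", "/robots.txt"])
# Second pass: wildcard sweep for trailing-slash and variant paths.
#   invalidate(cf, ["/books*"])
```

Note that CloudFront bills wildcard paths as a single invalidation path, so the `/books*` sweep costs no more than an exact path while covering every variant.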
Key Decisions and Trade-offs
- Dual S3 key upload vs. Lambda@Edge: We chose dual uploads for simplicity and cost. Lambda@Edge functions add latency and operational overhead for what amounts to a URL rewrite problem. The dual-key solution is stateless and requires no additional AWS service management.
- Robots.txt blocking: Rather than relying solely on authentication, we used robots.txt as a first line of defense. This prevents search engines from discovering the receipts area even if authentication is accidentally misconfigured in the future.
- Custom error response awareness: This deployment revealed that CloudFront's custom error responses can mask deployment issues. We now check the distribution configuration before debugging missing pages.
What's Next
The books/receipts page is now live and accessible at https://quickdumpnow.com/books. Next steps include:
- Implementing receipt upload and storage functionality (likely S3 with presigned URLs)
- Adding authentication to the books page (currently unprotected)
- Creating backend receipt processing logic to validate and categorize trailer rental expenses
- Monitoring CloudFront metrics to ensure the books page isn't causing cache hit ratio degradation
We also need to address the separate port sheet automation work for Maria, which involves integrating with Google Sheets and Apps Script. That work is in progress and will be documented separately.