Deploying a Receipt Management System for Trailer Rentals: S3, CloudFront, and Custom Error Handling

This post documents the deployment of a receipt upload system for a trailer rental business at https://quickdumpnow.com/books, including infrastructure decisions around S3 object naming, CloudFront distribution configuration, and DNS routing.

What Was Done

We converted the quickdumpnow.com domain from a basic landing page to a functional receipt management interface. The implementation required:

  • Creating and deploying a new HTML receipt upload page to S3
  • Configuring proper S3 object structure for clean URL routing
  • Managing CloudFront caching and custom error responses
  • Updating robots.txt to exclude the new receipt endpoint from search indexing
  • Performing full cache invalidation to ensure immediate availability

Technical Details: S3 Object Structure and Pretty URLs

The key technical challenge was achieving a "pretty URL" structure at /books that serves index.html without exposing file extensions or requiring explicit routing rules.

We deployed the receipt page to two S3 object keys in the quickdumpnow-site-assets bucket:

  • books/index.html — The canonical location for the receipt form HTML
  • books — A duplicate of the same HTML at the extensionless key (no trailing slash), so a request for /books resolves directly

This dual-key approach ensures compatibility across different user agents and CloudFront caching behaviors. Some browsers and CDN configurations treat /books differently from /books/, so by uploading to both paths, we guarantee the page loads regardless of how the request arrives at the edge.

Why this pattern? Traditional S3 website hosting would resolve index documents automatically, but this distribution uses the S3 REST API endpoint as its origin, which performs no index-document resolution. Rather than relying on origin configuration, we make an object available at the exact key each request path maps to.
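The dual-key mapping can be sketched as a small helper. The bucket layout and key names come from this post; the function itself is illustrative, not part of any real deployment script:

```python
# Map a locally stored pretty-URL page to the S3 object keys it is
# uploaded to. A page at books/index.html is uploaded twice: once at
# its canonical key, and once at the extensionless key so a request
# for /books (no trailing slash) also resolves.

def s3_keys_for_pretty_url(local_file: str) -> list[str]:
    canonical = local_file.lstrip("/")                 # e.g. "books/index.html"
    keys = [canonical]
    if canonical.endswith("/index.html"):
        keys.append(canonical[: -len("/index.html")])  # e.g. "books"
    return keys

print(s3_keys_for_pretty_url("books/index.html"))  # ['books/index.html', 'books']
```

Files that are not index documents (robots.txt, for example) map to a single key unchanged.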

robots.txt Configuration

We updated /Users/cb/Documents/repos/sites/quickdumpnow.com/robots.txt to explicitly block the new /books path from search engine crawlers:

Disallow: /books

This is crucial for a receipt management system because:

  • Receipt data should not be indexed or cached by search engines
  • The page serves internal business functions, not public content
  • Preventing indexing reduces accidental discovery of the endpoint through search results (though robots.txt is advisory, not a security control, so the page itself must not expose sensitive financial information without authentication)

The updated robots.txt was deployed alongside the books page and invalidated in CloudFront.
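The rule can be sanity-checked locally with Python's standard urllib.robotparser before deploying. The Disallow line is the one shown above; the User-agent line is an assumption about the rest of the file (a robots.txt group needs one for the rule to apply at all):

```python
# Verify that compliant crawlers are blocked from /books but not from
# the rest of the site, using only the standard library.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /books
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# /books and everything under it is blocked; the root is not.
assert not rp.can_fetch("*", "https://quickdumpnow.com/books")
assert not rp.can_fetch("*", "https://quickdumpnow.com/books/index.html")
assert rp.can_fetch("*", "https://quickdumpnow.com/")
print("robots.txt rules behave as expected")
```

This catches the easy mistake of a Disallow line sitting outside any User-agent group, which crawlers would silently ignore.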

CloudFront Distribution Configuration and Custom Error Responses

During deployment, we discovered that https://quickdumpnow.com/books was returning the homepage instead of the receipt page. Investigation revealed the CloudFront distribution has a custom error response configured:

  • Error Code: 404 (Not Found)
  • Response Page Path: /index.html
  • HTTP Response Code: 200

This configuration was masking S3 object deployment issues by silently redirecting 404s to the root index. While useful for single-page applications, it hides infrastructure problems during development.

Why this pattern exists: The distribution likely serves multiple projects and implements a catchall fallback for graceful error handling. However, it required us to be more precise about object existence and naming conventions.
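Because the 404-to-index rewrite returns HTTP 200 either way, a plain status check cannot distinguish a successful deploy from the fallback. A small helper that also inspects the body for a page-specific marker works around this; the marker string here is hypothetical, and any text unique to the receipt form would do:

```python
# Distinguish the real receipt page from the homepage served by the
# custom error response. Both arrive as HTTP 200, so the body must be
# checked for a marker unique to the intended page.

def is_expected_page(status: int, body: str, marker: str) -> bool:
    return status == 200 and marker in body

# A missing S3 object still yields a 200, but with the homepage body:
assert not is_expected_page(200, "<title>QuickDumpNow Home</title>", "Receipt Upload")
assert is_expected_page(200, "<h1>Receipt Upload</h1>", "Receipt Upload")
```

The same check is what ultimately confirmed the /books deploy had landed (see the validation section below).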

CloudFront Cache Invalidation

After deploying the books page and robots.txt, we performed explicit cache invalidations on the CloudFront distribution:

Invalidation paths:
- /books
- /books/
- /books/index.html
- /robots.txt

Cache invalidation is critical because:

  • CloudFront edge locations cache successful (200) responses, and can also cache error responses for a period controlled by the error caching TTL
  • If a 404 (or the homepage served via the custom error response) was cached before we deployed the object, users would keep seeing the stale response even after deployment
  • The invalidation forces all edge nodes to refetch from the origin, typically within 30–60 seconds
  • Multiple path variations ensure we cover URL normalization behavior across clients

Invalidations are free for the first 1,000 paths per month, then charged per path. For a small deployment like this, using multiple path variations is well within the free tier.
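The four paths above can be derived mechanically from the deployed files, so future deploys cover the same URL variants automatically. This is purely local logic, a sketch with no AWS calls; a real deploy would pass the result to the CLI or an SDK:

```python
# Expand each deployed file into every URL variant CloudFront may have
# cached: the file itself, plus (for index.html files) the extensionless
# path with and without a trailing slash.

def invalidation_paths(deployed_files: list[str]) -> list[str]:
    paths = set()
    for f in deployed_files:
        key = "/" + f.lstrip("/")
        paths.add(key)                     # /books/index.html, /robots.txt
        if key.endswith("/index.html"):
            stem = key[: -len("/index.html")]
            paths.add(stem)                # /books
            paths.add(stem + "/")          # /books/
    return sorted(paths)

print(invalidation_paths(["books/index.html", "robots.txt"]))
# ['/books', '/books/', '/books/index.html', '/robots.txt']
```

The output matches the invalidation list used in this deployment.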

File Structure in Version Control

The local repository structure mirrors the S3 bucket layout:

/Users/cb/Documents/repos/sites/quickdumpnow.com/
├── books/
│   └── index.html          (Receipt form HTML)
├── robots.txt              (Search engine directives)
└── [other site files]

This structure allows us to manage S3 deployments via standard directory-based uploads, making the mapping between local files and cloud objects transparent.
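The directory-to-key mapping can be sketched as a short walk over the site root. The example builds a temporary copy of the layout above so it runs anywhere; a real deploy would point site_root at the repo and hand each pair to the AWS CLI or an SDK:

```python
# Pair each file under the site root with its S3 object key: the path
# relative to the root, with forward slashes. This is the mapping that
# makes local files and cloud objects line up one-to-one.
import pathlib
import tempfile

def upload_pairs(site_root: pathlib.Path) -> list[tuple[str, str]]:
    return sorted(
        (str(p), p.relative_to(site_root).as_posix())
        for p in site_root.rglob("*")
        if p.is_file()
    )

with tempfile.TemporaryDirectory() as d:
    root = pathlib.Path(d)
    (root / "books").mkdir()
    (root / "books" / "index.html").write_text("<h1>Receipts</h1>")
    (root / "robots.txt").write_text("User-agent: *\nDisallow: /books\n")
    for local, key in upload_pairs(root):
        print(key)   # books/index.html, then robots.txt
```

Note the extensionless books key from the dual-key scheme is the one object that breaks this one-to-one mapping and has to be uploaded as an extra step.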

Deployment Validation

We validated deployment success by:

  • Checking S3 object existence and content-type headers
  • Verifying CloudFront responded with HTTP 200 instead of 404
  • Confirming the receipt form HTML was served (not the homepage)
  • Testing both /books and /books/ URL variants
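The checklist above can be expressed as assertions over a fetched response. The fetch itself is stubbed out here as a dict of hypothetical values; in practice the same checks would run against the output of an HTTP client or curl -i:

```python
# Run the deployment checklist against a response and return a list of
# human-readable failures; an empty list means the deploy looks good.

def validate(resp: dict) -> list[str]:
    failures = []
    if resp["status"] != 200:
        failures.append(f"expected 200, got {resp['status']}")
    if not resp["content_type"].startswith("text/html"):
        failures.append(f"unexpected content-type {resp['content_type']}")
    if "Receipt" not in resp["body"]:
        failures.append("body looks like the homepage, not the receipt form")
    return failures

good = {"status": 200,
        "content_type": "text/html; charset=utf-8",
        "body": "<h1>Receipt Upload</h1>"}
assert validate(good) == []
```

Running it against both the /books and /books/ variants covers the last item on the list.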

Key Decisions

Why S3 + CloudFront instead of application server? Static S3 hosting with CloudFront provides:

  • No server maintenance overhead
  • Built-in DDoS protection at edge locations
  • Low latency from hundreds of global edge locations
  • Automatic compression and caching

Why dual object keys? S3 has no real directories: books, books/, and books/index.html are all distinct object keys, and the REST endpoint performs no index-document resolution. By uploading the page to both books/index.html and books, we avoid URL normalization issues where some clients or intermediaries add or remove trailing slashes.

Why explicit robots.txt blocking? While we could use a robots meta tag in the HTML, declaring the rule in robots.txt is standard practice and lets compliant crawlers skip the page entirely, without ever requesting the HTML.

What's Next

The receipt management system is now live and accessible. Upcoming work includes:

  • Implementing the receipt upload form backend (likely Lambda + DynamoDB or RDS)
  • Adding S3 presigned URL generation for file uploads
  • Configuring CORS policies if uploads come from the browser
  • Setting up access logs for audit trails of receipt uploads
  • Integrating with the port sheet automation for charter revenue tracking

The infrastructure foundation is solid—now we can focus on the application logic without worrying about CDN or static hosting issues.