
Deploying a Dynamic Charter Proposal Page: Infrastructure, S3 Distribution, and CloudFront Invalidation Strategy

What Was Done

Created and deployed a new charter proposal landing page at queenofsandiego.com/proposals/jada-charter-proposal-sue.html, implementing a complete infrastructure workflow from local development through S3 distribution to CloudFront cache invalidation. This post covers the technical decisions, deployment pipeline, and architecture patterns used to bring a complex proposal form to production.

Development Workflow and File Structure

The proposal system uses a straightforward but scalable directory structure within the main site repository:

/Users/cb/Documents/repos/sites/queenofsandiego.com/
├── proposals/
│   └── jada-charter-proposal-sue.html
├── assets/
│   └── images/
│       └── interior/
├── publish_static_site.sh
└── [other site files]

The HTML file was created and iterated through multiple refinement cycles, with each save triggering a local validation step. Rather than using a complex build system, the site uses a shell-based publish script that orchestrates the S3 sync and CloudFront invalidation.

Infrastructure: S3, CloudFront, and Route53

The deployment pipeline follows a standard static site architecture:

  • S3 bucket: queenofsandiego.com (origin bucket for static assets and HTML)
  • CloudFront distribution: Fronts the S3 bucket with edge caching and compression
  • DNS: Route53 points the domain at the CloudFront distribution, using an alias record for the apex domain (CNAME records are not permitted at a zone apex) and CNAMEs for subdomains

The proposal file is deployed to the proposals/ prefix in S3, making it publicly accessible via the CloudFront distribution. This approach decouples content updates from infrastructure changes—new proposals can be added without modifying DNS records or distribution configurations.

Deployment Pipeline: Shell Scripting and AWS CLI

The publish_static_site.sh script orchestrates the deployment in three stages:

  1. Environment initialization: Load AWS credentials from a sourced secrets file
    set -a
    source /path/to/.secrets/repos.env
    set +a
    This pattern uses set -a to automatically export all sourced variables, ensuring AWS CLI can access credentials without explicit exports.
  2. S3 sync: Copy the proposal HTML to the S3 bucket
    aws s3 cp proposals/jada-charter-proposal-sue.html \
      s3://queenofsandiego.com/proposals/jada-charter-proposal-sue.html
    Using cp rather than sync provides fine-grained control over individual file updates and reduces risk of unintended deletions.
  3. CloudFront invalidation: Clear the edge cache for the new content
    aws cloudfront create-invalidation \
      --distribution-id [DIST_ID] \
      --paths "/proposals/jada-charter-proposal-sue.html"

The script sources environment variables from a centralized secrets file, keeping credentials out of version control and enabling different configurations per environment (dev, staging, production).
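Putting the three stages together, the whole publish step fits in one short script. The sketch below is a reconstruction, not the site's actual publish_static_site.sh: the secrets path and distribution ID are placeholders, and DRY_RUN=1 (the default here) prints the AWS commands instead of running them.

```shell
#!/usr/bin/env bash
# Sketch of the three-stage publish flow. SECRETS and DIST_ID are
# placeholders, not the site's real values.
set -euo pipefail

SECRETS="${SECRETS:-/path/to/.secrets/repos.env}"    # placeholder path
BUCKET="queenofsandiego.com"
DIST_ID="${DIST_ID:-REPLACE_WITH_DISTRIBUTION_ID}"   # placeholder ID
FILE="proposals/jada-charter-proposal-sue.html"
DRY_RUN="${DRY_RUN:-1}"    # default: print commands rather than run them

run() {   # execute the command, or echo it when dry-running
  if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

# Stage 1: load credentials; set -a exports every variable the file defines.
if [ -f "$SECRETS" ]; then
  set -a
  # shellcheck source=/dev/null
  source "$SECRETS"
  set +a
fi

# Stage 2: copy the single file (cp, not sync, so nothing else is touched).
run aws s3 cp "$FILE" "s3://$BUCKET/$FILE"

# Stage 3: invalidate exactly that path at the edges.
run aws cloudfront create-invalidation \
  --distribution-id "$DIST_ID" \
  --paths "/$FILE"
```

Running it without arguments prints the two aws invocations, which makes the script easy to review before flipping DRY_RUN=0 for a real deploy.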

Cache Invalidation Strategy

Rather than invalidating the entire CloudFront distribution with a wildcard path such as /* (which counts as a single invalidation path but forces every edge location to refetch all content), we use path-specific invalidation:

  • Granular paths: Only /proposals/jada-charter-proposal-sue.html is invalidated, leaving other cached assets untouched
  • Fast propagation: CloudFront invalidations typically complete within a few minutes, after which users see fresh content
  • Cost efficiency: the first 1,000 invalidation paths per month are free across all distributions; beyond that, each path is billed individually

After deployment, we verify cache status using the CloudFront API:

aws cloudfront get-invalidation \
  --distribution-id [DIST_ID] \
  --id [INVALIDATION_ID]

This returns the invalidation status (InProgress or Completed), confirming when the edge caches have been cleared.
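That status check can be wrapped in a small polling loop. A sketch, with EXAMPLEDIST and the invalidation ID standing in for the real values returned by create-invalidation:

```shell
# Poll get-invalidation until the status flips to Completed.
# Both arguments are placeholders for the real distribution and
# invalidation IDs.
wait_for_invalidation() {
  local dist_id="$1" inv_id="$2" status=""
  while true; do
    status=$(aws cloudfront get-invalidation \
      --distribution-id "$dist_id" \
      --id "$inv_id" \
      --query 'Invalidation.Status' --output text)
    if [ "$status" = "Completed" ]; then
      break
    fi
    echo "status: $status; retrying in 10s..."
    sleep 10
  done
  echo "Invalidation $inv_id completed."
}

# Usage: wait_for_invalidation EXAMPLEDIST I2J0SM0X9BR3FA
```

The --query flag uses the AWS CLI's built-in JMESPath filtering, so no jq dependency is needed.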

Content Delivery and Edge Behavior

CloudFront is configured to serve the HTML with appropriate cache headers:

  • Origin: S3 bucket with no custom cache control headers (CloudFront defaults apply)
  • Compression: Gzip compression enabled for HTML, typically reducing transfer size by 60-70%
  • Default TTL: Typical CloudFront configuration uses 24-hour caching for HTML, with manual invalidations triggering fresh fetches
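Those defaults can also be overridden per object at upload time. The current pipeline does not do this; the sketch below (the max-age value is illustrative) shows how an explicit Cache-Control header could be attached with aws s3 cp, via a small dry-run wrapper:

```shell
# Not part of the current script -- a sketch of setting Cache-Control
# at upload time instead of relying on CloudFront's default TTL.
# upload_html echoes the command when DRY_RUN=1 (the default here).
upload_html() {
  local file="$1" max_age="${2:-300}"
  local cmd=(aws s3 cp "$file" "s3://queenofsandiego.com/$file"
             --content-type text/html
             --cache-control "max-age=$max_age, public")
  if [ "${DRY_RUN:-1}" = 1 ]; then
    echo "${cmd[*]}"       # dry run: print the command for inspection
  else
    "${cmd[@]}"            # real upload
  fi
}

# Dry-run example: prints the full aws s3 cp invocation.
upload_html proposals/jada-charter-proposal-sue.html
```

An explicit Cache-Control set on the S3 object is honored by both CloudFront and browsers, which would reduce the need for manual invalidations on frequently updated HTML.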

The HTML file itself includes embedded form logic and references to static assets (images, stylesheets) stored in the assets/ directory. Asset versioning via timestamped filenames or cache-busting parameters is not used here—instead, we rely on CloudFront's path-based invalidation when assets change.

Lessons and Design Decisions

Why shell scripting over a build system? For small static sites with infrequent deployments, a shell script is more maintainable than Webpack, Gulp, or Hugo. It's transparent, requires no build dependencies, and integrates naturally with AWS CLI tooling. As the site scales, this could graduate to a proper CI/CD pipeline (GitHub Actions, CodePipeline), but that would be overkill today.

Why path-specific invalidation? Invalidating `/proposals/*` or `/*` would work but is wasteful. Specific paths ensure only changed content is refreshed at edges, reducing latency for other parts of the site.

Why S3 + CloudFront over a traditional origin server? This architecture eliminates origin availability concerns, provides built-in DDoS mitigation via CloudFront, and scales to millions of concurrent users with zero configuration changes.

Deployment Verification

After deployment, we verify end-to-end delivery:

curl -s "https://queenofsandiego.com/proposals/jada-charter-proposal-sue.html" | head -50

This confirms the HTML contains the expected content. A separate header check (curl -I) shows whether CloudFront served the response: look for X-Cache: Hit from cloudfront or a Via header mentioning cloudfront.
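The header check can be scripted too. A sketch (assuming the standard X-Cache and Via headers CloudFront adds to responses) that reads headers on stdin and classifies the result:

```shell
# Read HTTP response headers on stdin; report whether CloudFront served
# the response and whether it was an edge cache hit.
check_cloudfront() {
  local headers
  headers=$(cat)                       # buffer stdin so we can grep twice
  if ! grep -qi 'cloudfront' <<<"$headers"; then
    echo "not served via CloudFront"
    return 1
  fi
  if grep -qi '^x-cache: hit' <<<"$headers"; then
    echo "cache hit"
  else
    echo "cache miss (origin fetch)"
  fi
}

# Usage against the live site:
#   curl -sI "https://queenofsandiego.com/proposals/jada-charter-proposal-sue.html" \
#     | check_cloudfront
```

A miss on the first request after an invalidation is expected; the second request from the same edge should report a hit.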

What's Next

Future improvements could include:

  • Automated testing: Add a step to validate HTML structure and required form fields before S3 upload
  • CI/CD integration: Use GitHub Actions to trigger deployments on commits to main branch
  • A/B testing: Deploy multiple proposal variants and use CloudFront functions to route traffic based on query parameters
  • Analytics: Embed event tracking to monitor proposal engagement and conversion rates

The current pipeline is production-ready and maintainable. It demonstrates how even small teams can leverage AWS's global infrastructure to deliver fast, reliable content at scale.
