Building an Auto-Generated Technical Blog Pipeline for Multi-Site Infrastructure Tracking
This session implemented a comprehensive system to automatically generate detailed technical blog posts documenting infrastructure changes and development work across four distinct websites: queenofsandiego.com, sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com. The goal was to create a granular, transparent record of all technical work that could be reviewed by stakeholders without exposing sensitive credentials.
What Was Done
- Created an automated blog generation system that captures Claude Code session transcripts and converts them into structured technical blog posts
- Established S3 buckets and CloudFront distributions for four new tech blog subdomains
- Configured DNS records across three different providers (Route53, Namecheap, GoDaddy) to point to the appropriate tech blog endpoints
- Integrated the blog generator into Claude Code's session lifecycle via a Stop hook
- Added navigation links in Ship's Papers menus across all primary sites
- Created a centralized memory system to track infrastructure configurations and project state
Technical Implementation Details
Blog Generator Architecture
The core system consists of two primary Python modules:
- /Users/cb/Documents/repos/tools/tech_blog_generator.py — Parses Claude Code session transcripts (JSONL format) and generates HTML blog posts with syntax highlighting and structured sections
- /Users/cb/Documents/repos/tools/tech_blog_init.py — Handles infrastructure provisioning: creates S3 buckets with proper bucket policies, provisions CloudFront distributions, manages ACM certificate validation, and updates DNS records
The generator extracts command execution history and tool use blocks from session transcripts, filters out sensitive data patterns (API keys, passwords, credentials), and renders them into semantic HTML with proper code block formatting. Posts are automatically categorized by site and timestamped.
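The sanitization step can be sketched roughly as follows. This is a minimal illustration of regex-based credential filtering, not the actual code in tech_blog_generator.py; the patterns and the `redact` helper are assumptions:

```python
import re

# Illustrative patterns only -- the real generator's filter set may differ.
SENSITIVE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key IDs
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[=:]\s*\S+"),  # key=value secrets
]

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Running every extracted command and tool-use block through a filter like this before rendering is what lets the posts be published without exposing credentials.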
Infrastructure Provisioning
Four distinct tech blog endpoints were created:
- tech.queenofsandiego.com — S3 bucket with CloudFront distribution (Route53 hosted zone)
- tech.sailjada.com — S3 bucket with CloudFront distribution (Route53 hosted zone, wildcard cert: *.sailjada.com)
- tech.dangerouscentaur.com — CNAME record pointing to existing wildcard CloudFront distribution E2Q4UU71SRNTMB on the dc-sites S3 bucket (Namecheap DNS)
- tech.burialsatseasandiego.com — S3 bucket with CloudFront distribution, CNAME validated at GoDaddy DNS provider
Each CloudFront distribution was configured with:
- S3 origin with restricted public access (origin access identity)
- Default index document: index.html
- Cache behaviors optimized for HTML documentation (TTL: 300 seconds default, 3600 for versioned assets)
- HTTPS-only redirect from HTTP
- Compression enabled for text/HTML assets
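The TTL split above can be sketched as a small config builder. The field names are modeled on CloudFront's DistributionConfig cache-behavior shape, and the `/assets/*` path pattern is a hypothetical choice for versioned assets, not taken from the actual provisioning code:

```python
def cache_behaviors(origin_id: str) -> dict:
    """Build default and versioned-asset cache behaviors for a tech blog distribution."""
    base = {
        "TargetOriginId": origin_id,
        "ViewerProtocolPolicy": "redirect-to-https",  # HTTPS-only redirect from HTTP
        "Compress": True,                             # compression for text/HTML assets
    }
    return {
        "DefaultCacheBehavior": {**base, "DefaultTTL": 300},  # HTML refreshes within 5 minutes
        "CacheBehaviors": {
            "Quantity": 1,
            "Items": [{**base, "PathPattern": "/assets/*", "DefaultTTL": 3600}],
        },
    }
```

The short default TTL keeps freshly published posts visible quickly, while versioned assets can safely cache for an hour.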
SSL/TLS Certificate Strategy
Existing wildcard certificates were leveraged where available (*.queenofsandiego.com, *.sailjada.com):
- dangerouscentaur.com — Wildcard distribution already in place; tech subdomain reused existing infrastructure
- burialsatseasandiego.com — New ACM cert provisioned, DNS validation CNAME added to GoDaddy-managed zone
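For the GoDaddy-validated cert, the DNS validation record has to be read out of the ACM certificate description before it can be added to the zone. A minimal sketch of that extraction (the response dict shape follows boto3's `describe_certificate`, but the surrounding provisioning code in tech_blog_init.py is assumed):

```python
def validation_cname(cert_description: dict) -> tuple[str, str]:
    """Extract the (record name, record value) pair for DNS validation
    from an ACM describe_certificate response."""
    options = cert_description["Certificate"]["DomainValidationOptions"]
    record = options[0]["ResourceRecord"]  # one option per domain on the cert
    assert record["Type"] == "CNAME"
    return record["Name"], record["Value"]
```

The returned pair is what gets written to the GoDaddy-managed zone as a CNAME; ACM then detects the record and moves the certificate to ISSUED.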
Session Lifecycle Integration
A Stop hook was created at /Users/cb/.claude/hooks/tech_blog_stop.sh and registered in Claude Code settings. This hook executes at the end of each session and:
- Reads the session transcript from Claude's session directory
- Invokes tech_blog_generator.py with the transcript path and site identifier
- Sanitizes output to remove any credential patterns (regex-based filters for AWS keys, API tokens, passwords)
- Uploads the generated HTML to the appropriate tech blog S3 bucket
- Invalidates the CloudFront distribution cache to serve fresh content immediately
- Logs execution details to ~/.claude/logs/tech_blog_generation.log
The hook is invoked conditionally: only when the TECH_BLOG_SITE environment variable is set to one of qos, jada, dc, or bats.
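The conditional dispatch could look roughly like this. Only the four site identifiers come from the hook; the bucket mapping and the `target_bucket` helper are hypothetical:

```python
import os

# Hypothetical mapping -- the identifiers are real, the bucket names assumed.
SITE_BUCKETS = {
    "qos": "tech.queenofsandiego.com",
    "jada": "tech.sailjada.com",
    "dc": "tech.dangerouscentaur.com",
    "bats": "tech.burialsatseasandiego.com",
}

def target_bucket(env=os.environ):
    """Return the destination bucket, or None when the hook should skip generation."""
    return SITE_BUCKETS.get(env.get("TECH_BLOG_SITE", ""))
```

When the lookup returns None, the hook exits without generating or uploading anything, so ordinary sessions produce no blog output.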
Navigation Integration
Updated Ship's Papers dropdown menus in all primary sites to include links to respective tech blogs:
- /Users/cb/Documents/repos/sites/queenofsandiego.com/index.html — Added "Tech Blog" link in Ship's Papers
- Similar updates to sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com navigation structures
Links are styled to match existing navigation patterns and appear in context-sensitive Ship's Papers dropdowns.
State Management and Documentation
A centralized memory system was established in ~/.claude/projects/memory/ to track:
- project_tech_blogs.md — Project overview, site mappings, and infrastructure status
- reference_godaddy_credentials.md — GoDaddy API integration notes (credentials stored securely outside repo)
- MEMORY.md — Running project state, decisions made, and outstanding tasks
Key Decisions
- Transcript-based approach: Rather than instrumenting application code, we capture the actual session transcript. This is language-agnostic and captures the full context of decisions and commands without requiring code modifications.
- Provider diversity: Maintained existing DNS provider relationships (Route53 for AWS-managed zones, Namecheap for dangerouscentaur, GoDaddy for burialsatseasandiego) rather than consolidating, since each domain has distinct hosting history.
- Wildcard certificate reuse: Leveraged existing wildcard certs to reduce ACM certificate provisioning overhead and validation cycles.
- Post-session generation: Blog posts are generated at session end via Stop hook rather than real-time streaming. This ensures complete session context and allows for comprehensive sanitization before publication.
- CloudFront caching strategy: A short default TTL (300 seconds) balances freshness with invalidation cost, since tech blog posts are typically referenced hours or days after generation.
What's Next
- Test blog generation end-to-end with actual session transcripts across all four sites
- Verify CloudFront distributions are healthy and DNS propagated globally