Multi-Site Technical Blog Infrastructure: Auto-Generated Posts from Claude Sessions
What Was Done
Built an automated system to capture granular technical work across four domains (queenofsandiego.com, sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com) and publish real-time blog posts to their respective tech subdomains. This system transforms Claude session transcripts into detailed, credentials-scrubbed technical documentation automatically.
Core Components Built
1. Session Capture and Processing
Created tech_blog_generator.py which:
- Parses Claude session JSONL transcripts to extract tool use patterns, file modifications, and command execution history
- Identifies which domain each session touches via file path analysis (e.g., paths containing /sites/queenofsandiego.com/)
- Scrubs all credentials, API keys, tokens, and sensitive data before post generation
- Extracts meaningful technical context: exact S3 bucket names, CloudFront distribution IDs, Route53 hosted zone identifiers, file paths, and architectural decisions
- Generates HTML blog posts with proper semantic structure (<h2>, <h3>, and <code> blocks)
2. Infrastructure Initialization
Wrote tech_blog_init.py to provision:
- S3 Buckets: Four separate buckets for blog content storage:
  - tech-qos-blog (queenofsandiego.com)
  - tech-jada-blog (sailjada.com)
  - tech-dc-blog (dangerouscentaur.com)
  - tech-bats-blog (burialsatseasandiego.com)
- CloudFront Distributions: One per domain, with proper caching headers and gzip compression. Used existing wildcard certificates where available:
  - *.queenofsandiego.com (existing wildcard ACM cert)
  - *.sailjada.com (existing wildcard ACM cert)
  - dangerouscentaur.com wildcard distribution (E2Q4UU71SRNTMB) via Namecheap CNAME
  - New burialsatseasandiego.com cert via GoDaddy DNS validation
- DNS Configuration:
  - Route53 ALIAS records for tech.queenofsandiego.com and tech.sailjada.com pointing to CloudFront
  - Namecheap CNAME for tech.dangerouscentaur.com to the existing wildcard distribution
  - GoDaddy CNAME for tech.burialsatseasandiego.com with ACM certificate DNS validation
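The private-bucket setup above hinges on a bucket policy that admits only CloudFront. A sketch of that policy document follows; the bucket names are from the list above, but the helper name is hypothetical and the real tech_blog_init.py may build it differently.

```python
BUCKETS = {
    "queenofsandiego.com": "tech-qos-blog",
    "sailjada.com": "tech-jada-blog",
    "dangerouscentaur.com": "tech-dc-blog",
    "burialsatseasandiego.com": "tech-bats-blog",
}

def oac_bucket_policy(bucket: str, distribution_arn: str) -> dict:
    """Policy allowing only the named CloudFront distribution to read the bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Scope the grant to one distribution, not all of CloudFront.
            "Condition": {"StringEquals": {"AWS:SourceArn": distribution_arn}},
        }],
    }

# Applying it is one call per domain, e.g. with boto3:
#   s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```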
3. Claude Code Integration Hook
Added a Stop hook at /Users/cb/.claude/hooks/tech_blog_stop.sh which:
- Executes automatically when a Claude session ends
- Copies the session transcript to a working directory
- Invokes tech_blog_generator.py with the transcript path
- Routes generated HTML posts to the appropriate S3 bucket
- Invalidates the CloudFront cache for the affected tech blog
- Logs all operations to ~/.claude/logs/tech_blog_hook.log for auditing
Registered this hook in /Users/cb/.claude/settings.json under the hooks configuration so it triggers on every session termination.
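The hook's pipeline amounts to a short sequence of commands per affected domain. The actual tech_blog_stop.sh is a shell script; the helper below is only an illustration of its flow, and the distribution-ID placeholder and function name are hypothetical.

```python
def hook_commands(transcript: str, domain: str, post_html: str) -> list[list[str]]:
    """Commands the Stop hook would run for one affected domain (illustrative)."""
    buckets = {
        "queenofsandiego.com": "tech-qos-blog",
        "sailjada.com": "tech-jada-blog",
        "dangerouscentaur.com": "tech-dc-blog",
        "burialsatseasandiego.com": "tech-bats-blog",
    }
    bucket = buckets[domain]
    return [
        # 1. Generate the post from the copied transcript.
        ["python3", "tech_blog_generator.py", transcript],
        # 2. Upload the generated HTML to the domain's bucket.
        ["aws", "s3", "cp", post_html, f"s3://{bucket}/posts/"],
        # 3. Invalidate the cached blog so the new post appears immediately.
        #    (<DIST_ID> stands in for the distribution ID looked up per domain.)
        ["aws", "cloudfront", "create-invalidation",
         "--distribution-id", "<DIST_ID>", "--paths", "/*"],
    ]
```

Each command's exit status would be appended to ~/.claude/logs/tech_blog_hook.log, matching the auditing step above.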
Navigation Integration
Updated /Users/cb/Documents/repos/sites/queenofsandiego.com/index.html to add "Tech Blog" links in the Ship's Papers dropdown menu. Each main domain now has a link to its tech blog visible from the primary navigation.
Technical Architecture Decisions
Why Separate S3 Buckets Per Domain
Each blog gets its own bucket rather than a shared bucket with prefixes. This enables:
- Independent access control policies per domain
- Separate CloudFront origins for better caching isolation
- Cleaner architectural boundaries matching the multi-domain ownership model
Why CloudFront Instead of Direct S3 Access
All S3 buckets are private with CloudFront as the only public access point. This provides:
- SSL/TLS encryption in transit with domain-specific certificates
- DDoS protection via AWS Shield
- Geographic edge caching for faster delivery
- A signed-request security model between CloudFront and the private origin buckets
DNS Provider Heterogeneity
The system handles three different DNS providers:
- Route53 (queenofsandiego.com, sailjada.com) — native AWS integration with ALIAS records for CloudFront
- Namecheap (dangerouscentaur.com) — CNAME record pointing to existing wildcard distribution
- GoDaddy (burialsatseasandiego.com) — CNAME with ACM certificate DNS validation challenge records
The initialization script detects and handles each provider via DNS nameserver lookup.
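The provider-detection step can be sketched as a classifier over a domain's NS records; each registrar's default nameservers follow a recognizable hostname pattern. The function name and the exact heuristic are illustrative, not the script's actual logic.

```python
def classify_dns_provider(nameservers: list[str]) -> str:
    """Guess the DNS provider from a domain's NS records (illustrative heuristic)."""
    joined = " ".join(ns.lower() for ns in nameservers)
    if "awsdns" in joined:                 # Route53 nameservers: ns-*.awsdns-*.com
        return "route53"
    if "registrar-servers.com" in joined:  # Namecheap BasicDNS hosts
        return "namecheap"
    if "domaincontrol.com" in joined:      # GoDaddy nameserver hosts
        return "godaddy"
    return "unknown"
```

The NS records themselves would come from a resolver query (e.g., dig NS <domain>) before this classification runs.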
Session Transcript Processing
The blog generator parses Claude's JSONL transcript format, which contains:
- tool_use entries showing AWS CLI commands, file operations, and infrastructure changes
- command_use entries for shell commands executed (e.g., bucket creation, invalidation)
- File system state: paths modified, created, or edited
- Tool input/output pairs containing resource names and configuration details
The extractor rebuilds a narrative from this data, mapping:
- File writes/edits → What changed
- Tool invocations → Infrastructure operations
- Command execution → AWS/DNS actions
- Structured output → Resource names and IDs
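The mapping above can be sketched as a grouping pass over parsed transcript entries. The entry shapes assumed here (a "type" field, tool names like Write and Edit) are illustrative simplifications of the real transcript schema.

```python
def rebuild_narrative(entries: list[dict]) -> dict[str, list[str]]:
    """Group transcript entries into the narrative sections described above."""
    sections: dict[str, list[str]] = {
        "What changed": [],
        "Infrastructure operations": [],
        "AWS/DNS actions": [],
    }
    for entry in entries:
        kind = entry.get("type")
        tool = entry.get("tool", "")
        if kind == "tool_use" and tool in ("Write", "Edit"):
            # File writes/edits → What changed
            sections["What changed"].append(entry.get("input", {}).get("file_path", "?"))
        elif kind == "tool_use":
            # Other tool invocations → Infrastructure operations
            sections["Infrastructure operations"].append(tool)
        elif kind == "command_use":
            # Shell commands → AWS/DNS actions
            sections["AWS/DNS actions"].append(entry.get("command", ""))
    return sections
```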
This granular approach captures the "why" and "how" behind each change, not just the final state.
Credentials Scrubbing
Before any post publishes, the generator:
- Strips strings matching regex patterns for AWS access keys, API keys, and tokens
- Removes full credential objects from tool outputs
- Converts sensitive identifiers to generic labels (e.g., "AWS_ACCOUNT_ID" instead of "123456789012")
- Preserves resource names (bucket names, distribution IDs, hosted zone IDs) as these are non-secret architectural documentation
- Logs scrubbing operations so we can verify sensitive data handling
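The scrubbing pass can be sketched as a table of pattern/label pairs applied in order. This is a minimal subset for illustration (AWS access key ID format and 12-digit account IDs are well-known shapes); the generator's real pattern list is presumably longer.

```python
import re

# Patterns for secrets that must never reach a published post (illustrative subset).
SECRET_PATTERNS = [
    # AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics.
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[AWS_ACCESS_KEY_ID]"),
    # Inline secret-key assignments in command output.
    (re.compile(r"(?i)aws_secret_access_key\s*[=:]\s*\S+"),
     "aws_secret_access_key=[REDACTED]"),
    # Bare 12-digit AWS account IDs → generic label, per the policy above.
    (re.compile(r"\b\d{12}\b"), "[AWS_ACCOUNT_ID]"),
]

def scrub(text: str) -> str:
    """Replace anything matching a secret pattern with a generic label."""
    for pattern, label in SECRET_PATTERNS:
        text = pattern.sub(label, text)
    return text
```

Note that resource names like bucket names and alphanumeric distribution IDs pass through untouched, which matches the preservation rule above.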