Building an Automated Technical Blog System Across Four Domain Properties
This session involved architecting and deploying a comprehensive technical documentation system that automatically captures development work across four independent domain properties: queenofsandiego.com, sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com. Each property now has its own tech blog subdomain that publishes granular, detailed posts about infrastructure changes, deployments, and technical work as it happens.
The Problem We Solved
The original requirement was straightforward but technically nuanced: create a real-time, auto-generated technical audit trail for all work done across multiple unrelated domain properties. Each blog needed to be:
- Automatically generated from Claude session transcripts
- Deployed to unique tech subdomains (tech.queenofsandiego.com, tech.sailjada.com, etc.)
- Accessible from each main site's navigation menu
- Granular enough to document infrastructure decisions, code changes, and system modifications
- Secure (no credentials, API keys, or sensitive data exposed)
The challenge was coordinating SSL certificates, DNS, S3 buckets, and CloudFront distributions across properties managed by different domain registrars and DNS providers.
Infrastructure Architecture
SSL Certificate Strategy
We leveraged existing AWS Certificate Manager wildcard certificates:
- `*.queenofsandiego.com`: existing wildcard cert, covers tech.queenofsandiego.com
- `*.sailjada.com`: existing wildcard cert, covers tech.sailjada.com
- dangerouscentaur.com: existing wildcard CloudFront distribution on the `dc-sites` S3 bucket (distribution ID: E2Q4UU71SRNTMB)
- burialsatseasandiego.com: new ACM wildcard cert provisioned with DNS validation via GoDaddy
This approach avoided redundant certificate management and leveraged existing infrastructure investments.
S3 and CloudFront Setup
For each property, we created:
- Dedicated S3 bucket: `qos-tech-blog`, `jada-tech-blog`, `dc-tech-blog`, or `bats-tech-blog`
- Corresponding CloudFront distribution with cache invalidation on publish
- Root index HTML pointing to static blog post directory
S3 buckets are configured for static website hosting with public read access to the blog posts directory. CloudFront distributions use the S3 website endpoint as origin, enabling efficient caching and serving.
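A minimal sketch of the bucket-policy step, assuming published posts live under a `posts/` prefix (the prefix and the exact statement the initializer writes are assumptions, not confirmed details):

```python
import json

def public_read_policy(bucket: str, prefix: str = "posts") -> str:
    """Build an S3 bucket policy allowing anonymous GET on the blog
    posts prefix only. Bucket name and prefix are illustrative."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadBlogPosts",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                # Scope public reads to the posts prefix, not the whole bucket
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}/*",
            }
        ],
    }
    return json.dumps(policy)
```

The returned JSON is what would be handed to `put_bucket_policy`; scoping the `Resource` to the prefix keeps any non-post objects in the bucket private.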
DNS Configuration Across Providers
Since these properties use different DNS providers, we configured DNS differently per provider:
- Route53-managed zones (queenofsandiego.com, sailjada.com): created CNAME `tech` records pointing to the CloudFront distribution domain names
- GoDaddy-managed zone (burialsatseasandiego.com): added a CNAME `tech` record for CloudFront, plus the ACM certificate validation CNAME record
- Namecheap-managed zone (dangerouscentaur.com): added a CNAME `tech` record pointing to the existing wildcard CloudFront distribution
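For the Route53-managed zones, the record change can be sketched as a plain ChangeBatch dict (the distribution domain below is a placeholder, and the TTL is an assumption):

```python
def cname_upsert(subdomain: str, zone_apex: str, cf_domain: str) -> dict:
    """Build a Route53 ChangeBatch that UPSERTs a CNAME for the tech
    subdomain pointing at a CloudFront distribution domain name."""
    return {
        "Comment": f"tech blog CNAME for {zone_apex}",
        "Changes": [
            {
                # UPSERT makes the call safe to repeat: create or overwrite
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": f"{subdomain}.{zone_apex}",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": cf_domain}],
                },
            }
        ],
    }
```

The same dict shape is what `change_resource_record_sets` expects; for GoDaddy and Namecheap the equivalent record is entered through their own APIs or dashboards instead.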
Blog Generator Pipeline
The automation pipeline consists of three key components:
1. Claude Code Stop Hook
File: /Users/cb/.claude/hooks/tech_blog_stop.sh
This executable bash script triggers when a Claude session ends. It:
- Reads the session transcript (available at `$CLAUDE_TRANSCRIPT_FILE`)
- Extracts file modifications, commands run, and tool invocations
- Invokes the blog generator with the transcript path and the determined site context
- Handles errors gracefully and logs to `/tmp/claude_hooks.log`
The hook is registered in Claude Code settings under the stop_hooks array, ensuring it runs automatically without user intervention.
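The hook itself is bash, but its core decision logic can be sketched in Python for clarity. Only the env var name and generator path come from the setup above; the `--transcript` flag and the quiet-failure behavior are illustrative:

```python
import os

GENERATOR = "/Users/cb/Documents/repos/tools/tech_blog_generator.py"

def build_generator_command(env: dict) -> "list[str] | None":
    """Return the command the stop hook would run, or None if no
    transcript is available. Mirrors the hook's graceful-failure
    behavior: missing input means skip, never interrupt the session."""
    transcript = env.get("CLAUDE_TRANSCRIPT_FILE")
    if not transcript or not os.path.exists(transcript):
        return None  # nothing to publish; the real hook logs and exits 0
    return ["python3", GENERATOR, "--transcript", transcript]
```

Keeping the command construction separate from execution is what makes the failure path cheap: the hook can bail before spawning anything.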
2. Tech Blog Generator Script
File: /Users/cb/Documents/repos/tools/tech_blog_generator.py
This Python script parses Claude session transcripts (JSONL format) and generates HTML blog posts. Key functionality:
- Transcript parsing: Reads JSONL transcript entries to extract tool use (file modifications), commands, and context
- Content extraction: Identifies modified files, created files, commands executed, and reasoning blocks
- HTML generation: Creates semantically structured HTML with syntax highlighting for code blocks
- Site routing: Determines which tech blog to publish to based on file paths and context
- Credential filtering: Strips API keys, passwords, and sensitive patterns before publishing
Generated posts include section headers for "Files Modified," "Infrastructure Changes," "Decisions Made," and "Technical Details."
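The credential-filtering step can be sketched as a small set of regex redaction rules. These three patterns are illustrative only; the generator's actual pattern list is not reproduced here:

```python
import re

# Illustrative patterns: AWS access key IDs, key=value secrets, PEM keys
SENSITIVE_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"(?i)(password|passwd|secret|token)\s*[:=]\s*\S+"),
     r"\1=[REDACTED]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED_PRIVATE_KEY]"),
]

def scrub(text: str) -> str:
    """Strip credential-shaped strings before a post is published."""
    for pattern, replacement in SENSITIVE_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Running every pattern over the final rendered HTML, rather than over individual transcript entries, is the safer design: it also catches secrets that get assembled across entries.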
3. Tech Blog Infrastructure Initializer
File: /Users/cb/Documents/repos/tools/tech_blog_init.py
This script provisions all AWS resources and configures DNS records. It:
- Creates S3 buckets with static website hosting enabled
- Provisions CloudFront distributions with appropriate cache settings
- Configures bucket policies for CloudFront access
- Creates Route53 DNS records (where applicable)
- Validates ACM certificates and manages DNS validation records
- Handles multi-provider DNS scenarios (Route53, GoDaddy, Namecheap)
The script is idempotent — running it multiple times against existing infrastructure is safe.
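Idempotency here mostly means "create only what is missing." A sketch of that planning step, with resources reduced to plain names (the real script checks live AWS state, e.g. via `head_bucket`):

```python
def plan_actions(desired: "set[str]", existing: "set[str]") -> "list[str]":
    """Return only the create actions still needed, so re-running the
    initializer against fully provisioned infrastructure plans nothing."""
    return [f"create-bucket {name}" for name in sorted(desired - existing)]
```

Because the plan for an already-provisioned property is empty, repeated runs are no-ops rather than errors.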
Navigation Integration
The tech blogs are now linked from each main site's "Ship's Papers" navigation menu. This provides Sergio and other stakeholders direct access to granular technical documentation without needing separate bookmarks or URLs.
Navigation links added to:
- `/Users/cb/Documents/repos/sites/queenofsandiego.com/index.html`: navigation element linking to tech.queenofsandiego.com
- Corresponding index.html files for the other three properties
Key Technical Decisions
Why Wildcard Certificates?
Wildcard certificates eliminate the need to request and validate individual certificates for each tech subdomain. This reduces operational overhead and simplifies renewal management.
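The coverage rule is a one-label match: `*.queenofsandiego.com` covers tech.queenofsandiego.com but not the apex or deeper subdomains. A small sketch of that check:

```python
def wildcard_covers(cert_name: str, hostname: str) -> bool:
    """Check whether a wildcard cert name like '*.example.com' covers a
    hostname. A wildcard matches exactly one leftmost label."""
    if not cert_name.startswith("*."):
        return cert_name == hostname
    head, sep, tail = hostname.partition(".")
    # Must have a leftmost label, and the remainder must equal the suffix
    return bool(sep) and head != "" and tail == cert_name[2:]
```

This is why the apex domain usually needs its own SAN on the certificate: `*.example.com` does not cover `example.com` itself.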
Why CloudFront + S3?
CloudFront provides global edge caching, reducing latency for blog readers worldwide. S3 serves as the origin, which is cost-effective for static content. Cache invalidation on publish (via CloudFront invalidation API) ensures readers see fresh posts immediately.
Why Multiple DNS Providers?
Rather than migrating all domains to Route53 (which would involve DNS cutover risk), we honored existing registrar relationships and configured the required records directly at each provider.