Building an Auto-Generated Technical Blog System Across Four Domain Properties
This session focused on creating a comprehensive technical documentation system that captures development work in real-time across four separate domain properties: queenofsandiego.com, dangerouscentaur.com, sailjada.com, and burialsatseasandiego.com. The goal is to provide granular, detailed technical visibility into all infrastructure and application changes as they happen—enabling stakeholders like Sergio to see exactly what's being done and why.
Architecture Overview
The system consists of four main components:
- Session Capture Hook: A Stop hook in Claude Code that triggers whenever a development session ends
- Blog Generator: Python script that parses Claude session transcripts and extracts technical work
- Infrastructure Init: Automated setup of S3, CloudFront, Route53, and ACM for each tech blog subdomain
- Navigation Integration: Links in Ship's Papers menus to make blogs discoverable from main sites
Infrastructure Setup for Four Tech Blog Subdomains
Each tech blog required identical infrastructure patterns but customized for its domain:
- tech.queenofsandiego.com: S3 bucket `qos-tech-blog`, CloudFront distribution with wildcard cert from existing `*.queenofsandiego.com` ACM certificate, Route53 hosted zone for queenofsandiego.com
- tech.sailjada.com: S3 bucket `jada-tech-blog`, CloudFront distribution using existing `*.sailjada.com` wildcard cert, Route53 hosted zone for sailjada.com
- tech.dangerouscentaur.com: S3 bucket `dc-sites` (shares existing wildcard CloudFront distribution E2Q4UU71SRNTMB), Namecheap DNS via CNAME record
- tech.burialsatseasandiego.com: S3 bucket `bats-tech-blog`, new CloudFront distribution with ACM cert validation via GoDaddy DNS API, GoDaddy nameserver control
The infrastructure init script (`/Users/cb/Documents/repos/tools/tech_blog_init.py`) automates all of this—creating S3 buckets with proper static website hosting config, CloudFront distributions with origin access control, and DNS records (either Route53 or external DNS providers via API).
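The bucket-creation step of that script can be sketched with boto3. This is a minimal sketch, not the actual script: the bucket name, region default, and error-document key are illustrative, and the real script also provisions CloudFront and DNS.

```python
def website_endpoint(bucket: str, region: str) -> str:
    # S3 static-website endpoints follow this pattern in commercial regions.
    return f"{bucket}.s3-website-{region}.amazonaws.com"

def init_blog_bucket(bucket: str, region: str = "us-west-2") -> str:
    """Create a bucket and enable static website hosting (one step of the
    init script; CloudFront and DNS setup happen separately)."""
    import boto3  # local import so the pure helper works without AWS deps

    s3 = boto3.client("s3", region_name=region)
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    # Serve index.html at the root and a custom error page for misses.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )
    return website_endpoint(bucket, region)
```

With origin access control in front, CloudFront reads from the bucket directly rather than the public website endpoint, but the endpoint is still handy for smoke tests.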
Session Transcript Processing Pipeline
The blog generator (`/Users/cb/Documents/repos/tools/tech_blog_generator.py`) reads Claude session transcripts in JSONL format and extracts technical work through several filters:
- File modification tracking: Parses all "Write" and "Edit" operations to identify which files changed and in which repositories
- Command execution logging: Captures shell commands run during the session (with credential filtering)
- Tool usage detection: Identifies AWS API calls, DNS operations, and infrastructure changes from tool use blocks in the transcript
- Context extraction: Uses Claude's reasoning blocks to understand why decisions were made
- Credential filtering: Strips out any passwords, API keys, tokens, or sensitive data before publishing
The generator produces HTML posts with consistent formatting, timestamps, and cross-references to relevant infrastructure or code repositories.
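The extraction and filtering steps above can be sketched as follows. The transcript event schema and the secret patterns are assumptions for illustration; the real generator's filter list and JSONL layout may differ.

```python
import json
import re

# Patterns for secrets to redact before anything is published (illustrative;
# the production filter list is broader).
SECRET_PATTERNS = [
    re.compile(r"(?i)(aws_secret_access_key|api[_-]?key|token|password)\s*[=:]\s*\S+"),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key IDs
]

def redact(text: str) -> str:
    """Strip credential-shaped strings from transcript text."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

def extract_file_edits(transcript_lines):
    """Collect (tool, file_path) pairs for Write/Edit tool calls
    from an iterable of JSONL transcript lines."""
    edits = []
    for line in transcript_lines:
        event = json.loads(line)
        for block in event.get("content", []):
            if block.get("type") == "tool_use" and block.get("name") in ("Write", "Edit"):
                edits.append((block["name"], block["input"].get("file_path", "")))
    return edits
```

Accepting an iterable of lines keeps the parser testable; the hook would call it as `extract_file_edits(open(transcript_path))`.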
Stop Hook Integration
The file `/Users/cb/.claude/hooks/tech_blog_stop.sh` is executed by Claude Code when any development session ends. This hook:
- Reads the current session transcript from Claude Code's session storage
- Determines which domain property the work applies to (using file path analysis and environment detection)
- Calls the blog generator to create a new HTML post
- Uploads the post to the appropriate S3 bucket (qos-tech-blog, jada-tech-blog, dc-sites, or bats-tech-blog)
- Invalidates the CloudFront distribution for that domain to ensure the blog index updates immediately
- Logs all operations to `/var/log/tech_blog_hook.log` for troubleshooting
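The upload and invalidation steps could look like this in boto3. The bucket mapping comes from the list above; the function itself is a hedged sketch, not the hook's actual implementation.

```python
# Mapping from domain property to its blog bucket (from the setup above).
BUCKETS = {
    "queenofsandiego.com": "qos-tech-blog",
    "sailjada.com": "jada-tech-blog",
    "dangerouscentaur.com": "dc-sites",
    "burialsatseasandiego.com": "bats-tech-blog",
}

def bucket_for(domain: str) -> str:
    return BUCKETS[domain]

def publish_post(domain: str, key: str, html: str, distribution_id: str) -> None:
    """Upload one post, then invalidate the index so it updates immediately."""
    import time
    import boto3  # local import so bucket_for stays usable without AWS deps

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket_for(domain),
        Key=key,
        Body=html.encode("utf-8"),
        ContentType="text/html",
    )
    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": 2, "Items": ["/index.html", "/" + key]},
            "CallerReference": str(time.time()),  # must be unique per request
        },
    )
```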
The hook is registered in `/Users/cb/.claude/settings.json` under the hooks configuration, ensuring it runs automatically without manual intervention.
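The registration might look like the fragment below; the exact hooks schema depends on the Claude Code version, so treat this as an illustrative shape rather than a verbatim copy of the settings file.

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "/Users/cb/.claude/hooks/tech_blog_stop.sh"
          }
        ]
      }
    ]
  }
}
```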
Navigation and Discoverability
Each main domain's Ship's Papers menu was updated to include a link to its tech blog:
- queenofsandiego.com/index.html: Added "Technical Blog" link in the Ship's Papers dropdown
- sailjada.com and dangerouscentaur.com: Similar updates to their navigation structures
- burialsatseasandiego.com: Navigation link added where applicable
This ensures that anyone visiting the main sites can easily discover the technical documentation without needing direct URLs.
Technical Decisions and Trade-offs
Why CloudFront instead of S3 direct hosting? CloudFront provides:
- HTTPS/TLS termination (required for professional appearance and SEO)
- Edge caching for fast global delivery, with invalidations on publish keeping the blog index current at every edge location
- Single ACM certificate per domain wildcard (cost-effective for multiple subdomains)
- Ability to add WAF rules or access controls in future without touching S3
Why separate S3 buckets? Buckets are scoped to the domain property they represent, making:
- Access control and backup policies independent per domain
- Quota and scaling management separate (useful if one blog gets very large)
- Migration or domain transfer simpler in the future
Route53 vs. external DNS providers: queenofsandiego.com and sailjada.com use Route53 (central management), while dangerouscentaur uses Namecheap and burialsatseasandiego uses GoDaddy (reflecting their existing registrar relationships). The init script handles all three patterns via conditional DNS API calls.
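The three DNS patterns can be sketched as a dispatcher that builds the record change each provider expects. This is a hedged sketch: the payload shapes are illustrative approximations of each provider's API (named in the comments), not the init script's actual code.

```python
def plan_cname(provider: str, subdomain: str, target: str) -> dict:
    """Describe the CNAME change each DNS provider needs to point a
    tech blog subdomain at its CloudFront distribution."""
    host = subdomain.split(".")[0]  # e.g. "tech" from "tech.sailjada.com"
    if provider == "route53":
        # Applied via boto3: route53.change_resource_record_sets (UPSERT batch)
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": subdomain,
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": target}],
            },
        }
    if provider == "godaddy":
        # Applied via HTTPS PUT to the GoDaddy domains/records API
        return {"type": "CNAME", "name": host, "data": target, "ttl": 600}
    if provider == "namecheap":
        # Applied via the Namecheap domains.dns.setHosts API call
        return {"HostName": host, "RecordType": "CNAME", "Address": target, "TTL": "300"}
    raise ValueError(f"unknown DNS provider: {provider}")
```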
Verification and Testing
The deployment was verified through:
- Dry runs of the infrastructure init script to catch configuration errors before creating resources
- HTTP/HTTPS access tests to all four tech blog subdomains after CloudFront propagation
- CloudFront distribution status checks to confirm certificate validation and origin access control
- DNS propagation verification via nslookup and dig commands
- Test post generation from this session's transcript to the tech.queenofsandiego.com blog
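The access checks above can be automated with a small stdlib-only probe. A minimal sketch, assuming the standard four subdomains; in practice the same checks were also run by hand with nslookup and dig.

```python
import socket
import urllib.request

TECH_BLOGS = [
    "tech.queenofsandiego.com",
    "tech.sailjada.com",
    "tech.dangerouscentaur.com",
    "tech.burialsatseasandiego.com",
]

def blog_url(host: str) -> str:
    return f"https://{host}/"

def verify(host: str) -> bool:
    """Confirm DNS has propagated and the blog index serves over HTTPS."""
    socket.gethostbyname(host)  # raises socket.gaierror if DNS is not live yet
    with urllib.request.urlopen(blog_url(host), timeout=10) as response:
        return response.status == 200
```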
What's Next
The system is now live. Going forward:
- Every development session automatically generates a blog post capturing granular technical details
- Blog posts are indexed chronologically; full-text searchability is planned for the next phase
- Infrastructure changes, code modifications, and operational decisions are all documented in real-time
- Stakeholders can monitor ongoing work without slowing down development
- Historical record exists for audit trails and compliance documentation
Additionally, the session identified two follow-up items.