Building an Auto-Generated Technical Blog Pipeline Across Four Domain Properties
This session established a system that automatically generates detailed technical blog posts documenting development work across four separate domain properties: queenofsandiego.com, sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com. The goal is transparency for stakeholders (like Sergio): capturing the granular technical details of infrastructure and code changes as they happen, with posts published immediately upon completion.
Architecture Overview
The system consists of three primary components:
- Claude Code Stop Hook — Executes at session end to capture the transcript
- Blog Generator — Parses session data and generates HTML posts with technical detail
- Infrastructure Init — Provisions S3 buckets, CloudFront distributions, and DNS records for each tech blog domain
This architecture ensures that every development session automatically produces a published post without manual intervention, maintaining a real-time audit trail of all changes.
Infrastructure Provisioning
For each of the four domains, we created separate S3 buckets and CloudFront distributions:
- tech-qos-blog S3 bucket → CloudFront distribution → tech.queenofsandiego.com (Route53)
- tech-jada-blog S3 bucket → CloudFront distribution → tech.sailjada.com (Route53)
- dc-sites S3 bucket (existing wildcard) → CloudFront E2Q4UU71SRNTMB → tech.dangerouscentaur.com (Namecheap CNAME)
- tech-bats-blog S3 bucket → CloudFront distribution → tech.burialsatseasandiego.com (GoDaddy DNS)
Wildcard ACM certificates were already in place for queenofsandiego.com and sailjada.com, allowing immediate HTTPS deployment. For burialsatseasandiego.com (managed at GoDaddy), we created a new ACM certificate requiring DNS validation via CNAME record.
Session Capture Mechanism
The Stop hook, stored at /Users/cb/.claude/hooks/tech_blog_stop.sh, runs automatically when a Claude Code session ends. It:
- Extracts the session transcript from ~/.claude/sessions/ in JSONL format
- Filters tool-use entries to capture file modifications, command executions, and API calls
- Passes this structured data to the blog generator
- Redacts any credentials or sensitive information from captured commands
The hook is registered in ~/.claude/settings.json under the sessionHooks configuration, ensuring it executes for every session without additional user action.
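The filtering step can be sketched roughly as follows. This is a minimal illustration, assuming one JSON object per transcript line with a `type` field and `name`/`input` payloads for tool calls; the actual Claude Code transcript schema may differ, and `extract_tool_uses` is a hypothetical helper name:

```python
import json

def extract_tool_uses(transcript_path):
    """Parse a JSONL session transcript, keeping only tool-use entries.

    Illustrative sketch: assumes each line is a JSON object with a
    "type" field, where "tool_use" entries carry the tool name and
    its input payload.
    """
    events = []
    with open(transcript_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines in the transcript
            entry = json.loads(line)
            if entry.get("type") == "tool_use":
                events.append({
                    "tool": entry.get("name"),
                    "input": entry.get("input", {}),
                })
    return events
```

The structured list returned here is what the Stop hook would hand to the blog generator.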
Blog Generator Implementation
The generator (/Users/cb/Documents/repos/tools/tech_blog_generator.py) analyzes session data to produce HTML posts with sections for:
- Files Modified/Created — Exact paths and operation type (write/edit)
- Infrastructure Changes — S3, CloudFront, Route53, ACM operations with resource names and IDs
- Commands Executed — Sanitized command history showing operational steps
- Technical Rationale — Why specific tools, services, or architectural patterns were chosen
- Integration Points — How changes connect to existing systems
Posts are generated as standalone HTML files with timestamps, uploaded to the appropriate S3 bucket, and immediately served via CloudFront. The blog index is automatically updated with new post links.
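The rendering step might look like the following sketch, where `render_post` is a hypothetical function taking a mapping of section headings (mirroring the Files Modified / Infrastructure Changes / Commands Executed layout above) to bullet items:

```python
from datetime import datetime, timezone
from html import escape

def render_post(title, sections):
    """Render a standalone HTML post from {section_heading: [items]}.

    Illustrative sketch: the real generator produces richer markup,
    but the shape -- timestamped page, one <h2>/<ul> per section --
    is the same.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
    body = [f"<h1>{escape(title)}</h1>", f"<p>Posted {stamp}</p>"]
    for heading, items in sections.items():
        body.append(f"<h2>{escape(heading)}</h2>")
        body.append("<ul>")
        body.extend(f"<li>{escape(item)}</li>" for item in items)
        body.append("</ul>")
    return "<!DOCTYPE html><html><body>" + "".join(body) + "</body></html>"
```

Escaping every value with `html.escape` matters here because file paths and command strings from the transcript are untrusted markup.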
Navigation Integration
The main site navigation (index.html for queenofsandiego.com) was updated to include a "Technical Blog" link under the Ship's Papers menu, making the tech documentation discoverable alongside operational information. This pattern is replicated across all four properties.
Domain-Specific Configuration
Each domain required different DNS handling:
- queenofsandiego.com & sailjada.com — Route53 A records pointing to CloudFront distributions (via AWS account management)
- dangerouscentaur.com — CNAME record at Namecheap DNS pointing to existing CloudFront distribution
- burialsatseasandiego.com — GoDaddy DNS with ACM validation CNAME and new CloudFront distribution A record
Configuration is stored in /Users/cb/.claude/projects/-Users-cb-Documents-repos/memory/project_tech_blogs.md, mapping each domain to its S3 bucket, CloudFront distribution ID, and DNS provider.
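An in-code equivalent of that mapping might look like the sketch below. The bucket names and the dangerouscentaur.com distribution ID come from the provisioning notes above; the structure and the `target_bucket` helper are hypothetical:

```python
# Per-domain publishing targets, mirroring project_tech_blogs.md.
DOMAIN_CONFIG = {
    "tech.queenofsandiego.com": {"bucket": "tech-qos-blog", "dns": "route53"},
    "tech.sailjada.com": {"bucket": "tech-jada-blog", "dns": "route53"},
    "tech.dangerouscentaur.com": {
        "bucket": "dc-sites",
        "dns": "namecheap",
        "distribution": "E2Q4UU71SRNTMB",  # existing wildcard distribution
    },
    "tech.burialsatseasandiego.com": {"bucket": "tech-bats-blog", "dns": "godaddy"},
}

def target_bucket(domain):
    """Look up the S3 bucket a generated post should be uploaded to."""
    return DOMAIN_CONFIG[domain]["bucket"]
```

Keeping this mapping in one place means the generator never hard-codes a bucket or DNS provider per call site.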
Granularity and Transparency
Unlike high-level summaries, these posts capture:
- Specific file paths modified (e.g., /Users/cb/Documents/repos/sites/queenofsandiego.com/index.html)
- Exact infrastructure resource IDs (CloudFront distribution IDs, S3 bucket names)
- The sequence of operations, not just the final state
- Decision points and trade-offs (why Route53 vs. Namecheap, why separate buckets vs. consolidated storage)
This enables stakeholders to audit changes in detail, understand reasoning, and trace any issues back to their originating decisions.
Security and Redaction
The system automatically redacts:
- API credentials and tokens from command outputs
- Password and secret values from configuration files
- Personal information not relevant to technical operations
- Full AWS access key IDs and GoDaddy/Namecheap credentials
This ensures transparency with technical stakeholders while protecting sensitive operational data.
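A minimal sketch of the redaction pass, using two illustrative patterns (AWS access key IDs and `key=value` secrets); the real pattern list covers more credential shapes:

```python
import re

# Illustrative redaction rules, applied in order to captured text.
REDACTIONS = [
    # AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED_AWS_KEY]"),
    # token=..., secret: ..., password=... style assignments
    (re.compile(r"(?i)(token|secret|password)\s*[=:]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(text):
    """Replace credential-shaped substrings before a post is published."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text
```

Running redaction on the captured data before HTML generation, rather than after, keeps secrets out of every downstream artifact (post, index, S3 object).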
What's Next
The blog pipeline is now live. Future sessions will automatically generate posts as work is completed. Additional planned tasks include:
- Correcting image references on burialsatseasandiego.sailjada.com (imaginary and small catamaran images need to be replaced)
- Conducting a comprehensive Google Analytics audit across all properties to identify traffic patterns and conversion bottlenecks
- Analyzing booking funnel metrics to provide recommendations for increasing conversions
- Establishing baseline analytics dashboards accessible to stakeholders
Each of these tasks will automatically generate its own detailed technical post, creating a comprehensive record of the site's evolution and optimization efforts.