Building an Automated Technical Blog System Across Four Domain Properties
This session involved architecting and implementing a comprehensive system to automatically generate granular technical blog posts for four separate properties: queenofsandiego.com, dangerouscentaur.com, sailjada.com, and burialsatseasandiego.com. The goal was to create real-time visibility into development work without manual intervention, enabling stakeholders to see detailed technical progress as it happens.
What Was Done
- Created automated blog generation infrastructure across four tech subdomains
- Implemented Claude Code integration hooks to capture session data and generate posts
- Set up S3 buckets, CloudFront distributions, and DNS records for each tech blog
- Added navigation links from Ship's Papers menu to tech blog properties
- Built email template validation and unsubscribe monitoring tools
- Identified and tracked image asset issues on burialsatseasandiego.sailjada.com
Technical Architecture
Blog Generation Pipeline
The system works through a two-stage hook mechanism in Claude Code, built from three scripts:
- tech_blog_init.py — Initializes infrastructure on first run, creating S3 buckets with the proper structure and CloudFront distributions
- tech_blog_generator.py — Processes session transcripts (JSONL format from Claude's project sessions), extracts granular work items, and generates HTML blog posts
- tech_blog_stop.sh — Executes at session end, calls the generator, uploads posts to S3, and invalidates CloudFront caches
The stop hook is registered in /Users/cb/.claude/settings.json under the hooks.stop configuration, ensuring it runs automatically after each development session completes.
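The registration might look roughly like the sketch below. Only the hooks.stop key is confirmed by this post; the exact schema and the hook script's path are assumptions.

```python
import json

# Illustrative shape of the stop-hook registration in settings.json.
# The entry format and the script path are assumptions, not copied
# from the real file.
settings = {
    "hooks": {
        "stop": [
            {"command": "/Users/cb/.claude/hooks/tech_blog_stop.sh"}
        ]
    }
}
print(json.dumps(settings, indent=2))
```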
Session Data Capture
Claude Code stores session transcripts in JSONL format at /Users/cb/.claude/projects/-Users-cb-Documents-repos/. Each line is a JSON object representing a tool use or message. The generator parses these to extract:
- Files modified and created with exact paths
- Commands executed (filtered to remove sensitive operations)
- Git commits and infrastructure changes
- Tool invocations with parameters (credentials redacted)
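A minimal parsing sketch of this extraction step. The event field names (tool_name, parameters) and tool labels are illustrative assumptions; the real transcript schema may differ.

```python
import json

def extract_work_items(transcript_path):
    """Parse a JSONL session transcript and collect file edits and commands.

    Each line is assumed to be a JSON object with hypothetical
    "tool_name" and "parameters" fields.
    """
    items = []
    with open(transcript_path) as f:
        for line in f:
            if not line.strip():
                continue  # skip blank lines between events
            event = json.loads(line)
            tool = event.get("tool_name")
            params = event.get("parameters", {})
            if tool in ("Edit", "Write"):
                items.append(("file", params.get("file_path")))
            elif tool == "Bash":
                items.append(("command", params.get("command")))
    return items
```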
The transcript parsing filters out sensitive data patterns (credentials, API keys, tokens) using regex patterns to ensure no secrets leak into published posts.
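The redaction pass can be sketched as a list of regexes applied before publishing. These particular patterns are illustrative, not the generator's actual rules.

```python
import re

# Hypothetical patterns; the generator's real regex list is not shown here.
SENSITIVE_PATTERNS = [
    re.compile(r'(?i)(api[_-]?key|token|secret|password)\s*[=:]\s*\S+'),
    re.compile(r'AKIA[0-9A-Z]{16}'),          # AWS access key ID format
    re.compile(r'Bearer\s+[A-Za-z0-9._-]+'),  # Authorization headers
]

def redact(text: str) -> str:
    """Replace anything matching a sensitive pattern with a placeholder."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub('[REDACTED]', text)
    return text

print(redact('curl -H "Authorization: Bearer abc.def" --url https://example.com'))
```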
Infrastructure Setup
Domain-Specific Configurations
queenofsandiego.com and sailjada.com: Both had existing wildcard ACM certificates (*.queenofsandiego.com and *.sailjada.com). The system created:
- S3 buckets: tech-blog-queenofsandiego and tech-blog-sailjada
- CloudFront distributions with cache invalidation on post upload
- Route53 A-records aliasing tech.[domain] to CloudFront distributions
- Bucket policies allowing CloudFront origin access identity read permissions
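The Route53 alias records above can be sketched as a change batch in the shape boto3's change_resource_record_sets call expects. The hosted-zone ID shown is the fixed value AWS documents for all CloudFront alias targets; the function name and example domains are illustrative.

```python
# Fixed hosted zone ID AWS assigns to every CloudFront alias target.
CLOUDFRONT_HOSTED_ZONE_ID = "Z2FDTNDATAQYW2"

def alias_change_batch(subdomain: str, distribution_domain: str) -> dict:
    """Build a Route53 change batch aliasing tech.[domain] to CloudFront."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": subdomain,
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": CLOUDFRONT_HOSTED_ZONE_ID,
                    "DNSName": distribution_domain,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }
```

In practice this dict would be passed as ChangeBatch to route53.change_resource_record_sets along with the zone ID for the parent domain.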
dangerouscentaur.com: Uses Namecheap DNS with a wildcard certificate already in place. The system leveraged the existing dc-sites S3 bucket (origin for distribution E2Q4UU71SRNTMB) and added a CNAME record via Namecheap API to point tech.dangerouscentaur.com to the CloudFront distribution.
burialsatseasandiego.com: Hosted at GoDaddy DNS. Required ACM certificate validation via DNS CNAME record. The system:
- Created new S3 bucket and CloudFront distribution
- Validated ACM certificate by adding GoDaddy DNS CNAME records
- Set up a CNAME record routing tech.burialsatseasandiego.com to CloudFront
Bucket Structure
Each tech blog S3 bucket follows this structure:
s3://tech-blog-[domain]/
├── index.html # Landing page with post listing
├── posts/
│ ├── 2024-12-19-automated-blog-system.html
│ ├── 2024-12-19-email-template-validation.html
│ └── [date]-[slug].html
└── assets/
├── css/
├── js/
└── images/
CloudFront caching policies are set to a 60-second TTL for HTML (allowing rapid updates) and longer TTLs for static assets.
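One simple way to implement that policy at upload time is a per-key Cache-Control header. A minimal sketch, assuming the 60-second HTML TTL from above and an illustrative one-day TTL for assets:

```python
def cache_control_for(key: str) -> str:
    """Choose a Cache-Control header by object type.

    HTML gets a short TTL so new posts appear quickly; static assets
    get a longer one. The asset TTL here is an assumed example value.
    """
    if key.endswith(".html"):
        return "max-age=60"
    return "max-age=86400"  # 1 day for css/js/images
```

The header would be passed as CacheControl when uploading each object to S3, so CloudFront honors it without per-path cache behaviors.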
Integration with Existing Sites
The Ship's Papers menu was updated in each primary site's index.html to include navigation links to the respective tech blogs:
- queenofsandiego.com/index.html → links to tech.queenofsandiego.com
- sailjada.com/index.html → links to tech.sailjada.com
- dangerouscentaur.com/index.html → links to tech.dangerouscentaur.com
The navigation uses consistent styling and is positioned prominently in the dropdown menu, making it visible to stakeholders reviewing project progress.
Additional Tools and Improvements
During this session, several supporting tools were also created or enhanced:
- email_template_validator.py — Validates HTML email templates for consistency, checking salutations, CTA buttons, and unsubscribe links
- jada_unsubscribe_monitor.py — Monitors unsubscribe requests and logs them for compliance tracking
- jada_blast.py — Enhanced with improved error handling and template validation before sending
These were created to support the email broadcast system used across the properties, ensuring compliance and quality.
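The template checks can be sketched as a set of required regex matches. The check names and patterns below are illustrative of what email_template_validator.py might enforce, not its actual rules.

```python
import re

# Hypothetical required features for every outgoing template.
REQUIRED_CHECKS = {
    "salutation": re.compile(r"(?i)\b(dear|hello|hi)\b"),
    "cta_button": re.compile(r"(?i)<a[^>]*class=[\"'][^\"']*(btn|button|cta)"),
    "unsubscribe": re.compile(r"(?i)unsubscribe"),
}

def validate_template(html: str) -> list[str]:
    """Return the names of checks the template fails (empty list = valid)."""
    return [name for name, pat in REQUIRED_CHECKS.items() if not pat.search(html)]
```

A template failing any check would be rejected before jada_blast.py sends it, which is the compliance gate described above.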
Content and Image Asset Issues
Investigation revealed that burialsatseasandiego.sailjada.com has incorrect images for two vessels:
- /images/imagine.jpg — Currently shows an incorrect vessel; should show the Imagine catamaran
- /images/small_catamaran.jpg — Currently shows the wrong catamaran; needs replacement with the correct vessel image
A progress board card ("Fleet Image Fixes") was created to track replacement of these assets. The correct images need to be sourced and uploaded, with CloudFront invalidations issued after replacement.
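The post-replacement invalidation can be sketched as the batch dict that CloudFront's create_invalidation call expects. The function itself is an illustrative sketch, not the system's actual code.

```python
import time

def invalidation_batch(paths: list[str]) -> dict:
    """Build an InvalidationBatch for the replaced image paths.

    CallerReference must be unique per request; a timestamp is one
    common (assumed) choice.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(int(time.time() * 1000)),
    }
```

This dict would be passed as InvalidationBatch to cloudfront.create_invalidation along with the distribution ID, so viewers see the corrected images without waiting for the cache TTL to expire.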
Key Architectural Decisions
Why Separate Tech Blogs: Each property maintains its own tech blog so stakeholders (like Sergio) can focus on work relevant to their specific domain without filtering through unrelated infrastructure changes.
Why Granular Posts: Rather than weekly summaries, the system generates a new post for each development session. This provides complete visibility into development velocity and specific decisions made, enabling detailed code reviews and architectural discussions.
Why Automatic Generation: Manual blog writing would create bottlenecks. By hooking into the session end event, posts are generated immediately without developer friction. The generator handles redaction of sensitive data automatically.
Why