Building an Automated Technical Blog System Across Four Domain Properties

This session focused on creating a comprehensive technical documentation system that auto-generates granular technical blog posts across four separate properties: queenofsandiego.com, sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com. The goal was to provide complete transparency into development work by capturing session data and automatically publishing detailed technical posts to tech subdomains.

System Architecture Overview

The solution consists of three primary components:

  • Blog Generator (/Users/cb/Documents/repos/tools/tech_blog_generator.py) — Parses Claude Code session transcripts in JSONL format and converts them into structured HTML blog posts
  • Infrastructure Initializer (/Users/cb/Documents/repos/tools/tech_blog_init.py) — Provisions AWS S3 buckets, CloudFront distributions, and DNS records for each tech blog subdomain
  • Stop Hook (/Users/cb/.claude/hooks/tech_blog_stop.sh) — Executes automatically when Claude Code sessions end, triggering blog generation and deployment

Infrastructure Provisioning

Each property required independent AWS infrastructure:

  • tech.queenofsandiego.com — S3 bucket with wildcard ACM certificate (*.queenofsandiego.com), CloudFront distribution, Route53 alias record in the queenofsandiego.com hosted zone
  • tech.sailjada.com — S3 bucket leveraging existing *.sailjada.com wildcard certificate, CloudFront distribution, Route53 alias in the sailjada.com hosted zone
  • tech.dangerouscentaur.com — S3 bucket (dc-sites) with Namecheap DNS CNAME record, reusing the existing dangerouscentaur wildcard CloudFront distribution (ID: E2Q4UU71SRNTMB)
  • tech.burialsatseasandiego.com — S3 bucket with new ACM certificate validation through GoDaddy API, CloudFront distribution, GoDaddy DNS CNAME record

The infrastructure script handles certificate validation asynchronously. For burialsatseasandiego.com, which uses GoDaddy nameservers, the system integrated with the GoDaddy API to programmatically add ACM DNS validation CNAME records, avoiding manual DNS operations.
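The GoDaddy integration can be sketched roughly as follows. This is a minimal illustration, not the actual initializer code: the helper names are invented here, and it assumes GoDaddy's v1 `domains/{domain}/records/{type}/{name}` endpoint with `sso-key` authorization, with the record name and validation target coming from ACM's DNS-validation output.

```python
import json
import urllib.request

GODADDY_API = "https://api.godaddy.com/v1"


def godaddy_record_url(domain: str, record_type: str, name: str) -> str:
    """Endpoint that replaces all records of one type/name on a domain."""
    return f"{GODADDY_API}/domains/{domain}/records/{record_type}/{name}"


def upsert_acm_validation_cname(domain: str, name: str, value: str,
                                api_key: str, api_secret: str) -> None:
    """Create or replace the ACM DNS-validation CNAME via the GoDaddy API.

    `name` is the record host relative to the domain (the `_...` label
    ACM hands back); `value` is the validation target CNAME points at.
    """
    body = json.dumps([{"data": value, "ttl": 600}]).encode()
    req = urllib.request.Request(
        godaddy_record_url(domain, "CNAME", name),
        data=body,
        method="PUT",
        headers={
            "Authorization": f"sso-key {api_key}:{api_secret}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)  # raises HTTPError on non-2xx responses
```

Once the record propagates, ACM observes the CNAME and moves the certificate to Issued without any manual DNS work.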

Blog Generation Pipeline

The blog generator processes Claude Code session transcripts following this flow:

  1. Session Capture — Claude Code automatically saves session transcripts to ~/.claude/sessions/ in JSONL format, with each line representing a tool invocation or user interaction
  2. Transcript Parsing — The generator reads the JSONL file and extracts tool use entries, filtering out sensitive data (credentials, API keys, passwords, PII)
  3. Context Extraction — For each tool invocation, the generator captures:
    • Tool name and execution timestamp
    • Input parameters (sanitized)
    • Output results (where non-sensitive)
    • File modifications and creations
    • Commands executed with arguments
  4. HTML Rendering — Structured JSONL data is transformed into semantically organized HTML with appropriate heading hierarchy, code blocks, and narrative context
  5. Property Routing — Posts are routed to the correct tech subdomain based on which site's files were modified (detected by file path patterns)
  6. S3 Deployment — Generated HTML is uploaded to the appropriate S3 bucket with proper metadata and cache control headers
  7. CDN Invalidation — CloudFront cache is invalidated to ensure users see fresh content immediately

Stop Hook Integration

The stop hook script is invoked automatically when a Claude Code session terminates. The hook:

  • Waits for the session transcript to be fully written to disk
  • Invokes the blog generator with the session file path
  • Logs all operations to ~/.claude/hooks/logs/tech_blog_stop.log for auditability
  • Handles errors gracefully, so a failed blog build never blocks session termination

The hook is registered in Claude Code settings at /Users/cb/.claude/settings.json under the onSessionStop configuration.
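An illustrative shape for that registration, assuming the onSessionStop key described above maps directly to the hook script (the exact settings schema may differ):

```json
{
  "onSessionStop": {
    "command": "/Users/cb/.claude/hooks/tech_blog_stop.sh"
  }
}
```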

Navigation Integration

Each property's primary site was updated to include tech blog links in the Ship's Papers menu:

  • queenofsandiego.com/index.html — Added "Development Blog" link pointing to tech.queenofsandiego.com
  • Similar updates for sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com

This placement ensures that stakeholders like Sergio can easily access technical documentation to understand ongoing development activities in detail.

Data Sanitization Strategy

The generator implements strict redaction rules:

  • Environment variables containing password, key, secret, token, or credential in their names are filtered from output
  • AWS credentials and access keys are detected via pattern matching and redacted
  • Personal information (phone numbers, email addresses, addresses) is excluded from public blog posts
  • Database connection strings and API endpoints containing credentials are sanitized
  • File paths containing sensitive directories are modified to show structure without revealing sensitive locations
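A minimal sketch of the redaction pass is shown below. The patterns are illustrative examples of the rule categories above, not the generator's full rule set:

```python
import re

# Illustrative redaction patterns, one per rule category above.
REDACTION_PATTERNS = [
    # AWS access key IDs (AKIA-prefixed, 20 characters total)
    re.compile(r"AKIA[0-9A-Z]{16}"),
    # NAME=value pairs whose name suggests a secret
    re.compile(r"(?i)\b\w*(password|secret|token|credential|api_key)\w*\s*=\s*\S+"),
    # Email addresses
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
]


def sanitize(text: str) -> str:
    """Replace anything matching a redaction pattern with a placeholder."""
    for pattern in REDACTION_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Running every tool input and output through this pass before rendering means a leaked value would have to evade every pattern to reach a public post.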

Key Decisions and Rationale

Wildcard Certificate Reuse: For queenofsandiego.com and sailjada.com, which already had wildcard ACM certificates in place, the system leveraged existing certificates rather than provisioning new ones. This reduced provisioning time and certificate quota consumption.

CloudFront-First Architecture: All tech blogs sit behind CloudFront distributions. This provides performance benefits (edge caching), security (DDoS protection), and the ability to serve from S3 origins without making buckets publicly accessible. Bucket policies restrict access to CloudFront origin access identities only.
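Deploy and invalidation (pipeline steps 6-7) against this architecture might look like the sketch below, using boto3's `put_object` and `create_invalidation`. The function names and the five-minute edge TTL are assumptions, not the deployed code:

```python
import time


def invalidation_paths(keys: list[str]) -> dict:
    """Build the Paths element of a CloudFront InvalidationBatch."""
    return {"Quantity": len(keys), "Items": ["/" + k.lstrip("/") for k in keys]}


def deploy_post(bucket: str, key: str, html: str, distribution_id: str) -> None:
    """Upload one post to its private S3 origin, then invalidate it at the edge."""
    import boto3  # deferred so the module imports without the dependency installed

    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=html.encode("utf-8"),
        ContentType="text/html; charset=utf-8",
        CacheControl="public, max-age=300",  # short TTL; invalidation covers updates
    )
    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": invalidation_paths([key]),
            "CallerReference": str(time.time()),  # must be unique per request
        },
    )
```

Because the buckets are reachable only through the distributions' origin access identities, this upload path is the sole write surface and the CDN the sole read surface.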

DNS Provider Flexibility: The system supports mixed DNS providers — Route53 for queenofsandiego.com and sailjada.com, Namecheap for dangerouscentaur.com, and GoDaddy for burialsatseasandiego.com. The infrastructure script detects provider via existing nameserver queries and configures DNS accordingly.

Per-Session Blog Posts: Rather than a single rolling blog, each Claude Code session generates its own post. This provides granular traceability and allows Sergio to audit specific work periods. Posts are indexed by date and session ID.
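The date-plus-session-ID indexing implies a key scheme along these lines; the exact layout shown is a hypothetical example, not the deployed one:

```python
from datetime import datetime, timezone


def post_key(session_id: str, when: datetime) -> str:
    """S3 key for one per-session post, indexed by date then session ID."""
    return f"posts/{when:%Y/%m/%d}/{session_id}.html"
```

Sorting keys lexicographically then yields chronological order for free, which makes per-period audits a simple prefix listing.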

Testing and Validation

The system was validated with:

  • Dry-run infrastructure provisioning to verify script logic without AWS API calls
  • End-to-end test using the current session transcript, verifying blog generation for all four properties
  • HTTP access tests to all four tech blog domains confirming CloudFront and DNS propagation