Building a Granular Technical Blog System for Four Sailing & Event Brands

What Was Done

Created an automated technical blogging infrastructure that captures development work across four distinct domains—queenofsandiego.com, sailjada.com, dangerouscentaur.com, and burialsatseasandiego.com—and publishes granular posts to their respective tech subdomains in real time. The system was designed to give stakeholders like Sergio complete visibility into engineering decisions and implementation details.

The Problem

Without a centralized record of technical decisions, infrastructure changes, and development work, stakeholders couldn't see the depth of engineering effort being invested. High-level summaries hide the complexity of multi-domain management, DNS configuration, CDN optimization, and deployment orchestration. The solution needed to be:

  • Automatic—no manual blog post creation
  • Granular—exact file paths, function names, resource IDs, not abstractions
  • Real-time—published immediately after session completion
  • Integrated—accessible from the Ship's Papers navigation menu
  • Secure—no credentials or sensitive data exposed

Technical Architecture

Core Components

The system consists of three primary pieces:

  • tech_blog_generator.py — Parses Claude session transcripts (JSONL format), extracts tool usage and file modifications, filters sensitive data, and generates HTML blog posts
  • tech_blog_init.py — Infrastructure-as-code provisioning: creates S3 buckets, CloudFront distributions, ACM certificates, and DNS records for each tech blog
  • tech_blog_stop.sh — Claude Code Stop hook that runs automatically after each session, invokes the generator, and publishes posts to the appropriate domain
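The Stop hook is wired up through Claude Code's settings file. As a sketch (the exact schema should be checked against the current Claude Code hooks documentation, and the script path here is illustrative):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/tech_blog_stop.sh"
          }
        ]
      }
    ]
  }
}
```

With this in place, Claude Code runs tech_blog_stop.sh automatically whenever a session ends, with no manual step.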

Session Capture Mechanism

Claude Code stores session transcripts in JSON Lines format at:

~/.claude/projects/[PROJECT_ID]/sessions/[SESSION_ID].jsonl

Each line contains a tool use event with metadata:

  • tool_use_id — Unique identifier for the tool invocation
  • tool_name — The tool executed (e.g., "write", "edit", "bash_execute")
  • input — Arguments, file paths, or commands
  • output — Return values or command results
  • timestamp — When the action occurred

The generator reads the transcript, extracts file modifications and command executions, and reconstructs what was accomplished.
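A minimal sketch of that parsing step, assuming the field layout listed above (the real transcript schema, and the generator's actual function names, may differ):

```python
import json
from pathlib import Path


def parse_transcript(path):
    """Read a JSONL session transcript and collect tool-use events.

    Each non-empty line is one JSON record; records without a
    tool_name (e.g. plain user/assistant messages) are skipped.
    """
    events = []
    for line in Path(path).read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        if "tool_name" not in record:
            continue
        events.append({
            "tool": record["tool_name"],
            "input": record.get("input", {}),
            "timestamp": record.get("timestamp"),
        })
    return events


def modified_files(events):
    """Extract file paths touched by write/edit tool calls."""
    paths = []
    for ev in events:
        if ev["tool"] in ("write", "edit"):
            p = ev["input"].get("file_path")
            if p:
                paths.append(p)
    return paths
```

Grouping those paths by which domain's repository they fall under is then a simple prefix match.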

Infrastructure Created

For queenofsandiego.com

  • S3 bucket: qos-tech-blog (us-east-1) — stores HTML blog posts
  • CloudFront distribution: D3XXXXXXXXXXXX — accelerates content globally, caches HTML with 5-minute TTL
  • ACM certificate: *.queenofsandiego.com wildcard (existing) — reused for tech.queenofsandiego.com
  • Route53 hosted zone: Z1XXXXXXXXXXXX — added CNAME pointing tech.queenofsandiego.com to CloudFront distribution
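The provisioning step in tech_blog_init.py can be pictured as building a distribution config like the one below and handing it to boto3's `create_distribution`. This is a sketch only: the field names follow the boto3 `DistributionConfig` shape, but the real script likely sets additional options (origin access, logging, price class):

```python
def tech_blog_distribution_config(domain, bucket, cert_arn):
    """Build an illustrative CloudFront DistributionConfig for tech.<domain>."""
    subdomain = f"tech.{domain}"
    origin_id = f"S3-{bucket}"
    return {
        "CallerReference": subdomain,
        "Aliases": {"Quantity": 1, "Items": [subdomain]},
        "DefaultRootObject": "index.html",
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": origin_id,
                "DomainName": f"{bucket}.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "redirect-to-https",
            # 5-minute TTL, matching the caching policy described above
            "MinTTL": 0,
            "DefaultTTL": 300,
            "MaxTTL": 300,
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
        },
        "ViewerCertificate": {
            "ACMCertificateArn": cert_arn,
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2021",
        },
        "Comment": f"Tech blog for {domain}",
        "Enabled": True,
    }
```

The same function serves all four brands; only the domain, bucket name, and certificate ARN change per call.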

For sailjada.com

  • S3 bucket: jada-tech-blog (us-east-1)
  • CloudFront distribution: D3YYYYYYYYYYYY
  • ACM certificate: *.sailjada.com wildcard (existing)
  • Route53 hosted zone: Z2XXXXXXXXXXXX — CNAME for tech.sailjada.com

For dangerouscentaur.com

  • S3 bucket: dc-sites (us-east-1) — reuses the existing wildcard CloudFront distribution E2Q4UU71SRNTMB
  • Namecheap DNS: Added CNAME tech.dangerouscentaur.com pointing to CloudFront distribution (dangerouscentaur uses Namecheap, not Route53)

For burialsatseasandiego.com

  • S3 bucket: bats-tech-blog (us-east-1)
  • CloudFront distribution: D3ZZZZZZZZZZZZ
  • ACM certificate: New wildcard created for *.burialsatseasandiego.com
  • GoDaddy DNS: Added a CNAME for tech.burialsatseasandiego.com, plus the CNAME required for ACM DNS validation

Key Design Decisions

1. Wildcard Certificates Over Individual Certs

Rather than creating separate ACM certificates for each tech subdomain, we leveraged existing wildcard certs (*.queenofsandiego.com, *.sailjada.com) and created new ones only where needed. This reduces certificate management overhead and supports future subdomains.

2. Multi-DNS-Provider Strategy

Each brand uses its preferred DNS provider: Route53 for queenofsandiego.com and sailjada.com, Namecheap for dangerouscentaur.com, GoDaddy for burialsatseasandiego.com. Rather than consolidating DNS (which would involve domain transfers), we wrote provider-agnostic Python code that detects the provider via nameserver lookup and applies DNS changes accordingly.
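The detection step reduces to pattern-matching the domain's NS records against each provider's well-known nameserver hostnames. A sketch of that table (the real code would first resolve the NS records, e.g. with dnspython, and the function name here is hypothetical):

```python
def detect_dns_provider(nameservers):
    """Guess the DNS provider from a domain's NS record hostnames.

    Matches on well-known substrings:
      Route53   -> ns-123.awsdns-45.org
      Namecheap -> dns1.registrar-servers.com
      GoDaddy   -> ns01.domaincontrol.com
    """
    signatures = {
        "route53": "awsdns",
        "namecheap": "registrar-servers",
        "godaddy": "domaincontrol",
    }
    for ns in nameservers:
        for provider, marker in signatures.items():
            if marker in ns.lower():
                return provider
    return "unknown"
```

Once the provider is known, the code dispatches to the matching API client (Route53 via boto3, Namecheap and GoDaddy via their respective HTTP APIs) to upsert the CNAME.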

3. Session Transcript Parsing Over Webhook Events

A webhook-based approach would require maintaining separate infrastructure. Instead, the system reads the local session transcript file after Claude Code stops—simpler, more reliable, and works offline. The Stop hook fires automatically; no external service dependencies.

4. Sensitive Data Filtering in the Generator

The generator applies regex-based filters to strip:

  • AWS access keys and secret keys
  • Database passwords and connection strings
  • API keys and tokens
  • Email addresses (when appearing as credentials)
  • File paths containing sensitive directories (e.g., .aws/credentials)

The output HTML is safe to share publicly while retaining all technical detail.
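A minimal sketch of that filter, with illustrative patterns only (the generator's actual pattern list is certainly longer and tuned to its output):

```python
import re

# Each entry: (compiled pattern, replacement). Applied in order, so
# later patterns see earlier replacements.
REDACTION_PATTERNS = [
    # AWS access key IDs (AKIA + 16 uppercase alphanumerics)
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED_AWS_KEY]"),
    # Secret keys in config-style assignments
    (re.compile(r"(?i)aws_secret_access_key\s*=\s*\S+"),
     "aws_secret_access_key=[REDACTED]"),
    # Passwords in key=value or key: value form
    (re.compile(r"(?i)(password|passwd|pwd)\s*[=:]\s*\S+"),
     r"\1=[REDACTED]"),
    # Generic API keys and tokens
    (re.compile(r"(?i)\b(api[_-]?key|token)\s*[=:]\s*\S+"),
     r"\1=[REDACTED]"),
    # Paths into sensitive directories
    (re.compile(r"\S*\.aws/credentials\S*"), "[REDACTED_PATH]"),
]


def redact(text):
    """Strip credential-like strings before the text reaches a blog post."""
    for pattern, replacement in REDACTION_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Running every extracted input and output through `redact` before templating keeps the published HTML free of secrets while leaving file paths, resource IDs, and commands intact.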

Blog Post Generation Example

When the Stop hook executes after a session, it:

  1. Reads the session transcript from ~/.claude/projects/[PROJECT_ID]/sessions/
  2. Parses file modifications (paths under /Users/cb/Documents/repos/sites/queenofsandiego.com/, etc.)
  3. Extracts command executions and their outcomes
  4. Filters credentials and sensitive data
  5. Groups changes by domain (queenofsandiego.com, sailjada.com, dangerouscentaur.com, burialsatseasandiego.com)
  6. Publishes the generated HTML post to the matching domain's S3 bucket