Source: Subscription confirmation emails from each publication (4 raw stubs in raw/newsletter-ai-news-and-updates-*.md) + Tavily extraction of each publication’s home archive (May 2026)

Catalog of AI-industry newsletters the karpathy wiki’s /inbox-refresh pipeline tracks via the Kill-the-Newsletter aggregator feed at v7zv927a6l9yn2ynvexq@kill-the-newsletter.com. All subscriptions are forwarded to a single KtN address, then converted to one Atom feed (v7zv927a6l9yn2ynvexq.xml) that the puller polls each run. The 4 newsletters below were added to the user’s subscription list on 2026-05-09 — the welcome/confirmation emails arrived first; real issues will land on subsequent run cycles. This article catalogs what each newsletter is, what it covers, and what its recent issues look like — so future ingests can decide whether each issue clears the wiki’s strict bar without re-researching the publication itself.

Key Takeaways

  • Single KtN feed = one aggregator, multiple newsletters. The user’s pattern is to subscribe to every AI newsletter using the same Kill-the-Newsletter forwarding address, then route everything into one Atom feed. Cleaner pipeline (one feed URL in ~/.birdclaw/newsletter-feeds.json) at the cost of per-newsletter labeling — each stub’s author: frontmatter field is the only signal of which publication it came from.
  • Strict-bar default still applies. Subscribing to a newsletter ≠ trusting every issue. Each ingested entry from the aggregator gets the same wiki-worthiness triage as a YouTube transcript or Reddit post: substantive technique / falsifiable claim / Anthropic-side announcement = consider; promotional content / TLDR-of-news with no operator angle = skip.
  • Welcome emails are skip-by-default. Subscription-confirmation emails (“thanks for joining,” “click to confirm”) have no actual publication content. They’re tracked here as the trigger that catalogs the source, not as ingestable items. Real first issues from each will arrive on subsequent runs and get triaged independently.
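The triage defaults above can be sketched as a tiny rule function. The skip/consider labels (skip:tldr-of-news-no-operator-angle, skip:lead-magnet-no-content) are the wiki’s own from this article; the input flags and the skip:welcome-email label are illustrative assumptions, not the pipeline’s real schema:

```python
# Hypothetical sketch of the strict-bar triage described above.
# The entry flags are assumed inputs, not the puller's actual fields.

def triage(entry: dict) -> str:
    # Subscription-confirmation emails have no publication content.
    if entry.get("is_welcome_email"):
        return "skip:welcome-email"  # label name is illustrative
    # Substantive technique / falsifiable claim / Anthropic-side news = consider.
    if (entry.get("substantive_technique")
            or entry.get("falsifiable_claim")
            or entry.get("anthropic_announcement")):
        return "consider"
    # Cheat-sheet gates with no embedded technique.
    if entry.get("lead_magnet_only"):
        return "skip:lead-magnet-no-content"
    # Everything else defaults to the news-roundup skip.
    return "skip:tldr-of-news-no-operator-angle"
```

Usage: `triage({"anthropic_announcement": True})` returns `"consider"`, while an empty dict falls through to the roundup skip.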

Tracked Newsletters

The Neuron Daily

  • URL: theneurondaily.com
  • Authors: Grant Harvey, Matthew Robinson, Eric Gerard Ruiz
  • Frequency: Monday–Friday, 5am PT / 8am ET
  • Subscribers: 700,000+ (per landing page; companies cited: Microsoft, Apple, Tesla)
  • Focus: Daily digest of AI news, tools, research with strategic-insight framing for professionals; explicit anti-fluff positioning
  • Welcome email (raw stub newsletter-ai-news-and-updates-710795a4de.md): standard onboarding — get-us-to-primary-inbox instructions, brand voice teaser (“snuggly cats!”), positioning vs “fluff” AI content
  • Recent issue titles (Tavily May 2026):
    • “🙀 The 4-tool agent quietly powering OpenClaw” — coding agent named Pi
    • “😺 Anthropic 🤝 SpaceX data center deal”
    • “If you canceled Claude Code in April, try again.”
    • “😺 Google ran out of cloud” — big-tech earnings TL;DR
    • “😺 Anthropic just LAPPED OpenAI”
    • “Why OpenAI & Anthropic should sell computers”
    • “😺 Mayo’s AI spotted cancer 3 years before doctors did” — White House AI review
    • “🙀 SubQ ships 12M tokens at 1/5 the cost”
    • “🎙️ Watch: How Isomorphic Labs works to drug ‘undruggable’ diseases”
  • Wiki triage prior: Anthropic-side announcements (SpaceX deal, model releases) and concrete benchmarks (SubQ token economics) clear the bar. AI-news roundups without operator-actionable content default to skip:tldr-of-news-no-operator-angle.

Finxter (Christian Mayer)

  • URL: blog.finxter.com / finxter.com
  • Author: Christian Mayer + the Finxter team
  • Frequency: Roughly daily; cadence varies by category
  • Subscribers: 100,000+
  • Focus: Vibe-coding + AI engineering tutorials (Python heavy), career-as-coder, AI-side-business mechanics (“digital assets,” monetizing AI skills); also distributes free cheat sheets and ebooks as lead magnets
  • Welcome email (raw stub newsletter-ai-news-and-updates-9734e118f7.md): “Confirm and Download Free Finxter Material” — lead-magnet onboarding, ebook gate
  • Recent posts (Tavily May 2026):
    • “6 Best AI Book Writers (2026): Tools for Authors & Self-Publishers”
    • “What Your Last 10 YouTube Videos Say About Your Future Business”
    • “[Free eBook] Automate the Boring Stuff and Make Money with Clawdbot” — ebook that explicitly mentions Clawdbot
    • “The JSON Trick – How to Scrape All Answers in a Subreddit?” — example: find user needs
  • Wiki triage prior: Tutorial content with reproducible workflow + falsifiable claim clears the bar (e.g., the Subreddit JSON-scraping technique = potential refresh for seo-content/clawdbot-competitive-intel or agents-agentic-systems/scrapecreators). Lead-magnet-only entries (cheat sheet downloads with no embedded technique) default to skip:lead-magnet-no-content.
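The “JSON Trick” title above presumably refers to Reddit’s public .json endpoints: appending `.json` to a listing URL returns the thread data as JSON. A minimal sketch assuming that reading; the User-Agent string and limit are placeholders, not values from the Finxter post:

```python
import json
import urllib.request

def subreddit_json_url(url: str, limit: int = 100) -> str:
    """Turn a subreddit listing URL into its .json API equivalent."""
    return url.rstrip("/") + f"/.json?limit={limit}"

def fetch_listing(url: str) -> list[dict]:
    """Fetch a subreddit listing as parsed post dicts."""
    # Reddit rejects the default urllib User-Agent, so set a descriptive one
    # (this string is a placeholder).
    req = urllib.request.Request(
        subreddit_json_url(url),
        headers={"User-Agent": "inbox-refresh-demo/0.1 (placeholder)"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Each child is one post; 'title' and 'selftext' carry the
    # find-user-needs signal the Finxter post's framing points at.
    return [child["data"] for child in data["data"]["children"]]
```

The same trick works on individual threads (`/comments/<id>.json`), which is the likely route to “all answers in a subreddit.”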

LLMs Research

  • URL: llmsresearch.com (Beehiiv-hosted)
  • Author: anonymous editorial team (single-author writing pattern; no name on the landing page)
  • Frequency: Stated daily; observed cadence is biweekly digests of research-paper highlights (Tavily snapshot shows last issues Mar 2025 — may be stale or low-frequency)
  • Focus: Categorizing and explaining LLM research papers as they’re published — performance, instruction tuning, cache management, quantization, unlearning, context length, architectural changes
  • Welcome email (raw stub newsletter-ai-news-and-updates-97a0cf7d8d.md): standard Beehiiv subscription confirmation
  • Recent issue titles (most recent dated Mar 22, 2025):
    • “LLM Research Highlights: March 1-15, 2025 [Part 2/2]” — reasoning + context-length innovations
    • “LLM Research Highlights: March 1-15, 2025” — performance, instruction tuning, cache, quantization, unlearning
    • “Breakthrough papers improving LLMs performance” — Feb 16-28, 2025
    • “Creative way of using LLMs” — practical applications from Feb 15-28, 2025
    • “Research papers improving performance of LLMs [3/3]” — context-length + architecture, Jan 16 – Feb 15, 2025
  • Wiki triage prior: Verify cadence on first ingest. The 2025 dates on the landing page suggest this newsletter may have gone dormant. If a real issue arrives, it likely contains a multi-paper digest — value depends on whether the analysis adds reproduction steps or operator-actionable patterns vs pure “here’s what was published” listing.

AI Hustle (Jason Stewart)

  • URL: aihustle.beehiiv.com (Beehiiv-hosted)
  • Author: Jason Stewart
  • Frequency: Self-described as regular, but observed posts skew heavily toward the 2023–2024 era — likely low-frequency or near-dormant as of the May 2026 fetch
  • Subscribers: 8,000+ (per landing page)
  • Focus: AI news roundup + tools + tutorials; broad consumer-AI coverage (model releases, art, music, ChatGPT updates)
  • Welcome email (raw stub newsletter-ai-news-and-updates-bf58159ac2.md): “Welcome to The AI Hustle (Reply Needed)” — Beehiiv list-confirmation flow
  • Recent issue titles (Tavily May 2026 — most are 2023-2024 era):
    • “Google’s Gemini is heating up”
    • “ChatGPT of music? New AI tools, AI master class, art and more”
    • “Meta working on new GPT-4 competitor”
    • “Coca-Cola’s AI generated flavour, AI tutorial, tools & more”
    • “Worldcoin - Paradigm Shift in Global Finance and Identity Verification”
    • “Apple’s New AI Model”
    • “OpenAI image editing”
    • “Microsoft’s AI PC time has arrived”
    • “Sam Altman on OpenAI drama & AGI”
    • “Bill Gates’s predictions for AI in 2024”
    • “Stability AI reveals text to image AI”
  • Wiki triage prior: The dated issue titles suggest this is a low-priority source vs the others. Default to skip:tldr-of-news-no-operator-angle unless an issue contains a concrete operator pattern. If the newsletter goes silent for 4+ weeks, consider removing it from the KtN forwarding list.

Implementation

  • Tool/Service: Kill-the-Newsletter — convert email newsletters to Atom feeds.
  • Setup: Single KtN feed (v7zv927a6l9yn2ynvexq) used as a catch-all. Subscribe to each newsletter using v7zv927a6l9yn2ynvexq@kill-the-newsletter.com. Atom feed at https://kill-the-newsletter.com/feeds/v7zv927a6l9yn2ynvexq.xml is registered in ~/.birdclaw/newsletter-feeds.json as feed ai-news-and-updates.
  • Cost: Free.
  • Integration notes: All forwarding goes through ONE KtN address — per-newsletter labeling is via the author: frontmatter field on each stub (which contains the bounce-tracking SMTP from-address). The pipeline runs idempotently — bootstrap window 7 days, per-feed last_seen cursor in ~/.birdclaw/newsletter-state.json. To add a new newsletter: subscribe with the same KtN address, no config edit needed.

Open Questions

  • Per-newsletter labeling. With one KtN feed, distinguishing The Neuron from Finxter requires regex-matching the author: frontmatter (e.g., theneurondaily.com → Neuron; finxter.com → Finxter). Compile-time auto-tagging via the body refiller would surface the publication name as a real frontmatter field — currently this is a manual inspection step.
  • Frequency drift detection. LLMs Research and AI Hustle both look low-frequency in May 2026. The wiki should flag publications where no new entry has arrived in 30+ days for re-evaluation (auto-prune from the watchlist or escalate to human review).
  • Lead-magnet vs content separation for Finxter. The publication mixes ebook downloads with substantive tutorials. The triage rule needs an explicit skip:lead-magnet-no-content reason for the former and a refresh-target rule for the latter.

Try It

  1. Subscribe to a new AI newsletter using v7zv927a6l9yn2ynvexq@kill-the-newsletter.com instead of your real email. The next /inbox-refresh run picks it up automatically.
  2. For each newsletter you’ve subscribed to that isn’t yet documented here, run /inbox-refresh and let the next issue arrive — then ingest it as a refresh of this catalog (add a new ### <Newsletter Name> section).
  3. Audit dormant subscriptions monthly — if a publication hasn’t appeared in 30 days of feed pulls, unsubscribe via its bounce-tracking from-address (the author: field on its last stub is the unsubscribe target).
  4. Treat each first real issue as a calibration sample — does the publication consistently produce wiki-worthy content, or is it 90% TLDR-of-news? Update the “Wiki triage prior” line in this article based on what you learn.
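Step 4’s calibration can be tracked as a simple running ratio per publication. Everything here is a hypothetical sketch of the bookkeeping, not part of the pipeline; the 10% consider-rate cutoff is an illustrative threshold, not a value from this article:

```python
# Sketch: track consider-vs-skip outcomes per publication to update
# the "Wiki triage prior" line. Threshold is an illustrative assumption.
from collections import Counter

def consider_rate(outcomes: list[str]) -> float:
    """Fraction of triaged issues labeled 'consider' (0.0 if none triaged)."""
    if not outcomes:
        return 0.0
    counts = Counter(outcomes)
    return counts["consider"] / len(outcomes)

def suggested_prior(outcomes: list[str], cutoff: float = 0.10) -> str:
    """Summarize a publication as worth watching or mostly news-TLDR."""
    rate = consider_rate(outcomes)
    if rate >= cutoff:
        return f"keep: {rate:.0%} of issues cleared the bar"
    return "mostly TLDR-of-news; keep skip-by-default prior"
```

A publication where 9 of 10 issues land on skip:tldr-of-news-no-operator-angle sits right at the cutoff; the point is that the prior becomes a measured number rather than a first-impression guess.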