---
name: youtube-ai-digest
description: Generates a daily YouTube AI Digest — a rich interactive HTML briefing organised by STORY, not by channel. Multiple channels covering the same topic are merged into one story card with ranked clips. Designed around a 60-minute daily watch budget, with flexible runtime flagged per story. Use this skill whenever the user asks for their YouTube digest, "what's new in AI today", "YouTube briefing", "best AI videos today", "digest my YouTube channels", or any variation. Requires a YouTube Data API v3 key.
---

# YouTube AI Digest Skill (Story-First)

Generates a daily AI video intelligence briefing. **Organised by story, not by channel.** When multiple channels cover the same topic, they are merged into one story card with ranked clips. Total recommended watch time targets 60 minutes, with runtime flagged flexibly per story so the user can decide how deep to go.

---

## API Key Configuration

This skill requires a **YouTube Data API v3 key**. Get one free at:
https://console.cloud.google.com/apis/credentials

Once you have it, replace the placeholder below in your copy of the skill:

```
YOUTUBE_API_KEY = "<PASTE_YOUR_API_KEY_HERE>"
```

**Security note:** Do NOT commit this key to a public repo. Treat it as a secret. Rotate if ever exposed.

**To update later:** Tell Claude "update my YouTube digest API key to AIzaSy..." and it will replace the value above.

---

## Channel Configuration

Customise this channel list for your own interests. Default starter set:

```json
{
  "channels": [
    { "handle": "@startups",       "name": "This Week in Startups",  "tier": 1, "cadence": "3x/week" },
    { "handle": "@allin",          "name": "All-In Podcast",         "tier": 1, "cadence": "weekly" },
    { "handle": "@peterdiamandis", "name": "Moonshots · Diamandis",  "tier": 1, "cadence": "weekly" },
    { "handle": "@matthew_berman", "name": "Matthew Berman",         "tier": 1, "cadence": "4x/week" },
    { "handle": "@aiadvantage",    "name": "The AI Advantage",       "tier": 2, "cadence": "weekly" },
    { "handle": "@ycombinator",    "name": "Y Combinator",           "tier": 2, "cadence": "varies" }
  ],
  "schedule": "07:00 local time daily",
  "lookback_hours": 72,
  "max_videos_per_channel": 2,
  "60_min_daily_budget": true
}
```

### Channel Tier Notes

- **Tier 1** — Always include; high signal, fast cadence. Cover first.
- **Tier 2** — Include when content is found; strong but slower.
- **Tier 3** — Include only when content found AND it's relevant to your focus area.

### Lookback window

Use **72 hours** (not 24h). Weekly channels post every 5–7 days, so a 24h window misses too much; a 72h window with cross-run de-duplication ensures nothing slips through even if a day's run is missed. De-duplicate across runs — if a video appeared in yesterday's digest, note it as "Already in yesterday's digest" and skip it.

### Suggested channels to consider adding

- `@BenBermanAI` — practical AI tools, high signal
- `@LexFridman` — deep technical interviews, high relevance
- `@TwentyMinuteVC` — VC/startup strategy
- `@NowYouSeeIt` — cultural/media AI impact

---

## Prerequisites

**YouTube Data API v3 key** is required. If not available, use **web search** to find the latest content from each channel (search `"{channel name}" latest video {month} {year}`) and build the digest from search results. Note at the top of the digest: "Built via web search — transcript-level timestamps unavailable."

**Transcript availability:** Most videos have auto-captions. If none, skip transcript analysis and just show the video with a "Watch manually" note.

---

## Workflow

### Step 1 — Fetch Videos (72h lookback)

For each channel, resolve the channel ID and fetch recent videos:

```
GET https://www.googleapis.com/youtube/v3/channels
  ?part=id,snippet&forHandle={handle}&key={API_KEY}

GET https://www.googleapis.com/youtube/v3/search
  ?part=snippet&channelId={CHANNEL_ID}&type=video&order=date
  &publishedAfter={ISO_72H_AGO}&maxResults=2&key={API_KEY}
```

Extract per video: `videoId`, `title`, `publishedAt`, `description` (first 200 chars), `thumbnailUrl`, and channel name.

**If YouTube API blocked:** Fall back to web search as described above.
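As a sketch, the two request URLs above can be assembled like this. The helper names are illustrative, and the actual HTTP fetch (plus error handling) is left out since it needs a live key:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

API_BASE = "https://www.googleapis.com/youtube/v3"

def iso_hours_ago(hours: int) -> str:
    """RFC 3339 timestamp for `publishedAfter` (e.g. the 72h lookback)."""
    ts = datetime.now(timezone.utc) - timedelta(hours=hours)
    return ts.strftime("%Y-%m-%dT%H:%M:%SZ")

def build_channel_url(handle: str, api_key: str) -> str:
    """channels.list request to resolve a @handle to a channel ID."""
    params = {"part": "id,snippet", "forHandle": handle, "key": api_key}
    return f"{API_BASE}/channels?{urlencode(params)}"

def build_search_url(channel_id: str, api_key: str,
                     lookback_hours: int = 72, max_results: int = 2) -> str:
    """search.list request for a channel's recent uploads, newest first."""
    params = {
        "part": "snippet",
        "channelId": channel_id,
        "type": "video",
        "order": "date",
        "publishedAfter": iso_hours_ago(lookback_hours),
        "maxResults": max_results,
        "key": api_key,
    }
    return f"{API_BASE}/search?{urlencode(params)}"
```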

---

### Step 2 — Fetch Transcripts

```python
import subprocess

# Install at runtime; --break-system-packages is needed in PEP 668 managed envs.
subprocess.run(
    ["pip", "install", "youtube-transcript-api", "--break-system-packages", "-q"],
    check=True,
)

from youtube_transcript_api import YouTubeTranscriptApi

def get_transcript(video_id):
    """Return [{"start": seconds, "text": ...}, ...] or None if no captions."""
    try:
        transcript = YouTubeTranscriptApi.get_transcript(video_id)
        return [{"start": int(item["start"]), "text": item["text"]} for item in transcript]
    except Exception:  # no captions, captions disabled, or transient fetch error
        return None
```

Format: `[{start_seconds}] {text}`. Truncate to ~6000 words if very long (first 30% + last 30%, middle summarised).
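A minimal truncation sketch. It assumes the 30% slices are measured in words of the formatted output (the spec doesn't pin this down) and leaves a placeholder where the summarised middle would go:

```python
def format_transcript(entries, max_words=6000):
    """Render entries as '[start] text' lines; if the result exceeds
    max_words, keep the first 30% and last 30% and elide the middle."""
    lines = [f"[{e['start']}] {e['text']}" for e in entries]
    text = "\n".join(lines)
    words = text.split()
    if len(words) <= max_words:
        return text
    head = words[: int(max_words * 0.3)]
    tail = words[-int(max_words * 0.3):]
    # Placeholder — the skill summarises the elided middle separately.
    return " ".join(head) + "\n[... middle elided ...]\n" + " ".join(tail)
```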

---

### Step 3 — Claude Analysis (per video)

Customise the user context line for your own role. Call the Anthropic API with this system prompt:

```
You are an AI insight extractor for the user, who is a [ROLE/CONTEXT — e.g. "CEO of an AI consulting firm"]. The user watches these videos to [PURPOSE — e.g. "stay ahead on AI, business strategy, and entrepreneurship"].

Extract the 3 most valuable moments from this transcript — genuine AHA moments a busy user would jump to immediately.

For each moment return:
- timestamp_seconds: integer
- timestamp_display: "MM:SS"
- headline: 8-10 word punchy headline
- insight: 2-3 sentences on why this matters to the user specifically
- category: one of [AI Breakthrough, Business Strategy, Tool/Demo, Contrarian Take, Market Signal, Practical Tip]
- heat_score: 1-10
- story_tags: list of 1-3 short topic tags (e.g. ["Anthropic", "Pentagon", "autonomous-AI"]) — used to cluster clips across channels into stories

Also return:
- video_summary: 2-sentence TL;DR
- overall_relevance: 1-10
- est_watch_mins: estimated full video watch time in minutes
- skip_reason: null or short reason to skip

Return ONLY valid JSON. Schema:
{
  "video_summary": "string",
  "overall_relevance": number,
  "est_watch_mins": number,
  "skip_reason": null | "string",
  "moments": [
    {
      "timestamp_seconds": number,
      "timestamp_display": "string",
      "headline": "string",
      "insight": "string",
      "category": "string",
      "heat_score": number,
      "story_tags": ["string"]
    }
  ]
}
```
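The model's reply should be validated before clustering. A minimal check against the schema above (the helper name is illustrative):

```python
import json

REQUIRED_MOMENT_KEYS = {"timestamp_seconds", "timestamp_display", "headline",
                        "insight", "category", "heat_score", "story_tags"}

def parse_analysis(raw: str):
    """Parse the model's JSON reply and verify required keys;
    return None if the reply is malformed so the video can be skipped."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not {"video_summary", "overall_relevance", "est_watch_mins", "moments"} <= data.keys():
        return None
    for m in data["moments"]:
        if not REQUIRED_MOMENT_KEYS <= m.keys():
            return None
    return data
```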

---

### Step 4 — Story Clustering (KEY STEP — replaces per-channel ranking)

After all videos are analysed, group moments by shared `story_tags`:

1. **Identify stories**: Group moments whose `story_tags` overlap significantly (e.g. all moments tagged "Anthropic" + "Pentagon" from multiple channels form one story)
2. **Name each story**: Write a 6-8 word story headline that captures the actual news/insight (e.g. "Anthropic Refuses Pentagon — Existential Bet or Mistake?")
3. **Write a story summary**: 3-4 sentences synthesising across all clips covering this story
4. **Rank clips within the story**: Sort by `heat_score` descending. Include the top clip first (the one that covers the story best), then supporting clips from other channels beneath it
5. **Flag the best single clip**: Mark the highest `heat_score` clip in each story as "▶ Best clip" — this is the one to watch if time is short
6. **Standalone moments**: Any moment that doesn't cluster with others becomes its own single-clip story

**Story clustering rules:**
- Minimum 2 clips to form a multi-source story; otherwise single-clip story
- Maximum 4 clips per story (pick the 4 highest heat_score)
- A clip can only appear in one story (assign to the story where its tags match most)
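The rules above can be sketched as greedy clustering. Treating "significant overlap" as at least one shared tag is an assumption, since the spec doesn't fix a threshold:

```python
def cluster_moments(moments, min_shared_tags=1, max_clips=4):
    """Greedy clustering: each moment joins the first story sharing at
    least min_shared_tags tags, else starts a new single-clip story.
    Iterating in heat order keeps clips within each story heat-ranked,
    and each clip lands in exactly one story."""
    stories = []  # each: {"tags": set, "clips": [moment, ...]}
    for m in sorted(moments, key=lambda m: -m["heat_score"]):
        tags = set(m["story_tags"])
        for s in stories:
            if len(tags & s["tags"]) >= min_shared_tags:
                s["tags"] |= tags
                s["clips"].append(m)
                break
        else:
            stories.append({"tags": tags, "clips": [m]})
    for s in stories:
        s["clips"] = s["clips"][:max_clips]  # cap at the 4 hottest clips
    return stories
```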

---

### Step 5 — Build the 60-Min Playlist

After clustering:

1. Rank stories by the `heat_score` of their top clip (descending)
2. For each story, calculate `est_story_mins` = sum of estimated clip durations (~3 min per clip as a default, or apportion the video's `est_watch_mins` across its clips)
3. Flag each story with its `est_story_mins` prominently
4. Running total: keep accumulating until you hit 60 mins
5. Stories that push past 60 mins are moved to an **"If You Have More Time"** section (not hidden, just de-emphasised)
6. Never truncate a story mid-way — include or exclude whole stories

**Target:** 5–8 stories fitting within 60 mins total.
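The budgeting above is a greedy whole-story fill. This sketch uses the ~3 min/clip default; the proportional `est_watch_mins` variant is omitted:

```python
def build_watchlist(stories, budget_mins=60, mins_per_clip=3):
    """Rank stories by their top clip's heat score, then accumulate
    whole stories until the budget is hit; overflow stories go to the
    'If You Have More Time' section rather than being truncated."""
    ranked = sorted(stories, key=lambda s: -max(c["heat_score"] for c in s["clips"]))
    main, more_time, total = [], [], 0
    for s in ranked:
        est = len(s["clips"]) * mins_per_clip
        s = {**s, "est_story_mins": est}
        if total + est <= budget_mins:
            main.append(s)
            total += est
        else:
            more_time.append(s)
    return main, more_time, total
```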

---

### Step 6 — Read Design Skill

Before writing any HTML, read `/mnt/skills/public/frontend-design/SKILL.md` (if available) and apply its principles.

---

### Step 7 — Build the HTML Digest

Single self-contained HTML file — pure HTML, CSS, and vanilla JS, with no external dependencies except Google Fonts.

#### Brand (default palette — replace with your own)

| Token | Value |
|---|---|
| Font | Mulish (all weights) |
| Yellow | `#f3af00` |
| Blue | `#207796` |
| Light Blue | `#dff3fa` |
| Charcoal | `#201600` |
| Background | `#ffffff` |

#### Page structure

**1. Sticky header** — brand mark + "Daily AI Digest" + date + local time + animated pulse dot + running total "60 min watch list"

**2. Hero section** — Dark charcoal background
- Large headline: "Your AI World. {Date}."
- Subtitle: {N} stories · {X} channels · 60 min watch list · Generated {time}

**3. Stats bar** — 4 tiles:
- Stories today
- Clips curated
- Channels active
- Total watch time (mins)

**4. Today's 60-Min Watch List** — Main section, stories sorted by importance

Each story card contains:
- **Story headline** (the actual news topic, not channel name)
- Source chips showing which channels covered it (e.g. "All-In · TWiST · Moonshots")
- Story synthesis (3-4 sentence summary synthesising across all clips)
- **⏱ Est. {N} mins** badge — prominently displayed
- Ranked clip list below:
  - Each clip: `▶ Best clip` or `+ Also covers this` label, channel name, timestamp deeplink, category pill, headline, 2-sentence insight, heat score bar
  - Clips sorted by heat_score — best first
  - Clips are clickable `<a>` tags opening YouTube at exact timestamp

**5. If You Have More Time** — Collapsible section with overflow stories beyond 60 mins. Same card format but visually subdued.

**6. Skipped** — Collapsible, dimmed. Channels with no content or skip_reason.

**7. Channel health bar** — Each channel, last video date, tier badge, active/inactive dot.

**8. Footer** — Brand mark + run-again note.

#### Timestamp deeplinks

```
https://www.youtube.com/watch?v={videoId}&t={timestamp_seconds}s
```
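Two small helpers cover the deeplink and the `timestamp_display` format from the analysis schema:

```python
def mmss(seconds: int) -> str:
    """Format timestamp_seconds as zero-padded MM:SS for display."""
    return f"{seconds // 60:02d}:{seconds % 60:02d}"

def deeplink(video_id: str, seconds: int) -> str:
    """YouTube URL that opens the video at the exact moment."""
    return f"https://www.youtube.com/watch?v={video_id}&t={seconds}s"
```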

#### Key design principles

- **Story-first**: Channel names are secondary labels, never headings
- **Runtime always visible**: Every story shows ⏱ mins badge — the user can decide how deep to go
- **Best clip highlighted**: One clip per story is marked as the definitive watch
- Staggered fade-up animations on load
- Cards lift on hover
- Heat score bars animate in on scroll
- Must-watch story badges pulse
- Mobile responsive: single column below 640px

---

### Step 8 — Save and Present

Save to `/mnt/user-data/outputs/youtube_digest_{YYYY-MM-DD}.html`

Use `present_files` to deliver.

In chat, say only:
> "Your AI digest for {date} is ready — {N} stories, {X} clips across {Y} channels. 60-min watch list built."

---

## Error Handling

| Error | Behaviour |
|---|---|
| YouTube API blocked/unavailable | Fall back to web search per channel; note "web search fallback" in digest header |
| Channel not found | Log in footer health bar; continue |
| No transcript | Show clip card with "No transcript — watch manually"; skip insight extraction |
| API quota exceeded | Tell the user: "YouTube API quota hit. Falling back to web search for today's digest." |
| Video < 3 mins | Skip transcript analysis; show title + link only |
| Story clustering produces 0 multi-source stories | That's fine — show all as single-clip stories |

---

## Scheduled / Automated Runs

If using a desktop automation tool (e.g. Cowork or similar):

**Trigger phrase:** "Run my YouTube AI digest for today"

1. Trigger fires at desired time (e.g. 06:55 AM local)
2. Opens Claude, types the trigger phrase
3. Skill executes, generates HTML
4. Automation opens in browser or saves to designated folder

---

## Context to customise

Before using this skill, fill in:

- **User role / focus area** — what they watch AI videos to learn (used in the insight extraction prompt)
- **Business / project priorities** — framed so insights connect back to real work
- **Preferred content types** — practical tools, strategy, contrarian takes, etc.
- **Time budget** — default is 60 mins/day; adjust as needed
- **Local timezone** — for scheduled runs
