Extract all YouTube links from the page and get descriptions from the videos themselves


# get page to disk
curl "https://news.ycombinator.com/item?id=32220192" --output curl.htm
# filter hrefs down to YouTube URLs, undoing HN's &#x2F; escaping
grep -Po '(?<=href=")[^"]*(?=")' curl.htm | sed 's/&#x2F;/\//g' | grep -E 'youtube\.com|youtu\.be' | sort -u > woot.htm
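The extraction step above can be wrapped in a small function so it is easy to reuse and to test offline. This is a sketch: the regex only handles double-quoted `href` attributes, and the `youtube.com`/`youtu.be` filter is an assumption about which link forms appear on the page.

```shell
# Extract unique YouTube links from HTML on stdin.
# Assumes GNU grep (-P for the lookbehind/lookahead) and that HN
# encodes slashes in hrefs as &#x2F;.
extract_yt_links() {
  grep -Po '(?<=href=")[^"]*(?=")' \
    | sed 's/&#x2F;/\//g' \
    | grep -E 'youtube\.com|youtu\.be' \
    | sort -u
}

# usage:
#   curl -s "https://news.ycombinator.com/item?id=32220192" | extract_yt_links > woot.htm
```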

goal: Get the descriptions (and perhaps thumbnail links) from YouTube itself using yt-dlp, and generate a simple markdown page.

A slow HNyoutubes script and test build.