Extract all YouTube links from the page and get descriptions from the videos themselves
25. 7. 2022
https://news.ycombinator.com/item?id=32220192
# get page to disk
curl "https://news.ycombinator.com/item?id=32220192" --output curl.htm
# filter stuff to only youtube-like urls (and un-escape any \/ in hrefs)
grep -Po '(?<=href=")[^"]*' curl.htm | grep -i 'youtu' | sed 's|\\/|/|g' | sort -u > woot.htm
goal: Get the descriptions (and perhaps thumbnail links) from yt itself using yt-dlp, and generate a simple markdown page.
A slow HN-youtubes script; test build.
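A minimal sketch of that goal, assuming woot.htm holds one URL per line; the output filename hn-youtubes.md and the md_entry/fetch_all helpers are made up here, and the %(title)s / %(thumbnail)s / %(description)s keys are yt-dlp output-template fields:

```shell
#!/bin/sh
# Sketch (untested against the live thread): for each YouTube URL in
# woot.htm, ask yt-dlp for metadata only and emit a markdown section.

# format one markdown entry from already-fetched metadata
# $1 = url, $2 = title, $3 = thumbnail url
md_entry() {
  printf '## [%s](%s)\n\n![thumb](%s)\n\n' "$2" "$1" "$3"
}

# loop over the url list; --skip-download + --print fetch metadata only
fetch_all() {
  while read -r url; do
    title=$(yt-dlp --skip-download --print '%(title)s' "$url")
    thumb=$(yt-dlp --skip-download --print '%(thumbnail)s' "$url")
    md_entry "$url" "$title" "$thumb"
    # description can span many lines, so print it on its own
    yt-dlp --skip-download --print '%(description)s' "$url"
    printf '\n'
  done < woot.htm > hn-youtubes.md   # hypothetical output file
}
```

Calling fetch_all is slow because it runs yt-dlp three times per video; a single --print with all three fields in one template would cut that down.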