Chromium / Puppeteer site crawler
This crawler performs a BFS starting from a given site entry point. It will not leave the entry point's domain, and it will not crawl a page more than once. Given a shared Redis host or cluster, the crawler can be distributed across multiple machines or processes. Discovered pages are stored in a Mongo collection, each with its URL, its outbound URLs, and its radius (link depth) from the origin.
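The core loop is small enough to sketch. Below is a minimal, hypothetical sketch of one BFS step under the architecture described above, assuming a Redis SET for visited-URL dedupe, a Redis LIST as the shared frontier, and a `pages` Mongo collection. The key names, collection names, helper names, and connection URLs are illustrative assumptions, not the repo's actual code:

```js
// Sketch only: all identifiers (frontier, visited, pages) are assumptions.
const puppeteer = require('puppeteer');
const { createClient } = require('redis');
const { MongoClient } = require('mongodb');

async function crawlOnce(browser, redis, pages, origin, maxRadius) {
  // Frontier entries carry the URL and its radius (link depth).
  const raw = await redis.lPop('frontier');
  if (!raw) return false; // frontier drained
  const { url, radius } = JSON.parse(raw);

  // SADD returns 0 when the URL is already in the set, so the visited
  // check is atomic across distributed crawler processes.
  if ((await redis.sAdd('visited', url)) === 0) return true;

  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'domcontentloaded' });
  // Keep same-origin links only; the crawler never leaves the entry domain.
  const links = await page.$$eval('a[href]', (as) => as.map((a) => a.href));
  const outbound = [...new Set(links)].filter((l) => l.startsWith(origin));
  await page.close();

  // Persist the page, then enqueue its unvisited links one radius deeper.
  await pages.insertOne({ url, outbound, radius });
  if (radius < maxRadius) {
    for (const link of outbound) {
      if (!(await redis.sIsMember('visited', link))) {
        await redis.rPush('frontier', JSON.stringify({ url: link, radius: radius + 1 }));
      }
    }
  }
  return true;
}

async function main(entry, maxRadius = 2) {
  const browser = await puppeteer.launch();
  const redis = createClient();
  await redis.connect();
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const pages = mongo.db('crawler').collection('pages');

  // Seed the frontier with the entry URL at radius 0, then drain it.
  await redis.rPush('frontier', JSON.stringify({ url: entry, radius: 0 }));
  while (await crawlOnce(browser, redis, pages, new URL(entry).origin, maxRadius)) {}

  await browser.close();
  await redis.quit();
  await mongo.close();
}

main('https://www.dadoune.com').catch(console.error);
```

Because the visited check is a single atomic `SADD`, any number of processes can pull from the same frontier without duplicating work.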
Install dependencies:

```sh
yarn
```
Start a crawl. Processes pointed at the same Redis host share the frontier, so running the command in multiple terminals splits the work without re-crawling pages:

```sh
./crawl -u https://www.dadoune.com # Terminal 1
./crawl -u https://www.dadoune.com # Terminal 2
```
Resume a crawl after a process exits prematurely, or attach another crawler to an existing crawl:

```sh
./crawl -r
```

Enable debug logging:

```sh
DEBUG=crawler:* ./crawl -u https://www.dadoune.com
```

Options:

- `--maxRadius` or `-m`: the maximum link depth the crawler will explore from the entry URL.
- `--resume` or `-r`: resume crawling after prematurely exiting a process, or add additional crawlers to an existing crawl.
- `--url` or `-u`: the entry point URL to kick the crawler off.
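For reference, the flag handling documented above could look roughly like the following yargs sketch. The option definitions are an assumption for illustration, not the crawl script's actual source:

```js
// Hypothetical sketch of the documented flags, using yargs.
const yargs = require('yargs/yargs');
const { hideBin } = require('yargs/helpers');

const argv = yargs(hideBin(process.argv))
  .option('url', { alias: 'u', type: 'string', describe: 'entry point URL to kick the crawler off' })
  .option('maxRadius', { alias: 'm', type: 'number', describe: 'maximum link depth to explore from the entry URL' })
  .option('resume', { alias: 'r', type: 'boolean', describe: 'resume or join an existing crawl' })
  .check((args) => {
    // A crawl needs either a fresh entry point or an existing run to resume.
    if (!args.url && !args.resume) throw new Error('provide --url or --resume');
    return true;
  })
  .parse();

console.log(argv.url, argv.maxRadius, argv.resume);
```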