A hipster web crawler
Snarfbot

This will eventually be a web crawler that saves websites as plain-text files. For now, please enjoy a few CLI tools written as a proof of concept. Comments, compliments, complaints, and pull requests are welcome.

web2text

A command-line tool that does exactly what it says on the tin: it extracts the content of a web document to plain text, with a choice of two scraping engines.

The scrape command attempts extraction with Newspaper3k, which produces a clean text file and tries to filter out things like comment sections, page-navigation links, and so forth. However, it may truncate long pages, has trouble with some JavaScript navigation elements, and uses a fairly obvious user agent that some sites may block or rate-limit.

The uglydump command dumps the contents of a page with minimal filtering, using a spoofed user agent by default. You may get JavaScript source and style information in your output, but minimal filtering was chosen so as not to lose potentially important data. The default user agent is a current-ish version of Firefox on Ubuntu (X11).
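The uglydump approach could be sketched with the standard library alone: send a request with a browser-like User-Agent header and collect text nodes with minimal filtering. The UA string and the helper names below are illustrative assumptions, not the exact values web2text.py uses:

```python
# Hedged sketch of an uglydump-style fetch: spoofed User-Agent plus a
# permissive text extractor that deliberately keeps script/style text
# rather than risk discarding important data.
import urllib.request
from html.parser import HTMLParser

# Illustrative Firefox-on-Ubuntu (X11) UA; the real tool may differ.
SPOOFED_UA = ("Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:87.0) "
              "Gecko/20100101 Firefox/87.0")


def build_request(url: str) -> urllib.request.Request:
    """Return a Request that presents a browser-like User-Agent."""
    return urllib.request.Request(url, headers={"User-Agent": SPOOFED_UA})


class TextDumper(HTMLParser):
    """Collect every non-blank text node, including script and style
    bodies -- minimal filtering by design."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

    def text(self):
        return "\n".join(self.chunks)
```

A page could then be dumped with `urllib.request.urlopen(build_request(url))` and the response body fed to `TextDumper`.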