# Snarfbot

*A hipster web crawler.*

This will eventually be a web crawler that saves websites as plain-text files. For now, please enjoy a few CLI tools written as a proof of concept. Comments, compliments, complaints, and pull requests accepted.

## web2text

A command-line tool that does exactly what it says on the tin: it extracts the content of a web document to plain text, with a choice of two scraping engines.

The scrape command attempts extraction with Newspaper3k, which produces a pretty text file and tries to filter out things like comment sections, page-navigation links, and so forth. However, it may truncate long pages, has trouble with some JavaScript navigation elements, and uses a fairly obvious user agent that may be blocked or limited by some sites.
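For reference, the core of a Newspaper3k extraction looks roughly like the sketch below. This is an illustration of the library, not snarfbot's actual code, and the URL is a placeholder.

```python
# Minimal sketch of Newspaper3k extraction (illustrative, not snarfbot's code).
from newspaper import Article

article = Article("https://example.com/some-article")  # placeholder URL
article.download()  # fetch the page with newspaper's default user agent
article.parse()     # run article extraction and boilerplate filtering
print(article.title)
print(article.text)  # cleaned body text, minus nav links, comments, etc.
```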

The uglydump command dumps the contents of a page with minimal filtering, using a spoofed user agent by default. You may get JavaScript source and style information in your output, but the minimal filtering was chosen in order not to lose potentially important data. The default user agent is a current-ish version of Firefox on Ubuntu (X11).
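A minimal sketch of the same idea, assuming requests plus BeautifulSoup; the actual uglydump implementation and its exact user-agent string may differ.

```python
# Minimal sketch of an "ugly dump": fetch with a spoofed user agent and strip
# tags with minimal filtering. Illustrative only, not snarfbot's actual code.
import requests
from bs4 import BeautifulSoup

# Assumed example of a Firefox-on-Ubuntu user agent; not necessarily the
# exact string snarfbot ships with.
UA = "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0"

def uglydump(url: str) -> str:
    resp = requests.get(url, headers={"User-Agent": UA}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    # get_text() keeps nearly everything, including <script> and <style>
    # contents; that is the "minimal filtering" trade-off described above.
    return soup.get_text(separator="\n")

if __name__ == "__main__":
    print(uglydump("https://example.com"))
```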