Compare commits

...

81 Commits

Author SHA1 Message Date
Ploum dc75136d07 fix a crash in opnk when cached but not downloaded 2024-05-11 22:49:03 +02:00
Ploum 51aa7fe853 fix spartan protocol error 2024-05-08 11:58:06 +02:00
Ploum 9a7e88d01b fix crash when feedparser is crashing on a bad RSS 2024-04-23 13:07:23 +02:00
Étienne Mollier 339acef720 opnk.py: fix warning with python3.12.
As initially identified by Paul Wise in [Debian Bug#1064209], opnk.py
experiences the following warning when running under python3.12:

	$ python3.12 opnk.py gemini://ploum.net >/dev/null
	/home/emollier/debian/forward-upstream/offpunk/opnk.py:52: SyntaxWarning: invalid escape sequence '\%'
	  less_prompt = "page %%d/%%D- lines %%lb/%%L - %%Pb\%%"

This is due to the interpretation of escape sequences being less
relaxed in the new Python interpreter version.  Doubling the backslash
is one way to resolve this issue.

[Debian Bug#1064209]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1064209

Signed-off-by: Étienne Mollier <emollier@debian.org>
2024-02-20 15:16:51 +01:00
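The doubled-backslash fix can be sketched in a few lines (a minimal illustration, not the actual opnk.py source):

```python
# Python 3.12 warns on invalid escape sequences such as "\%" in normal
# string literals. Doubling the backslash makes the intent explicit:
less_prompt = "page %%d/%%D- lines %%lb/%%L - %%Pb\\%%"

# A raw string is an equivalent way to silence the warning:
less_prompt_raw = r"page %%d/%%D- lines %%lb/%%L - %%Pb\%%"

assert less_prompt == less_prompt_raw
```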
Ploum 9c8693dc09 display empty files instead of opening them with xdg-open 2024-02-20 10:45:43 +01:00
Ploum 4e3d3ce62d netcache: add support for IPv6 hostname bug #40 2024-02-15 22:59:27 +01:00
Ploum d427287784 offpunk: fix IPv6 as an URL (bug #40) 2024-02-15 16:16:37 +01:00
Ploum 4a3ec61f1f 2.2 - February 13th 2024
- cache folder is now configurable through the $OFFPUNK_CACHE_PATH environment variable (by prx)
- offpunk: adding a URL to a list now updates the view mode if the url is already present
- netcache: solve an infinite gemini loop with code 6X (see also bug #31)
- ansicat: added support for the <video> HTML element
- ansicat: if chafa fails to load an image, fall back to timg if available
- offpunk: add list autocompletion to "tour"
- offpunk: removed "blackbox", which has not been used nor maintained
- offpunk: "gus" was broken, it is functional again
- opnk/offpunk: more informative prompt in less
- ansicat: added support for the HTML description elements <dt> and <dd> (by Bert Livens)
- opnk: added "--mode" command-line argument (bug #39)
- offpunk: support for "preformatted" theming (bug #38)
- opnk/netcache: added "--cache-validity" command-line argument (bug #37)
- ansicat: consider files as XML, not SVG, if they don’t have a .svg extension
- offpunk: fix "view link" crashing with links to empty files
2024-02-12 22:25:46 +01:00
Ploum 9bec3b48dd fix view link crashing with empty files 2024-02-11 12:14:52 +01:00
Ploum 36c9709bc4 consider files as XML, not SVG, if no .svg extension 2024-02-10 22:09:28 +01:00
Ploum 6ad59020a1 force new paragraph after a preformatted html block 2024-02-07 10:22:24 +01:00
Ploum 95b5c96534 adding a linebreak after a preformatted block 2024-02-06 15:10:31 +01:00
Ploum 8be531e5e2 a simple typo was forcing images to be rendered twice 2024-02-01 14:33:44 +01:00
Ploum 0fda6f5623 --cache-validity argument added to opnk and netcach (#37) 2024-01-31 17:54:50 +01:00
Ploum 1bf98cf060 support for preformatted theming - close #38 2024-01-31 17:25:33 +01:00
Ploum 2faf460f0f close #39: implement --mode in opnk 2024-01-31 16:30:49 +01:00
Ploum 6484cf3426 less prompts: last line of the screen 2024-01-30 16:21:58 +01:00
Bert Livens 1cd331170c Added support for <dd> and <dt> tags to ansicat to render websites like https://fsl.software/ better.
Signed-off-by: Bert Livens <bert@bertlivens.be>
2024-01-29 16:51:21 +01:00
Ploum eea914018c remove old blackbox call 2024-01-29 15:26:13 +01:00
Ploum 79a3f9875f More informative prompt in LESS 2024-01-29 15:15:00 +01:00
Ploum 0a9fb62582 check that the cache_path ends with / 2024-01-23 21:21:15 +01:00
prx 87837fd1fb implement set cache directory
Hi,
find below a patch which lets the user set a custom cache folder.

The environment variable OFFPUNK_CACHE_PATH is used.
This way, it can be set globally in a profile, or occasionally before running offpunk.
It also avoids the pain of parsing options and dealing with flags in scripts.
Thank you for your attention.

Regards.

prx
2024-01-23 21:11:52 +01:00
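A minimal sketch of how such an environment-variable override can be resolved (the helper name and XDG fallback are assumptions for illustration, not the actual offpunk code):

```python
import os

def cache_path():
    # Hypothetical helper: honour $OFFPUNK_CACHE_PATH when set,
    # otherwise fall back to the usual XDG cache location.
    custom = os.environ.get("OFFPUNK_CACHE_PATH")
    if custom:
        return os.path.expanduser(custom)
    xdg_cache = os.environ.get("XDG_CACHE_HOME",
                               os.path.expanduser("~/.cache"))
    return os.path.join(xdg_cache, "offpunk")

# Set globally in a profile, or occasionally for a single run:
#   OFFPUNK_CACHE_PATH=/tmp/offpunk-cache offpunk
os.environ["OFFPUNK_CACHE_PATH"] = "/tmp/offpunk-cache"
print(cache_path())  # /tmp/offpunk-cache
```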
Ploum 01de6fe6ae changelog update 2024-01-23 14:12:22 +01:00
Ploum cf459e5295 offpunk: "gus" was broken, it is functionnal again 2024-01-06 21:24:51 +01:00
Ploum c86a377d98 offpunk: removed "blackbox", which has not been used nor maintained 2024-01-06 21:21:08 +01:00
Ploum fa0793ef16 Migrating self.current_url to the modded URL
- adding a URL to a list now updates the view mode if the url is already present
- self.current_url now contains the modded URL
2024-01-06 00:27:51 +01:00
Ploum 9d1fb1a3d4 add list autocompletion to tour 2023-12-25 00:14:04 +01:00
Ploum 6d7c45188f fallback to timg if chafa fails to load an image 2023-12-22 18:50:03 +01:00
Bert Livens 426161e35d Added initial support for html <video> elements, including for their poster tag.
Signed-off-by: Bert Livens <bert@bertlivens.be>
2023-12-19 22:24:35 +01:00
Ploum 1ed8ba749e solve an infinite loop with certificate 6X 2023-12-18 10:11:17 +01:00
Ploum ba5f6ecb91 easier error messages for pictures 2023-12-16 23:50:15 +01:00
Ploum 1bbd317c1d v2.1 - December 15th 2023
Two years ago, I started to modify AV-98 in order to browse Gemini
offline.
https://ploum.net/2021-12-06-offline-av98.html
https://ploum.net/2021-12-17-offline-gemini.html

It is interesting to reflect on those last two years and look at what
this little experiment has become. A tool I use daily. In fact, one of my
main tools with Neovim and Neomutt. Also a community. People
contributing. Sending patches, bug reports or even thank-you notes. A
few people hanging out in the offpunk:matrix.org room. I’m really
grateful for that and for all the people who keep the Gemini-sphere
alive.

This 2.1 release is also a more relaxed release. The 2.0 release was
well received, widely packaged and has very few bugs (compared to 1.0).

So nothing groundbreaking but, besides a few bugs fixed (including the
infamous gemini redirection bug), some nice little features.
Like highlighting "new links" in Gemini and RSS pages. I was unsure if
it would work; it turns out to be the best thing ever invented for reading
Gemini Antenna or long RSS feeds.

Also added: "copy title" and "copy link", allowing you to quickly
reference a page in a personal note or an email.

And, one day, I realized that I often wanted to know where a given link
was pointing to (a bit like hovering your mouse over a link in a
traditional browser). So I added "view XX", where XX is the number of
the link.

What is really interesting with this feature is how I wrote it: I
realized I had a need, opened offpunk.py, wrote a few lines of code
to test it and… that was it. No correction to do. It was good at the
very first try. Which is a testament to the fact that the 2.0
refactoring was actually a good thing.

So, here it is, the 2.1 release. Enjoy!

Changelog since 2.0:
- freshly updated gemtext/rss links are highlighted ("new_link" theme option)
- offpunk : new "copy title" and "copy link" function
- offpunk : new "view XX" feature where XX is a number to view information about a link
- ansicat: added "--mode" option
- redirections are now reflected in links and the cache (bug #28)
- ansicat: avoid a crash when urllib.parse.urljoin fails
- offpunk: Fix a crash when gus is called without parameters (Von Hohenheiden)
- ansicat: fixed a crash when parsing wrong hidden_url in gemini (bug #32)
- offpunk: offpunk --version doesn’t create the cache anymore (bug #27)
- ansicat: fix a crash with HTML without title (bug #33)
- netcache: gemini socket code can crash when IPv6 is disabled (mailing-list)
2023-12-12 16:53:54 +01:00
Ploum 52a3ef643a IPv6 might crash if disabled at the OS level
In his message "Bug in some gemini capsule", user Y C had crashes when
connecting to an IPv6-enabled capsule while IPv6 was disabled at the OS level.

This raised a crash when creating the socket in the netcache gemini code.
For whatever reason, the socket creation was not in the "try/catch"
section.

This should fix the issue.
2023-12-11 23:45:19 +01:00
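The shape of the fix can be sketched as follows (a hypothetical, reduced helper — not the actual netcache code):

```python
import socket

def gemini_connect(host, port=1965):
    """Sketch of the fix described above: socket creation sits *inside*
    the try block, so an OSError raised by socket() itself (e.g. when
    IPv6 is disabled at the OS level) is handled like any other
    connection failure instead of crashing the whole fetch."""
    last_error = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, proto=socket.IPPROTO_TCP):
        try:
            s = socket.socket(family, socktype, proto)  # may raise OSError
            s.settimeout(5)
            s.connect(addr)
            return s
        except OSError as e:
            last_error = e  # fall through to the next address (e.g. IPv4)
    raise last_error if last_error else OSError("no usable address")
```

With socket() outside the try, a capsule advertising an AAAA record would crash the fetch on an IPv6-less system; inside the try, the loop simply moves on to the IPv4 address.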
Ploum e3e81fe344 tentatively fix #33 2023-12-10 22:54:37 +01:00
Ploum f9e33914aa typo in changelog 2023-12-06 15:21:26 +01:00
Ploum f373144cca New view link feature
"view 12" will now give you insight about link 12
2023-12-06 15:11:18 +01:00
Ploum 92516082c1 Access to xdg folders now refactored to be a function
Instead of creating three global variables, an xdg() function now returns
the DATA, CONFIG and CACHE folders.

This allows us to create the cache only when tentatively accessed
(this fixes bug #27)
2023-12-04 11:10:20 +01:00
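A hypothetical reduction of that refactor (names and fallbacks assumed, not the actual offpunk code):

```python
import os

def xdg(folder):
    """One accessor returning the DATA, CONFIG or CACHE directory on
    demand, instead of three globals computed at import time. Because
    the path is only built (and would only be created) when requested,
    "offpunk --version" never has to touch the cache (bug #27)."""
    home = os.path.expanduser("~")
    bases = {
        "data": os.environ.get("XDG_DATA_HOME", os.path.join(home, ".local", "share")),
        "config": os.environ.get("XDG_CONFIG_HOME", os.path.join(home, ".config")),
        "cache": os.environ.get("XDG_CACHE_HOME", os.path.join(home, ".cache")),
    }
    # os.makedirs(path, exist_ok=True) would go here, on first access only.
    return os.path.join(bases[folder], "offpunk") + "/"
```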
Ploum aad1730cd8 New "copy link" and "copy title" features 2023-12-03 13:18:12 +01:00
Ploum 3164658352 Fixed a crash when parsing hidden_urls bug #32
GemtextRenderer parses the text for URLs not starting with "=>" and
adds them later to the list, to avoid having to copy/paste with the
mouse. This is a hidden feature.

In this case, the url was not supposed to be one and included [] chars,
which prevented urllib from knowing how to handle it.

The fix involved refactoring the looks_like_url function out of offpunk
and adding it to offutils, so ansicat can check that a string
looks_like_url before giving it to urllib.
2023-12-02 00:11:34 +01:00
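A hypothetical simplification of such a shared helper (the real offutils function may differ):

```python
import urllib.parse

def looks_like_url(text):
    """Cheap sanity check run before handing candidate URLs to urllib,
    so strings containing characters like "[" or "]" are rejected
    instead of blowing up later parsing."""
    if not text or any(c in text for c in ' []{}"'):
        return False
    try:
        parsed = urllib.parse.urlparse(text)
    except ValueError:  # e.g. a malformed IPv6 literal
        return False
    return bool(parsed.scheme) and bool(parsed.netloc)

assert looks_like_url("gemini://ploum.net/")
assert not looks_like_url("see [bug #32] for details")
```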
Ploum c3aff6755e follow redirects everywhere. Should fix #28
2023-12-01 17:14:22 +01:00
vonhohenheiden@tutanota.com 3862183fee fix crash on calling 'gus' without query parameter
Apologies for the additional work and thanks for accepting the patch!
Please find it attached.
Best,
Von

Nov 26, 2023, 10:20 by sourcehut23@ploum.eu:

> On 23/11/26 08:04, vonhohenheiden@tutanota.com wrote:
> >This is a simple fix for a crash I observed when calling 'gus' without any parameters. The following stacktrace can be reproduced by ´python3 offpunk.py > gus [Enter]´
>
>
> Hi,
>
> Thanks for catching this. The patch looks good but cannot apply. It
> might be because the stacktrace is also considered as a patch by git am
> (that’s my only explanation).
>
> Do you mind sending it again without the stacktrace or as an attachment.
>
> A good practice in offpunk is also to add a line in the CHANGELOG (with
> your name at the end).  If not done, don’t worry, I will do it in a
> later commit.
>
> Thanks!
>
> Ploum
>
> >---
> >Traceback (most recent call last):
> >  File "offpunk/offpunk.py", line 1910, in <module>
> >    main()
> >  File "offpunk/offpunk.py", line 1905, in main
> >    gc.cmdloop()
> >  File "python3.11/cmd.py", line 138, in cmdloop
> >    stop = self.onecmd(line)
> >           ^^^^^^^^^^^^^^^^^
> >  File "python3.11/cmd.py", line 217, in onecmd
> >    return func(arg)
> >           ^^^^^^^^^
> >  File "offpunk/offpunk.py", line 949, in do_gus
> >    self._go_to_url(urllib.parse.urlunparse("gemini","geminispace.info","/search","",line,""))
> >                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >TypeError: urlunparse() takes 1 positional argument but 6 were given
> >---
>
> >Proposed patch follows.
> >Thanks,
> >Von
>
> >Signed-off-by: vonhohenheiden <vonhohenheiden@tutanota.com>
> >---
> >offpunk.py | 3 +++
> >1 file changed, 3 insertions(+)
>
> >diff --git a/offpunk.py b/offpunk.py
> >index a76ac0a..6e82c72 100755
> >--- a/offpunk.py
> >+++ b/offpunk.py
> >@@ -945,6 +945,9 @@ Use 'ls -l' to see URLs."""
> > 
> >     def do_gus(self, line):
> >         """Submit a search query to the geminispace.info <http://geminispace.info> search engine."""
> >+        if not line:
> >+            print("What?")
> >+            return 
> >         self._go_to_url(urllib.parse.urlunparse("gemini","geminispace.info","/search","",line,""))
> > 
> >     def do_history(self, *args):
> >-- 
> >2.43.0
>
>
> --
> Ploum - Lionel Dricot
> Blog: https://www.ploum.net
> Livres: https://ploum.net/livres.html
>

From 8ffc15145bad3a74c7771d488df3cb751c4b8039 Mon Sep 17 00:00:00 2001
From: vonhohenheiden <vonhohenheiden@tutanota.com>
Date: Sun, 26 Nov 2023 07:38:19 +0100
Subject: [PATCH] fix crash on calling 'gus' without parameters

Signed-off-by: vonhohenheiden <vonhohenheiden@tutanota.com>
2023-11-26 20:51:50 +01:00
Ploum 233f237e15 added new_link to help theme 2023-11-25 11:44:16 +01:00
Ploum 1dddec0e86 switch the new_page duration from 60 to 600 seconds to catch the same reload 2023-11-24 11:10:03 +01:00
Ploum 6e09f0264b New theme option: new_link
In gemtext and RSS rendering, if a link points to a page which is
considered "new" (it has been cached less than 60 seconds after the
page itself), we display it differently (by default in bold white).

This feature allows you to quickly see new links in RSS pages or
aggregators such as Antenna.
2023-11-21 22:04:09 +01:00
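The heuristic can be sketched like this (a hypothetical helper using cache-file mtimes as stand-ins for cache timestamps; the real implementation may differ):

```python
import os

NEW_LINK_WINDOW = 60  # seconds; a later commit raises this to 600

def is_new_link(page_cache_file, link_cache_file, window=NEW_LINK_WINDOW):
    """A link counts as "new" if its cached copy appeared at most
    `window` seconds after the page that references it."""
    try:
        page_time = os.path.getmtime(page_cache_file)
        link_time = os.path.getmtime(link_cache_file)
    except OSError:
        return False  # one of the two is not cached yet
    return 0 <= link_time - page_time <= window
```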
Ploum b03cbd9c64 ansicat : added --mode option 2023-11-19 22:07:15 +01:00
Ploum 7e4bdd0601 Releasing 2.0 2023-11-16 12:00:00 +01:00
Ploum 5b5a2d6551 removing file from pyproject 2023-11-12 23:32:48 +01:00
Ploum 8ad571a269 remove timg from pyproject.toml 2023-11-12 22:50:46 +01:00
Ploum 512189256e updating README for 2.0 2023-11-12 15:51:48 +01:00
Ploum c9e5310a07 preparing changelog for 2.0 2023-11-12 15:13:27 +01:00
Ploum 63db80c7be introduced default_protocol option which defaults to gemini, solving #21 2023-11-12 15:01:58 +01:00
Ploum b5640cc474 typo reported on mastodon 2023-11-09 13:13:19 +01:00
Ploum e7831338de don’t crash ansicat if run without arguments 2023-11-09 12:46:47 +01:00
Ploum 5bf84c91fa documentation for ansicat/netcache/opnk 2023-11-09 12:36:32 +01:00
Ploum 24fe364f51 returns the good version number 2023-11-08 18:18:33 +01:00
Ploum 818257bcef renaming cache_migration to netcache_migration: fixes #25 2023-11-08 16:45:27 +01:00
Ploum ac78e85d04 Fixes bug #26
- make python-requests optional again
- reimplement --disable-http which had no effect
2023-11-08 16:37:13 +01:00
Ploum 8d082cb2df migration to hatchling and 2.0-beta2 release 2023-11-08 11:37:29 +01:00
Jean Abou Samra e1e25b9456 Simple packaging fix

Hi Ploum,

I'm the author of the "dépêche" you commented on here:

https://linuxfr.org/news/l-installation-et-la-distribution-de-paquets-python-1-4#comment-1940663

I skimmed your problem. I admit I have not read all the packaging-related
threads on this mailing list. If I understand correctly, you want
to keep using the (relatively non-standard) current setup of
Offpunk, namely a bunch of scripts at the root of the source tree.

You can't do that with Flit, because it's an opinionated tool
which insists that you do it the more standard way. So here's
a simple patch that does it with hatchling instead.

You might also want to review the dependency list in pyproject.toml.
I see "timg>=3D1.3.2", but the latest version of timg on PyPI
is 1.1.6.

Best,
Jean

From 90b3b2ab3700c57f76d3ae5760a4f49048bca70d Mon Sep 17 00:00:00 2001
From: Jean Abou Samra <jean@abou-samra.fr>
Date: Tue, 7 Nov 2023 23:16:43 +0100
Subject: [PATCH] Basic fix for packaging

Flit is not suitable for this project because it insists on packaging a
single package, while Ploum insists on keeping top-level scripts that
aren't inside a package. Use hatchling instead.

To test:

$ python -m venv temp-test-venv # may require "apt install python3-venv" or such
$ source temp-test-venv/bin/activate
$ pip install .[http]
$ offpunk
$ deactivate

To check the build artifacts:

$ python -m build # may require "apt install python3-build" or such
$ cd dist
$ tar -xvf offpunk-2.0b1.tar.gz # check files, this should contain everything
$ unzip offpunk-2.0b1-py3-none-any.whl # check files, this should contain only Python modules (plus a dist-info directory)
2023-11-08 11:34:03 +01:00
Ploum 51dc856161 small debug for netcache 2023-11-08 10:25:39 +01:00
Ploum 856b89ff45 fixes a crash when parsing invalid RSS dates 2023-11-03 23:01:46 +01:00
Ploum bf17b21b30 fixes hang/crash when meeting the ";" itemtype in gopher 2023-11-01 23:45:47 +01:00
Ploum 29c447cd8e Revert completely previous fix.
Instead, if we do not support an inline image format, we don’t display
it at all instead of displaying a fake URL
2023-10-20 00:06:59 +02:00
Ploum 979b80c5bd fixes a crash with data:image/svg+xml links 2023-10-20 00:02:35 +02:00
Ploum f05adf1b59 solve a crash with tour when argument and no url 2023-10-19 13:34:26 +02:00
Ploum 924eed3775 fixes a crash with some invalid URLs 2023-10-14 17:22:47 +02:00
Ploum 010288a6fb fixes input in Gemini 2023-10-12 15:23:59 +02:00
Ploum 2b234cdc43 initial tentative to support podcast RSS/atom feeds 2023-10-09 13:26:12 +02:00
Ploum deaa199303 gemtext renderer should be the default, not plaintextrenderer 2023-10-08 00:23:08 +02:00
Ploum 1baf311f2c new html parsing for titles 2023-10-07 23:54:58 +02:00
Ploum d50bc5a8e2 force closing html title elements 2023-10-07 23:45:01 +02:00
Ploum f6cb7723e1 experimental: new plaintext renderer. Also used to view source 2023-10-07 23:30:09 +02:00
Ploum c19576bc43 shame on me: never commit without having launched the thing 2023-10-05 18:22:10 +02:00
Ploum d50925ce03 should fix ~lioploum/offpunk#24 2023-10-05 18:17:01 +02:00
Ploum 5dd2238ef2 don’t crash if there’s no XMLParsedAsHTMLWarning in BS4 (as we are trying to avoid them anyway) 2023-10-05 14:27:58 +02:00
Ploum 4892b9e450 fixes a crash reported by Xavier Maillard for RSS feeds without link element 2023-10-01 14:04:17 +02:00
Lionel Dricot eeae7e3ad7 put blocked URLs in its own file to make contributions easier 2023-09-26 22:21:19 +02:00
Lionel Dricot 7ffbd1b288 adding a comment to understand what I did 2023-09-25 11:05:55 +02:00
Austreelis 39c5f17a39 Fix None prompt until manually changed
The GeminiClient constructor expected set_prompt to return the prompt,
while the function directly mutated it without returning anything.

This manifested as having "None" as the prompt instead of the default "ON>", until
entering the offline command.

This patch both fixes the constructor by not setting self.prompt to the
result of GeminiClient.set_prompt, *and* makes that function return the
prompt as well. Each of those is a separate hunk; feel free to only
apply whichever feels best (though applying both should prevent any
future mistake of the sort).

Signed-off-by: Austreelis <dev@austreelis.net>
2023-09-23 11:00:35 +02:00
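A heavily reduced, hypothetical sketch of the bug and fix (the prompt styling and class body are assumptions):

```python
class GeminiClient:
    """Before the fix, set_prompt() mutated self.prompt but returned
    None, so the constructor's assignment wiped the prompt back to
    None until set_prompt was called again (e.g. by the offline
    command)."""

    def __init__(self):
        # Safe now that set_prompt also returns the prompt.
        self.prompt = self.set_prompt("ON")

    def set_prompt(self, prompt):
        self.prompt = prompt + ">"
        return self.prompt  # the second half of the patch

client = GeminiClient()
assert client.prompt == "ON>"
```

Either half alone fixes the symptom; together, neither call style can reintroduce it.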
Lionel Dricot f8d185eac9 ignoring encoding errors in ansicat 2023-09-23 10:42:45 +02:00
Lionel Dricot 8c752d7b44 fixed an and/or logical confusion that would cause non-displayable documents to be marked as to be refetched even when not necessary 2023-09-19 14:24:18 +02:00
14 changed files with 765 additions and 343 deletions


@@ -1,14 +1,102 @@
# Offpunk History
## 2.0-beta2 - unreleased
Changes since beta1
## 2.3 - Unreleased
- offpunk/netcache: fix IPv6 as a URL (bug #40)
- ansicat: display empty files (instead of opening them with xdg-open)
- fix escape sequence warning in python 3.12 (by Étienne Mollier) (Debian #1064209)
- ansicat: fix crash when feedparser is crashing on bad RSS
- netcache: fix spartan protocol error
- opnk: fix a crash when caching returns None
## 2.2 - February 13th 2024
- cache folder is now configurable through the $OFFPUNK_CACHE_PATH environment variable (by prx)
- offpunk: adding a URL to a list now updates the view mode if the url is already present
- netcache: solve an infinite gemini loop with code 6X (see also bug #31)
- ansicat: added support for the <video> HTML element
- ansicat: if chafa fails to load an image, fall back to timg if available
- offpunk: add list autocompletion to "tour"
- offpunk: removed "blackbox", which has not been used nor maintained
- offpunk: "gus" was broken, it is functional again
- opnk/offpunk: more informative prompt in less
- ansicat: added support for the HTML description elements <dt> and <dd> (by Bert Livens)
- opnk: added "--mode" command-line argument (bug #39)
- offpunk: support for "preformatted" theming (bug #38)
- opnk/netcache: added "--cache-validity" command-line argument (bug #37)
- ansicat: consider files as XML, not SVG, if they don’t have a .svg extension
- offpunk: fix "view link" crashing with links to empty files
## 2.1 - December 15th 2023
- freshly updated gemtext/rss links are highlighted ("new_link" theme option)
- offpunk: new "copy title" and "copy link" functions
- offpunk: new "view XX" feature, where XX is a number, to view information about a link
- ansicat: added "--mode" option
- redirections are now reflected in links and the cache (bug #28)
- ansicat: avoid a crash when urllib.parse.urljoin fails
- offpunk: fix a crash when gus is called without parameters (Von Hohenheiden)
- ansicat: fixed a crash when parsing wrong hidden_url in gemini (bug #32)
- offpunk: offpunk --version doesn’t create the cache anymore (bug #27)
- ansicat: fix a crash with HTML without title (bug #33)
- netcache: gemini socket code can crash when IPv6 is disabled (mailing-list)
## 2.0 - November 16th 2023
Changes since 1.10
- IMPORTANT: Licence has been changed to AGPL for ideological reasons
- IMPORTANT: Contact address has been changed to offpunk2 on the same domain (because of spam)
- IMPORTANT: code has been split into several different files.
- IMPORTANT: migrating from flit to hatchling (patch by Jean Abou Samra)
Major features:
- New command-line tool: "netcache"
- New command-line tool: "ansicat"
- New command-line tool: "opnk"
- "theme" command allows customization of the colours
- "--config-file" allows starting offpunk with a custom config (#16)
- "view source" to view the source code of a page
- introduced the "default_protocol" option (defaults to gemini)
Improvements:
- Reading position is saved in less for the whole session
- Rendering is cached for the session, allowing faster browsing of a page already visited
- "redirect" supports domains starting with "*" to also block all subdomains
- "--images-mode" allows choosing at startup which images should be downloaded (none, readable, full)
- Support for embedded multi-format rendering (such as RSS feeds with html elements)
- The cache is now automatically upgraded if needed (see .version in your cache)
- Images of html files are now downloaded with the html (slower sync but better reading experience)
- "--sync" can optionally take some lists as arguments, in order to make a specific sync
- initial attempt to support podcasts in RSS/Atom feeds
Other notable changes from 1.X:
- "accept_bad_ssl_certificates" is now more aggressive for http and really accepts them all
- Gopher-only: we don’t support naming a page after the name of the incoming link
- Gemini-only: support for client-generated certificates has been removed
- "file" is now marked as a dependency (thanks Guillaume Loret)
## 2.0 (beta3 - final 2.0) - Released as 2.0
Changes since beta2:
- bug #25: makes python-requests optional again
- --disable-http had no effect: reimplemented
- introduced the "default_protocol" option (defaults to gemini) to enter URLs without the :// part (fixes bug #21)
## 2.0-beta2 - November 8th 2023
Changes since beta1
- IMPORTANT: migrating from flit to hatchling (patch by Jean Abou Samra)
- "--sync" can optionally take some lists as arguments, in order to make a specific sync
- "view source" to view the source code of a page
- initial attempt to support podcasts in RSS/Atom feeds
- new PlaintextRenderer which displays .txt files without any margin/color/linebreaks
- default URL blocked list is now its own file to make contributions easier
- prompt color is now part of the theme
- improves handling of base64 images
- fixes gophermaps being considered as gemtext files
- fixes opening mailto links
- fixes existing non-html resources marked as to_fetch even when not needed (simple and/or confusion)
- fixes a crash with RSS feeds without <link> element
- fixes a crash with data:image/svg+xml links
- fixes a bug in the HTML renderer where some hX elements were not closed properly
- fixes input in Gemini while online
- fixes a crash with invalid URLs
- fixes a crash while parsing invalid dates in RSS
- fixes hang/crash when meeting the ";" itemtype in gopher
- attempt at hiding XMLParsedAsHTMLWarning from the BS4 library
- chafa now used by default everywhere if version > 1.10
- ignoring encoding errors in ansicat
## 2.0-beta1 - September 05th 2023
This is an experimental release. Bug reports and feedback are welcome on the offpunk-devel list.


@@ -13,13 +13,13 @@ Offpunk is a fork of the original [AV-98](https://tildegit.org/solderpunk/AV-98)
## How to use
Offpunk is a single python file. Installation is optional, you can simply download and run "./offpunk.py" or "python3 offpunk.py" in a terminal.
Offpunk is a set of python files. Installation is optional, you can simply git clone the project and run "./offpunk.py" or "python3 offpunk.py" in a terminal.
You use the `go` command to visit a URL, e.g. `go gemini.circumlunar.space`. (gemini:// is assumed if no protocol is specified. Supported protocols are gemini, gopher, finger, http, https, mailto, spartan and file).
You use the `go` command to visit a URL, e.g. `go gemini.circumlunar.space`. (gemini:// is assumed if no protocol is specified. Supported protocols are gemini, gopher, finger, http, https, mailto, spartan and file. Default protocol is configurable).
Links in pages are assigned numerical indices. Just type an index to follow that link. If page is too long to fit on your screen, the content is displayed in the less pager (by default). Type `q` to quit and go back to Offpunk prompt. Type `view` or `v` to display it again. (`view full` or `v full` allows to see the full html page instead of the article view. `v feed` try to display the linked RSS feed and `v feeds` displays a list of available feeds. This only applies to html pages)
Links in pages are assigned numerical indices. Just type an index to follow that link. If the page is too long to fit on your screen, the content is displayed in the less pager. Type `q` to quit and go back to the Offpunk prompt. Type `view` or `v` to display it again. (`view full` or `v full` allows you to see the full html page instead of the article view. `v feed` tries to display the linked RSS feed and `v feeds` displays a list of available feeds. This only applies to html pages. `v source` allows you to see the source code of a page and `v normal` to go back to normal view)
Use `add` to add a capsule to your bookmarks and `bookmarks` or `bm` to show your bookmarks (you can create multiple bookmarks lists, edit and remove them. See the `list` manual with `help list`).
Use `add` to add a page to your bookmarks and `bookmarks` or `bm` to show your bookmarks (you can create multiple bookmarks lists, edit and remove them. See the `list` manual with `help list`).
Use `offline` to only browse cached content and `online` to go back online. While offline, the `reload` command will force a re-fetch during the next synchronisation.
@@ -35,6 +35,10 @@ For example, running
will refresh your bookmarks if those are at least 12h old. If cache-validity is not set or set to 0, any cache is considered good and only content never cached before will be fetched. `--assume-yes` will automatically accept SSL certificates with errors instead of refusing them.
Sync can be applied to only a subset of lists:
`offpunk --sync bookmarks tour to_fetch --cache-validity 3600`
Offpunk can also be configured as a browser by other tools. If you want to use offpunk directly with a given URL, simply type:
`offpunk URL`
@@ -53,19 +57,16 @@ Questions can be asked on the users mailing list:
## Dependencies
Offpunk has no "strict dependencies", i.e. it should run and work without anything
Offpunk has few "strict dependencies", i.e. it should run and work without anything
else beyond the Python standard library and the "less" pager. However, it will "opportunistically
import" a few other libraries if they are available to offer an improved
experience or some other features. Python libraries requests, bs4 and readability are required for http/html support. Images are displayed if chafa or timg are presents (python-pil is needed for chafa version before 1.10). When displaying only a picture (not inline), rendering will be pixel perfect in compatible terminals (such as Kitty) if chafa is at least version 1.8 or if timg is used.
experience or some other features such as HTTP/HTML or image support.
To avoid using unstable or too recent libraries, the rule of thumb is that a library should be packaged in Debian/Ubuntu. Keep in mind that Offpunk is mainly tested with all libraries installed. If you encounter a crash without one of the optional dependencies, please report it. Patches and contributions to remove dependencies or support alternatives are highly appreciated.
* [List of existing Offpunk packages (Repology)](https://repology.org/project/offpunk/versions)
* PIP: [requirements file to install dependencies with pip](requirements.txt)
* Debian Unstable: [Official Package by Étienne Mollier](https://packages.debian.org/sid/offpunk)
* Ubuntu/Debian: [command to install dependencies on Ubuntu/Debian without pip](ubuntu_dependencies.txt)
* Arch: [AUR package for Arch Linux, maintained by kseistrup](https://aur.archlinux.org/packages/offpunk-git)
* [Nix](https://nixos.org/): [package](https://github.com/NixOS/nixpkgs/blob/master/pkgs/applications/networking/browsers/offpunk/default.nix), maintained by [DamienCassou](https://github.com/DamienCassou)
* Alpine Linux: [package maintained by mio](https://pkgs.alpinelinux.org/packages?name=offpunk)
* Please contribute packages for other systems, there’s a [mailing-list dedicated to packaging](https://lists.sr.ht/~lioploum/offpunk-packagers).
Run command `version` in offpunk to see if you are missing some dependencies.
@@ -81,17 +82,22 @@ Dependencies to enable web browsing (packagers may put those in an offpunk-web m
* [BeautifulSoup4](https://www.crummy.com/software/BeautifulSoup) and [Readability](https://github.com/buriy/python-readability) are both needed to render HTML. Without them, HTML will not be rendered or be sent to an external parser like Lynx. (apt-get install python3-bs4 python3-readability or pip3 install readability-lxml)
* [Python-feedparser](https://github.com/kurtmckee/feedparser) will allow parsing of RSS/Atom feeds and thus subscriptions to them. (apt-get install python3-feedparser)
* [Chafa](https://hpjansson.org/chafa/) allows displaying pictures in your console. Install it and browse to an HTML page with pictures to see the magic.
* [Timg](https://github.com/hzeller/timg) is a slower alternative to chafa for inline images. But it has better rendering when displaying only the image. Install both to get the best of both worlds, but if you need to choose one, choose Chafa.
* [Python-pil](http://python-pillow.github.io/) is required to only display the first frame of animated gif with chafa if chafa version is lower than 1.10.
Gopher dependencies:
* [Python-chardet](https://github.com/chardet/chardet) is used to detect the character encoding on Gopher (and may be used more in the future)
Older dependencies which are only needed on older systems (where chafa < 1.10)
* [Timg](https://github.com/hzeller/timg) is a slower alternative to chafa for inline images. Might be deprecated in the future.
* [Python-pil](http://python-pillow.github.io/) is required to only display the first frame of animated gif with chafa if chafa version is lower than 1.10. Might be deprecated in the future.
Nice to have (packagers may make those optional):
* [Xsel](http://www.vergenet.net/~conrad/software/xsel/) allows you to `go` to the URL copied in the clipboard without having to paste it (both X and traditional clipboards are supported). Also needed to use the `copy` command. (apt-get install xsel)
* [Python-setproctitle](https://github.com/dvarrazzo/py-setproctitle) will change the process name from "python" to "offpunk". Useful to kill it without killing every python service.
* [Python-chardet](https://github.com/chardet/chardet) is used to detect the character encoding on Gopher (and may be used more in the future)
## Features
* Browse https/gemini/gopher/spartan without leaving your keyboard and without distractions
* Customize your experience with the `theme` command.
* Built-in documentation: type `help` to get the list of commands or help about a specific command.
* Offline mode to browse cached content without a connection. Requested elements are automatically fetched during the next synchronization and are added to your tour.
* HTML pages are prettified to focus on content. Read without being disturbed or see the full page with `view full`.
@ -102,16 +108,15 @@ Nice to have (packagers should may make those optional):
* Ability to specify external handler programs for different MIME types (use `handler`)
* Enhanced privacy with `redirect`, which allows you to block an HTTP domain or redirect all requests to a privacy-friendly frontend (such as Nitter for Twitter).
* Non-interactive cache-building with configurable depth through the --sync command. The cache can easily be used by other software.
* IPv6 support
* Supports any character encoding recognised by Python
* Cryptography: TOFU or CA server certificate validation
* Cryptography: extensive client certificate support if an `openssl` binary is available
* `netcache`, a standalone CLI tool to retrieve the cached version of a network resource.
* `ansicat`, a standalone CLI tool to render HTML/Gemtext/image in a terminal.
* `opnk`, a standalone CLI tool to open any kind of resource (local or network) and display it in your terminal or, if that is not possible, fall back to `xdg-open`.
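For inline images, ansicat keeps an ordered list of candidate render commands (chafa first, then timg) and tries each until one succeeds, instead of committing to a single renderer. A sketch of that fallback loop, using plain callables in place of the real shell commands:

```python
# Try each renderer in order; the first one that succeeds wins.
def render_with_fallback(renderers, img_file):
    for render in renderers:
        try:
            return render(img_file)
        except Exception:
            continue  # this renderer failed, try the next one
    return "***IMAGE ERROR***"  # every candidate failed

def chafa_like(img):   # stand-in for shelling out to chafa
    raise RuntimeError("chafa not available")

def timg_like(img):    # stand-in for shelling out to timg
    return "ansi art for " + img

result = render_with_fallback([chafa_like, timg_like], "cat.png")
```

Popping candidates from a list rather than branching on a single command is what lets chafa failures fall through to timg.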
## RC files
You can use an RC file to automatically run any sequence of valid Offpunk
commands upon start up. This can be used to make settings controlled with the
`set`, `handler` or `themes` commands persistent. You can also put a `go` command in
your RC file to visit a "homepage" automatically on startup, or to pre-prepare
a `tour` of your favourite Gemini sites or `offline` to go offline by default.
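As an illustration, a hypothetical two-line `offpunkrc`: the first command makes Offpunk start in offline mode, the second visits a "homepage" (the URL is a placeholder). Each line is an ordinary Offpunk command, executed in order at startup:

```
offline
go gemini://example.org/
```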
@ -121,5 +126,7 @@ The RC file should be called `offpunkrc` and goes in $XDG_CONFIG_DIR/offpunk (or
The offline content is stored in ~/.cache/offpunk/ as plain .gmi/.html files. The structure of the Gemini-space is tentatively recreated. One key element of the design is to avoid any database. The cache can thus be modified by hand, content can be removed, used or added by software other than offpunk.
The cache can be accessed/built with the `netcache` tool. See `netcache -h` for more information.
There's no feature to automatically trim the cache. But any part of the cache can safely be removed manually as there are no databases or complex synchronisation.
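The URL-to-path mapping can be approximated as follows. This is a simplification of `get_cache_path` in netcache, which additionally handles Gopher item types, query strings and the 260-character path limit:

```python
import os
import urllib.parse

def cache_path(url, cache_root="~/.cache/offpunk/"):
    # scheme and host become directories, the URL path follows
    parsed = urllib.parse.urlparse(url)
    path = cache_root + parsed.scheme + "/" + parsed.netloc + parsed.path
    # directory-like URLs get an index file so everything stays a plain file
    if path.endswith("/"):
        path += "index.gmi"
    return os.path.expanduser(path)
```

So `gemini://example.org/` would map to `~/.cache/offpunk/gemini/example.org/index.gmi`, which is why the cache can be browsed and edited with ordinary file tools.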

View File

@ -12,9 +12,9 @@ import mimetypes
import fnmatch
import netcache
import offthemes
from offutils import run,term_width,is_local,looks_like_base64
from offutils import run,term_width,is_local,looks_like_base64, looks_like_url
import base64
from offutils import _DATA_DIR
from offutils import xdg
try:
from readability import Document
_HAS_READABILITY = True
@ -24,21 +24,23 @@ except ModuleNotFoundError:
try:
from bs4 import BeautifulSoup
from bs4 import Comment
#if bs4 version >= 4.9.1, we need to silence some xml warnings
#if bs4 version >= 4.11, we need to silence some xml warnings
import bs4
version = bs4.__version__.split(".")
recent = False
if int(version[0]) > 4:
recent = True
elif int(version[0]) == 4:
if int(version[1]) > 9:
recent = True
elif int(version[1]) == 9:
recent = version[2] >= 1
recent = int(version[1]) >= 11
if recent:
from bs4 import XMLParsedAsHTMLWarning
import warnings
warnings.filterwarnings("ignore", category=XMLParsedAsHTMLWarning)
# As this is only for silencing some warnings, we fail
# silently. We don't really care
try:
from bs4 import XMLParsedAsHTMLWarning
import warnings
warnings.filterwarnings("ignore", category=XMLParsedAsHTMLWarning)
except:
pass
_HAS_SOUP = True
except ModuleNotFoundError:
_HAS_SOUP = False
@ -104,7 +106,8 @@ def inline_image(img_file,width):
if not os.path.exists(img_file):
return ""
#Chafa is faster than timg inline. Let's use that one by default
inline = None
#But we keep a list of "inlines" in case chafa fails
inlines = []
ansi_img = ""
#We avoid errors by not trying to render non-image files
if shutil.which("file"):
@ -118,32 +121,39 @@ def inline_image(img_file,width):
if hasattr(img_obj,"n_frames") and img_obj.n_frames > 1:
# we remove all frames but the first one
img_obj.save(img_file,format="gif",save_all=False)
inline = "chafa --bg white -s %s -f symbols"
inlines.append("chafa --bg white -s %s -f symbols")
elif _NEW_CHAFA:
inline = "chafa --bg white -t 1 -s %s -f symbols --animate=off"
if not inline and _NEW_TIMG:
inline = "timg --frames=1 -p q -g %sx1000"
if inline:
cmd = inline%width + " %s"
inlines.append("chafa --bg white -t 1 -s %s -f symbols --animate=off")
if _NEW_TIMG:
inlines.append("timg --frames=1 -p q -g %sx1000")
image_success = False
while not image_success and len(inlines)>0:
cmd = inlines.pop(0)%width + " %s"
try:
ansi_img = run(cmd, parameter=img_file)
image_success = True
except Exception as err:
ansi_img = "***image failed : %s***\n" %err
ansi_img = "***IMAGE ERROR***\n%s\n%s" %(str(err)[:50],str(err)[-50:])
return ansi_img
def terminal_image(img_file):
#Render by timg is better than old chafa.
# it is also centered
cmd = None
cmds = []
if _NEW_CHAFA:
cmd = "chafa -C on -d 0 --bg white -t 1 -w 1"
elif _NEW_TIMG:
cmd = "timg --loops=1 -C"
cmds.append("chafa -C on -d 0 --bg white -t 1 -w 1")
elif _HAS_CHAFA:
cmd = "chafa -d 0 --bg white -t 1 -w 1"
if cmd:
cmd = cmd + " %s"
run(cmd, parameter=img_file, direct_output=True)
cmds.append("chafa -d 0 --bg white -t 1 -w 1")
if _NEW_TIMG:
cmds.append("timg --loops=1 -C")
image_success = False
while not image_success and len(cmds) > 0:
cmd = cmds.pop(0) + " %s"
try:
run(cmd, parameter=img_file, direct_output=True)
image_success = True
except Exception as err:
print(err)
# First, we define the different content->text renderers, outside of the rest
@ -165,7 +175,10 @@ class AbstractRenderer():
def display(self,mode=None,directdisplay=False):
wtitle = self.get_formatted_title()
body = wtitle + "\n" + self.get_body(mode=mode)
if mode == "source":
body = self.body
else:
body = wtitle + "\n" + self.get_body(mode=mode)
if directdisplay:
print(body)
return True
@ -352,15 +365,28 @@ class AbstractRenderer():
# Beware, blocks are not wrapped nor indented and left untouched!
# They are mostly useful for pictures and preformatted text.
def add_block(self,intext):
def add_block(self,intext,theme=None):
# If necessary, we add the title before a block
self._title_first()
# we dont want to indent blocks
self._endline()
self._disable_indents()
self.final_text += self.current_indent + intext
self.new_paragraph = False
self._endline()
#we have to apply the theme for every line in the intext
#applying theme to preformatted is controversial as it could change it
if theme:
block = ""
lines = intext.split("\n")
for l in lines:
self.open_theme(theme)
self.last_line += self.current_indent + l
self.close_theme(theme)
self._endline()
self.last_line += "\n"
#one thing is sure : we need to keep unthemed blocks for images!
else:
self.final_text += self.current_indent + intext
self.new_paragraph = False
self._endline()
self._enable_indents()
def add_text(self,intext):
@ -507,8 +533,12 @@ class AbstractRenderer():
self.rendered_text[mode] += results[0] + "\n"
#we should absolutize all URLs here
for l in results[1]:
abs_l = urllib.parse.urljoin(self.url,l.split()[0])
self.links[mode].append(abs_l)
ll = l.split()[0]
try:
abs_l = urllib.parse.urljoin(self.url,ll)
self.links[mode].append(abs_l)
except Exception as err:
print("Urljoin Error: Could not make an URL out of %s and %s"%(self.url,ll))
for l in self.get_subscribe_links()[1:]:
self.links[mode].append(l[0])
@ -542,6 +572,30 @@ class AbstractRenderer():
# The prepare() function output a list of tuple. Each tuple is [output text, format] where
# format should be in _FORMAT_RENDERERS. If None, current renderer is used
class PlaintextRenderer(AbstractRenderer):
def get_mime(self):
return "text/plain"
def get_title(self):
if self.title:
return self.title
elif self.body:
lines = self.body.splitlines()
if len(lines) > 0:
# If not title found, we take the first 50 char
# of the first line
title_line = lines[0].strip()
if len(title_line) > 50:
title_line = title_line[:49] + "…"
self.title = title_line
return self.title
else:
self.title = "Empty Page"
return self.title
else:
return "(unknown)"
def render(self,gemtext, width=None,mode=None,startlinks=0):
return gemtext, []
# Gemtext Rendering Engine
class GemtextRenderer(AbstractRenderer):
def get_mime(self):
@ -567,7 +621,7 @@ class GemtextRenderer(AbstractRenderer):
self.title = "Empty Page"
return self.title
else:
return "Unknown Gopher Page"
return "(unknown)"
#render_gemtext
def render(self,gemtext, width=None,mode=None,startlinks=0):
@ -600,7 +654,7 @@ class GemtextRenderer(AbstractRenderer):
r.close_theme("preformatted")
elif preformatted:
# infinite line to not wrap preformated
r.add_block(line+"\n")
r.add_block(line+"\n",theme="preformatted")
elif len(line.strip()) == 0:
r.newparagraph(force=True)
elif line.startswith("=>"):
@ -613,7 +667,14 @@ class GemtextRenderer(AbstractRenderer):
if len(splitted) > 1:
name = splitted[1]
link = format_link(url,len(links)+startlinks,name=name)
if r.open_theme("oneline_link"):
# If the link points to a page that has been cached less than
# 600 seconds after this page, we consider it as a new_link
current_modif = netcache.cache_last_modified(self.url)
link_modif = netcache.cache_last_modified(url)
if current_modif and link_modif and current_modif - link_modif < 600 and\
r.open_theme("new_link"):
theme = "new_link"
elif r.open_theme("oneline_link"):
theme = "oneline_link"
else:
theme = "link"
@ -660,12 +721,19 @@ class GemtextRenderer(AbstractRenderer):
if "://" in line:
words = line.split()
for w in words:
if "://" in w:
if "://" in w and looks_like_url(w):
hidden_links.append(w)
r.add_text(line.rstrip())
links += hidden_links
return r.get_final(), links
class EmptyRenderer(GemtextRenderer):
def get_mime(self):
return "text/empty"
def prepare(self,body,mode=None):
text= "(empty file)"
return [[text, "GemtextRenderer"]]
class GopherRenderer(AbstractRenderer):
def get_mime(self):
return "text/gopher"
@ -738,7 +806,7 @@ class GopherRenderer(AbstractRenderer):
class FolderRenderer(GemtextRenderer):
#it was initialized with:
#self.renderer = FolderRenderer("",self.get_cache_path(),datadir=_DATA_DIR)
#self.renderer = FolderRenderer("",self.get_cache_path(),datadir=xdg("data"))
def __init__(self,content,url,center=True,datadir=None):
GemtextRenderer.__init__(self,content,url,center)
self.datadir = datadir
@ -809,10 +877,15 @@ class FeedRenderer(GemtextRenderer):
return "application/rss+xml"
def is_valid(self):
if _DO_FEED:
parsed = feedparser.parse(self.body)
try:
parsed = feedparser.parse(self.body)
except:
parsed = False
else:
return False
if parsed.bozo:
if not parsed:
return False
elif parsed.bozo:
return False
else:
#If no content, then fallback to HTML
@ -860,10 +933,26 @@ class FeedRenderer(GemtextRenderer):
self.validity = False
postslist = ""
for i in parsed.entries:
line = "=> %s " %i.link
if "link" in i:
line = "=> %s " %i.link
elif "links" in i and len(i.links) > 0:
link = None
j = 0
while not link and j < len(i.links):
link = i.links[j].href
j += 1
if link:
line = "=> %s "%link
else:
line = "* "
else:
line = "* "
if "published" in i:
pub_date = time.strftime("%Y-%m-%d",i.published_parsed)
line += pub_date + " : "
#sometimes fails so protect it
try:
pub_date = time.strftime("%Y-%m-%d",i.published_parsed)
line += pub_date + " : "
except:
pass
if "title" in i:
line += "%s" %(i.title)
if "author" in i:
@ -956,7 +1045,11 @@ class HtmlRenderer(AbstractRenderer):
except Exception as err:
pass
soup = BeautifulSoup(self.body,"html.parser")
self.title = str(soup.title.string)
if soup.title:
self.title = str(soup.title.string)
else:
self.title = ""
return self.title
else:
return ""
@ -1017,7 +1110,7 @@ class HtmlRenderer(AbstractRenderer):
toreturn = " " + toreturn
return toreturn
def recursive_render(element,indent="",preformatted=False):
if element.name == "blockquote":
if element.name in ["blockquote", "dd"]:
r.newparagraph()
r.startindent(" ",reverse=" ")
for child in element.children:
@ -1025,7 +1118,7 @@ class HtmlRenderer(AbstractRenderer):
recursive_render(child,indent="\t")
r.close_theme("blockquote")
r.endindent()
elif element.name in ["div","p"]:
elif element.name in ["div","p","dt"]:
r.newparagraph()
for child in element.children:
recursive_render(child,indent=indent)
@ -1043,18 +1136,19 @@ class HtmlRenderer(AbstractRenderer):
elif element.name in ["h4","h5","h6"]:
if not r.open_theme("subsubtitle"):
r.open_theme("subtitle")
r.newparagraph()
for child in element.children:
r.newparagraph()
recursive_render(child)
r.newparagraph()
r.close_all()
#r.close_all()
r.close_all()
r.newparagraph()
elif element.name in ["code","tt"]:
for child in element.children:
recursive_render(child,indent=indent,preformatted=True)
elif element.name in ["pre"]:
r.newparagraph()
r.add_block(element.text)
r.newparagraph()
r.add_block(element.text,theme="preformatted")
r.newparagraph(force=True)
elif element.name in ["li"]:
r.startindent("",sub=" ")
for child in element.children:
@ -1123,15 +1217,64 @@ class HtmlRenderer(AbstractRenderer):
if not mode in self.images:
self.images[mode] = []
abs_url,data = looks_like_base64(src,self.url)
links.append(abs_url+" "+text)
self.images[mode].append(abs_url)
#if abs_url is None, it means we don't support
#the image (such as svg+xml). So we hide it.
if abs_url:
links.append(abs_url+" "+text)
self.images[mode].append(abs_url)
link_id = " [%s]"%(len(links)+startlinks)
r.add_block(ansi_img)
r.open_theme("image_link")
r.center_line()
r.add_text(text + link_id)
r.close_theme("image_link")
r.newline()
elif element.name == "video":
poster = element.get("poster")
src = element.get("src")
for child in element.children:
if not src:
if child.name == "source":
src = child.get("src")
text = ""
if poster:
ansi_img = render_image(poster,width=width,mode=mode)
alt = element.get("alt")
if alt:
alt = sanitize_string(alt)
text += "[VIDEO] %s"%alt
else:
text += "[VIDEO]"
if poster:
if not mode in self.images:
self.images[mode] = []
poster_url,d = looks_like_base64(poster,self.url)
if poster_url:
vid_url,d2 = looks_like_base64(src,self.url)
self.images[mode].append(poster_url)
r.add_block(ansi_img)
r.open_theme("image_link")
r.center_line()
if vid_url and src:
links.append(vid_url+" "+text)
link_id = " [%s]"%(len(links)+startlinks)
r.add_text(text + link_id)
else:
r.add_text(text)
r.close_theme("image_link")
r.newline()
elif src:
vid_url,d = looks_like_base64(src,self.url)
links.append(vid_url+" "+text)
link_id = " [%s]"%(len(links)+startlinks)
r.add_block(ansi_img)
r.open_theme("image_link")
r.center_line()
r.add_text(text + link_id)
r.close_theme("image_link")
r.newline()
elif element.name == "br":
r.newline()
elif element.name not in ["script","style","template"] and type(element) != Comment:
@ -1174,17 +1317,23 @@ _FORMAT_RENDERERS = {
"text/gemini": GemtextRenderer,
"text/html" : HtmlRenderer,
"text/xml" : FeedRenderer,
"text/plain" : PlaintextRenderer,
"application/xml" : FeedRenderer,
"application/rss+xml" : FeedRenderer,
"application/atom+xml" : FeedRenderer,
"text/gopher": GopherRenderer,
"image/*": ImageRenderer,
"application/javascript": HtmlRenderer,
"application/json": HtmlRenderer,
"text/empty": EmptyRenderer,
}
def get_mime(path,url=None):
#Beware, this one is really a shady ad-hoc function
if not path:
return None
#If the file is empty, simply returns it
elif os.path.exists(path) and os.stat(path).st_size == 0:
return "text/empty"
elif url and url.startswith("gopher://"):
#special case for gopher
#code copy/pasted from netcache
@ -1201,7 +1350,7 @@ def get_mime(path,url=None):
mime = "text/gopher"
elif itemtype == "h":
mime = "text/html"
elif itemtype in ("9","g","I","s"):
elif itemtype in ("9","g","I","s",";"):
mime = "binary"
else:
mime = "text/gopher"
@ -1223,6 +1372,9 @@ def get_mime(path,url=None):
# If it's a xml file, consider it as such, regardless of what file thinks
elif path.endswith(".xml"):
mime = "text/xml"
# If it doesn't end with .svg, it is probably an xml, not a SVG file
elif "svg" in mime and not path.endswith(".svg"):
mime = "text/xml"
#Some xml/html documents are considered as octet-stream
if mime == "application/octet-stream":
mime = "text/xml"
@ -1237,6 +1389,10 @@ def get_mime(path,url=None):
else:
#by default, we consider its gemini except for html
mime = "text/gemini"
#file doesn't recognise gemtext. It should be the default renderer.
#the only case where it doesn't make sense is if the file is .txt
if mime == "text/plain" and not path.endswith(".txt"):
mime = "text/gemini"
return mime
def renderer_from_file(path,url=None,theme=None):
@ -1247,7 +1403,7 @@ def renderer_from_file(path,url=None,theme=None):
url = path
if os.path.exists(path):
if mime.startswith("text/") or mime in _FORMAT_RENDERERS:
with open(path) as f:
with open(path,errors="ignore") as f:
content = f.read()
f.close()
else:
@ -1260,7 +1416,7 @@ def renderer_from_file(path,url=None,theme=None):
def set_renderer(content,url,mime,theme=None):
renderer = None
if mime == "Local Folder":
renderer = FolderRenderer("",url,datadir=_DATA_DIR)
renderer = FolderRenderer("",url,datadir=xdg("data"))
if theme:
renderer.set_theme(theme)
return renderer
@ -1291,7 +1447,7 @@ def set_renderer(content,url,mime,theme=None):
renderer.set_theme(theme)
return renderer
def render(input,path=None,format="auto",mime=None,url=None):
def render(input,path=None,format="auto",mime=None,url=None,mode=None):
if not url: url = ""
else: url=url[0]
if format == "gemtext":
@ -1306,45 +1462,59 @@ def render(input,path=None,format="auto",mime=None,url=None):
r = ImageRenderer(input,url)
elif format == "folder":
r = FolderRenderer(input,url)
elif format in ["plaintext","text"]:
r = PlaintextRenderer(input,url)
else:
if not mime and path:
r= renderer_from_file(path,url)
else:
r = set_renderer(input,url,mime)
if r:
r.display(directdisplay=True)
r.display(directdisplay=True,mode=mode)
else:
print("Could not render %s"%input)
def main():
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--format", choices=["auto","gemtext","html","feed","gopher","image","folder"],
help="Renderer to use. Available: auto, gemtext, html, feed, gopher, image, folder")
descri = "ansicat is a terminal rendering tool that will render multiple formats (HTML, \
Gemtext, RSS, Gophermap, Image) into ANSI text and colors.\n\
When used on a file, ansicat will try to autodetect the format. When used with \
standard input, the format must be manually specified.\n\
If the content contains links, the original URL of the content can be specified \
in order to correctly resolve relative links."
parser = argparse.ArgumentParser(prog="ansicat",description=descri)
parser.add_argument("--format", choices=["auto","gemtext","html","feed","gopher","image","folder","text","plaintext"],
help="Renderer to use. Available: auto, gemtext, html, feed, gopher, image, folder, plaintext")
parser.add_argument("--mime", help="Mime of the content to parse")
## The argument needs to be a path to a file. If none, then stdin is used which allows
## to pipe text directly into ansirenderer
parser.add_argument("--url",metavar="URL", nargs="*",
help="Original URL of the content")
parser.add_argument("--mode", metavar="MODE",
help="Which mode should be used to render: normal (default), full or source.\
With HTML, the normal mode tries to extract the article.")
parser.add_argument("content",metavar="INPUT", nargs="*", type=argparse.FileType("r"),
default=sys.stdin, help="Path to the text to render (default to stdin)")
args = parser.parse_args()
# Detect if we are running interactively or in a pipe
if sys.stdin.isatty():
#we are interactive, not in stdin, we can have multiple files as input
for f in args.content:
path = os.path.abspath(f.name)
try:
content = f.read()
except UnicodeDecodeError:
content = f
render(content,path=path,format=args.format,url=args.url,mime=args.mime)
if isinstance(args.content,list):
for f in args.content:
path = os.path.abspath(f.name)
try:
content = f.read()
except UnicodeDecodeError:
content = f
render(content,path=path,format=args.format,url=args.url,mime=args.mime,mode=args.mode)
else:
print("Ansicat needs at least one file as an argument")
else:
#we are in stdin
if not args.format and not args.mime:
print("Format or mime should be specified when running with stdin")
else:
render(args.content.read(),path=None,format=args.format,url=args.url,mime=args.mime)
render(args.content.read(),path=None,format=args.format,url=args.url,mime=args.mime,mode=args.mode)
if __name__ == '__main__':
main()

View File

@ -54,6 +54,8 @@ either thanks to the MIME type,
or from the file being rendered itself.
.It Fl \-mime Ar MIME
MIME type of the content to parse.
.It Fl \-mode Ar MODE
MODE used for rendering; choose between normal (default), full or source
.It Fl \-url Ar URL ...
original URL of the content.
.El

View File

@ -27,6 +27,15 @@ otherwise it would always refresh it from the version available online.
It is also useful for mapping a given URL to its location in the cache,
independently of whether it has been downloaded first.
.Pp
Default cache path is
.Pa ~/.cache/offpunk .
Set
.Ev OFFPUNK_CACHE_PATH
environment variable to use another location.
.Bd -literal
OFFPUNK_CACHE_PATH=/home/ploum/custom-cache netcache.py gemini://some.url
.Ed
.Pp
.Xr Offpunk 1
is a command-line browser and feed reader dedicated to browsing the Web,
Gemini, Gopher and Spartan.
@ -47,6 +56,8 @@ The value is expressed in megabytes.
.It Fl \-timeout Ar TIMEOUT
time to wait before cancelling connection.
The value is expressed in seconds.
.It Fl \-cache-validity CACHE_VALIDITY
Maximum age (in seconds) of the cached version before redownloading a new version.
.El
.
.Sh EXIT STATUS

View File

@ -37,6 +37,10 @@ path to the file or URL to open.
.Bl -tag -width Ds -offset indent
.It Fl h , \-help
Show a help message and exit
.It Fl \-mode Ar MODE
MODE used for rendering; choose between normal (default), full or source
.It Fl \-cache-validity CACHE_VALIDITY
Maximum age (in seconds) of the cached version before redownloading a new version.
.El
.
.Sh EXIT STATUS

View File

@ -3,7 +3,6 @@ import os
import sys
import urllib.parse
import argparse
import requests
import codecs
import getpass
import socket
@ -15,7 +14,7 @@ import sqlite3
from ssl import CertificateError
import ansicat
import offutils
from offutils import _CACHE_PATH,_DATA_DIR,_CONFIG_DIR
from offutils import xdg
import time
try:
import chardet
@ -30,10 +29,11 @@ try:
_BACKEND = default_backend()
except(ModuleNotFoundError,ImportError):
_HAS_CRYPTOGRAPHY = False
if not os.path.exists(_CACHE_PATH):
print("Creating cache directory {}".format(_CACHE_PATH))
os.makedirs(_CACHE_PATH)
try:
import requests
_DO_HTTP = True
except (ModuleNotFoundError,ImportError):
_DO_HTTP = False
# This list is also used as a list of supported protocols
standard_ports = {
@ -87,10 +87,9 @@ def cache_last_modified(url):
if not url:
return None
path = get_cache_path(url)
if path:
if path and os.path.isfile(path):
return os.path.getmtime(path)
else:
print("ERROR :NOCACHE in cache_last_modified")
return None
def is_cache_valid(url,validity=0):
@ -122,9 +121,10 @@ def is_cache_valid(url,validity=0):
#Theres not even a cache!
return False
def get_cache_path(url):
def get_cache_path(url,add_index=True):
# Sometimes, cache_path became a folder! (which happens for index.html/index.gmi)
# In that case, we need to reconstruct it
# if add_index=False, we don't add that "index.gmi" at the end of the cache_path
#First, we parse the URL
if not url:
return None
@ -145,7 +145,7 @@ def get_cache_path(url):
elif scheme == "mailto":
path = parsed.path
elif url.startswith("list://"):
listdir = os.path.join(_DATA_DIR,"lists")
listdir = os.path.join(xdg("data"),"lists")
listname = url[7:].lstrip("/")
if listname in [""]:
name = "My Lists"
@ -174,7 +174,7 @@ def get_cache_path(url):
mime = "text/gopher"
elif itemtype == "h":
mime = "text/html"
elif itemtype in ("9","g","I","s"):
elif itemtype in ("9","g","I","s",";"):
mime = "binary"
else:
mime = "text/gopher"
@ -191,11 +191,11 @@ def get_cache_path(url):
if local:
cache_path = path
elif scheme and host:
cache_path = os.path.expanduser(_CACHE_PATH + scheme + "/" + host + path)
cache_path = os.path.expanduser(xdg("cache") + scheme + "/" + host + path)
#There's an OS limitation of 260 characters per path.
#We will thus cut the path enough to add the index afterward
cache_path = cache_path[:249]
# FIXME : this is a gross hack to give a name to
# this is a gross hack to give a name to
# index files. This will break if the index is not
# index.gmi. I don't know how to know the real name
# of the file. But first, we need to ensure that the domain name
@ -213,12 +213,12 @@ def get_cache_path(url):
cache_path += "/"
if not url.endswith("/"):
url += "/"
if cache_path.endswith("/"):
if add_index and cache_path.endswith("/"):
cache_path += index
#sometimes, the index itself is a dir
#like when folder/index.gmi?param has been created
#and we try to access folder
if os.path.isdir(cache_path):
if add_index and os.path.isdir(cache_path):
cache_path += "/" + index
else:
#URL is missing either a supported scheme or a valid host
@ -268,7 +268,7 @@ def set_error(url,err):
cache = get_cache_path(url)
if is_cache_valid(url):
os.utime(cache)
else:
elif cache:
cache_dir = os.path.dirname(cache)
root_dir = cache_dir
while not os.path.exists(root_dir):
@ -290,6 +290,7 @@ def set_error(url,err):
return cache
def _fetch_http(url,max_size=None,timeout=DEFAULT_TIMEOUT,accept_bad_ssl_certificates=False,**kwargs):
if not _DO_HTTP: return None
def too_large_error(url,length,max_size):
err = "Size of %s is %s Mo\n"%(url,length)
err += "Offpunk only download automatically content under %s Mo\n" %(max_size/1000000)
@ -368,10 +369,11 @@ def _fetch_gopher(url,timeout=DEFAULT_TIMEOUT,**kwargs):
request = selector
request += "\r\n"
s.sendall(request.encode("UTF-8"))
response = s.makefile("rb").read()
response1 = s.makefile("rb")
response = response1.read()
# Transcode response into UTF-8
#if itemtype in ("0","1","h"):
if not itemtype in ("9","g","I","s"):
if not itemtype in ("9","g","I","s",";"):
# Try most common encodings
for encoding in ("UTF-8", "ISO-8859-1"):
try:
@ -392,7 +394,7 @@ def _fetch_gopher(url,timeout=DEFAULT_TIMEOUT,**kwargs):
mime = "text/gopher"
elif itemtype == "h":
mime = "text/html"
elif itemtype in ("9","g","I","s"):
elif itemtype in ("9","g","I","s",";"):
mime = None
else:
# by default, we should consider Gopher
@ -498,7 +500,7 @@ def _validate_cert(address, host, cert,accept_bad_ssl=False,automatic_choice=Non
sha.update(cert)
fingerprint = sha.hexdigest()
db_path = os.path.join(_CONFIG_DIR, "tofu.db")
db_path = os.path.join(xdg("config"), "tofu.db")
db_conn = sqlite3.connect(db_path)
db_cur = db_conn.cursor()
@ -528,7 +530,7 @@ def _validate_cert(address, host, cert,accept_bad_ssl=False,automatic_choice=Non
db_conn.commit()
break
else:
certdir = os.path.join(_CONFIG_DIR, "cert_cache")
certdir = os.path.join(xdg("config"), "cert_cache")
with open(os.path.join(certdir, most_frequent_cert+".crt"), "rb") as fp:
previous_cert = fp.read()
if _HAS_CRYPTOGRAPHY:
@ -571,7 +573,7 @@ def _validate_cert(address, host, cert,accept_bad_ssl=False,automatic_choice=Non
VALUES (?, ?, ?, ?, ?, ?)""",
(host, address, fingerprint, now, now, 1))
db_conn.commit()
certdir = os.path.join(_CONFIG_DIR, "cert_cache")
certdir = os.path.join(xdg("config"), "cert_cache")
if not os.path.exists(certdir):
os.makedirs(certdir)
with open(os.path.join(certdir, fingerprint+".crt"), "wb") as fp:
@ -580,6 +582,7 @@ def _validate_cert(address, host, cert,accept_bad_ssl=False,automatic_choice=Non
def _fetch_gemini(url,timeout=DEFAULT_TIMEOUT,interactive=True,accept_bad_ssl_certificates=False,\
**kwargs):
cache = None
newurl = url
url_parts = urllib.parse.urlparse(url)
host = url_parts.hostname
port = url_parts.port or standard_ports["gemini"]
@ -631,10 +634,10 @@ def _fetch_gemini(url,timeout=DEFAULT_TIMEOUT,interactive=True,accept_bad_ssl_ce
# Connect to remote host by any address possible
err = None
for address in addresses:
s = socket.socket(address[0], address[1])
s.settimeout(timeout)
s = context.wrap_socket(s, server_hostname = host)
try:
s = socket.socket(address[0], address[1])
s.settimeout(timeout)
s = context.wrap_socket(s, server_hostname = host)
s.connect(address[4])
break
except OSError as e:
@ -653,6 +656,9 @@ def _fetch_gemini(url,timeout=DEFAULT_TIMEOUT,interactive=True,accept_bad_ssl_ce
# Send request and wrap response in a file descriptor
url = urllib.parse.urlparse(url)
new_netloc = host
#Handle IPV6 hostname
if ":" in new_netloc:
new_netloc = "[" + new_netloc + "]"
if port != standard_ports["gemini"]:
new_netloc += ":" + str(port)
url = urllib.parse.urlunparse(url._replace(netloc=new_netloc))
@ -688,9 +694,10 @@ def _fetch_gemini(url,timeout=DEFAULT_TIMEOUT,interactive=True,accept_bad_ssl_ce
else:
#TODO:FIXME we should not ask for user input while non-interactive
user_input = input("> ")
return _fetch_gemini(query(user_input))
newurl = url.split("?")[0]
return _fetch_gemini(newurl+"?"+user_input)
else:
return None
return None,None
# Redirects
elif status.startswith("3"):
newurl = urllib.parse.urljoin(url,meta)
@ -727,9 +734,9 @@ def _fetch_gemini(url,timeout=DEFAULT_TIMEOUT,interactive=True,accept_bad_ssl_ce
raise RuntimeError(meta)
# Client cert
elif status.startswith("6"):
print("Handling certificates for status 6X are not supported by offpunk\n")
print("Please open a bug report")
_fetch_gemini(url)
error = "Handling certificates for status 6X are not supported by offpunk\n"
error += "See bug #31 for discussion about the problem"
raise RuntimeError(error)
# Invalid status
elif not status.startswith("2"):
raise RuntimeError("Server returned undefined status code %s!" % status)
@ -760,16 +767,21 @@ def _fetch_gemini(url,timeout=DEFAULT_TIMEOUT,interactive=True,accept_bad_ssl_ce
else:
body = fbody
cache = write_body(url,body,mime)
return cache
return cache,newurl
def fetch(url,offline=False,download_image_first=True,images_mode="readable",validity=0,**kwargs):
url = normalize_url(url)
newurl = url
path=None
print_error = "print_error" in kwargs.keys() and kwargs["print_error"]
if is_cache_valid(url,validity=validity):
path = get_cache_path(url)
#First, we check whether we have a valid cache, even if offline
#If we are offline, any cache is better than nothing
if is_cache_valid(url,validity=validity) or (offline and is_cache_valid(url,validity=0)):
path = get_cache_path(url)
#if the cache is a folder, we should add a "/" at the end of the URL
if not url.endswith("/") and os.path.isdir(get_cache_path(url,add_index=False)) :
newurl = url+"/"
elif offline and is_cache_valid(url,validity=0):
path = get_cache_path(url)
elif "://" in url and not offline:
@ -780,17 +792,22 @@ def fetch(url,offline=False,download_image_first=True,images_mode="readable",val
print("%s is not a supported protocol"%scheme)
path = None
elif scheme in ("http","https"):
path=_fetch_http(url,**kwargs)
if _DO_HTTP:
path=_fetch_http(url,**kwargs)
else:
print("HTTP requires python-requests")
elif scheme == "gopher":
path=_fetch_gopher(url,**kwargs)
elif scheme == "finger":
path=_fetch_finger(url,**kwargs)
elif scheme == "gemini":
path=_fetch_gemini(url,**kwargs)
path,newurl=_fetch_gemini(url,**kwargs)
elif scheme == "spartan":
path,newurl=_fetch_spartan(url,**kwargs)
else:
print("scheme %s not implemented yet")
print("scheme %s not implemented yet"%scheme)
except UserAbortException:
return
return None, newurl
except Exception as err:
cache = set_error(url, err)
# Print an error message
@ -813,13 +830,13 @@ def fetch(url,offline=False,download_image_first=True,images_mode="readable",val
print("""ERROR5: Trying to create a directory which already exists
in the cache : """)
print(err)
elif isinstance(err,requests.exceptions.SSLError):
elif _DO_HTTP and isinstance(err,requests.exceptions.SSLError):
if print_error:
print("""ERROR6: Bad SSL certificate:\n""")
print(err)
print("""\n If you know what you are doing, you can try to accept bad certificates with the following command:\n""")
print("""set accept_bad_ssl_certificates True""")
elif isinstance(err,requests.exceptions.ConnectionError):
elif _DO_HTTP and isinstance(err,requests.exceptions.ConnectionError):
if print_error:
print("""ERROR7: Cannot connect to URL:\n""")
print(str(err))
@ -829,10 +846,10 @@ def fetch(url,offline=False,download_image_first=True,images_mode="readable",val
print("ERROR4: " + str(type(err)) + " : " + str(err))
#print("\n" + str(err.with_traceback(None)))
print(traceback.format_exc())
return cache
return cache, newurl
# We download images contained in the document (from full mode)
if not offline and download_image_first and images_mode:
renderer = ansicat.renderer_from_file(path,url)
renderer = ansicat.renderer_from_file(path,newurl)
if renderer:
for image in renderer.get_images(mode=images_mode):
#Image should exist, should be an url (not a data image)
@ -847,13 +864,17 @@ def fetch(url,offline=False,download_image_first=True,images_mode="readable",val
#if that ever happen
fetch(image,offline=offline,download_image_first=False,\
images_mode=None,validity=0,**kwargs)
return path
return path, newurl
def main():
descri="Netcache is a command-line tool to retrieve, cache and access networked content.\n\
By default, netcache will return a cached version of a given URL, downloading it \
only if it does not exist. A validity duration, in seconds, can also be given so that \
netcache downloads the content only if the existing cache is older than the validity."
# Parse arguments
parser = argparse.ArgumentParser(description=__doc__)
parser = argparse.ArgumentParser(prog="netcache",description=descri)
parser.add_argument("--path", action="store_true",
help="return path to the cache instead of the content of the cache")
parser.add_argument("--offline", action="store_true",
@ -862,11 +883,12 @@ def main():
help="Cancel download of items above that size (value in Mb).")
parser.add_argument("--timeout", type=int,
help="Time to wait before cancelling connection (in seconds).")
parser.add_argument("--cache-validity",type=int, default=0,
help="maximum age, in seconds, of the cached version before \
redownloading a new version")
# No argument: write help
parser.add_argument('url', metavar='URL', nargs='*',
help='download the URL and return the content or the path to a cached version')
# arg = URL: download and returns cached URI
# --cache-validity : do not download if cache is valid
# --validity : returns the date of the cached version, Null if no version
# --force-download : download and replace cache, even if valid
args = parser.parse_args()
@ -877,8 +899,8 @@ def main():
if args.offline:
path = get_cache_path(u)
else:
print("Download URL: %s" %u)
path = fetch(u,max_size=args.max_size,timeout=args.timeout)
path,url = fetch(u,max_size=args.max_size,timeout=args.timeout,\
validity=args.cache_validity)
if args.path:
print(path)
else:

offblocklist.py (new file)

@ -0,0 +1,33 @@
# The following are the default redirections from Offpunk
# These are the defaults because they should make sense with offpunk
redirects = {
"*twitter.com" : "nitter.net",
"youtube.com" : "yewtu.be",
"youtu.be" : "yewtu.be",
"*reddit.com" : "teddit.net",
"*medium.com" : "scribe.rip",
}
#The following are blocked URLs. Visiting them with offpunk doesn't make sense.
#Blocking them will save a lot of bandwidth
blocked = {
"*facebook.com",
"*facebook.net",
"*fbcdn.net",
"*linkedin.com",
"*licdn.com",
"*admanager.google.com",
"*google-health-ads.blogspot.com",
"*firebase.google.com",
"*google-webfonts-helper.herokuapp.com",
"*tiktok.com" ,
"*doubleclick.net",
"*google-analytics.com" ,
"*ads.yahoo.com",
"*advertising.amazon.com",
"*advertising.theguardian.com",
"*advertise.newrepublic.com",
}
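The matching logic for these patterns lives elsewhere in offpunk and is not part of this file. As a rough sketch only — assuming a leading `*` means "this domain and any subdomain", which this diff does not show — a lookup could work like this (`lookup` is a hypothetical helper, not offpunk's API):

```python
# Hypothetical sketch of how offblocklist-style patterns could be matched.
redirects = {
    "*twitter.com": "nitter.net",
    "youtube.com": "yewtu.be",
}
blocked = {"*facebook.com"}

def lookup(netloc, redirects, blocked):
    """Return a replacement host, the string "blocked", or None."""
    for pattern, target in redirects.items():
        if pattern.startswith("*"):
            base = pattern[1:]
            # "*twitter.com" matches twitter.com and any subdomain
            if netloc == base or netloc.endswith("." + base):
                return target
        elif netloc == pattern:
            return target
    for pattern in blocked:
        base = pattern.lstrip("*")
        if netloc == base or netloc.endswith("." + base):
            return "blocked"
    return None
```

This mirrors how offpunk merges `blocked` into `redirects` with the value `"blocked"` later in this diff, so a single lookup covers both cases.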

offpunk.py

@ -4,8 +4,9 @@
Offline-First Gemini/Web/Gopher/RSS reader and browser
"""
__version__ = "2.0-beta1"
__version__ = "2.2"
## Initial imports and conditional imports {{{
import argparse
import cmd
import datetime
@ -25,22 +26,17 @@ import netcache
import opnk
import ansicat
import offthemes
from offutils import run,term_width,is_local,mode_url,unmode_url
from offutils import _CONFIG_DIR,_DATA_DIR,_CACHE_PATH
from offutils import run,term_width,is_local,mode_url,unmode_url, looks_like_url
from offutils import xdg
import offblocklist
try:
import setproctitle
setproctitle.setproctitle("offpunk")
_HAS_SETPROCTITLE = True
except ModuleNotFoundError:
_HAS_SETPROCTITLE = False
_HAS_XSEL = shutil.which('xsel')
try:
import requests
_DO_HTTP = True
except ModuleNotFoundError:
_DO_HTTP = False
## }}} end of imports
# Command abbreviations
_ABBREVS = {
@ -81,47 +77,6 @@ _ABBREVS = {
_MIME_HANDLERS = {
}
#An IPV6 URL should be put between []
#We try to detect them as locations with more than 2 ":"
def fix_ipv6_url(url):
if not url or url.startswith("mailto"):
return url
if "://" in url:
schema, schemaless = url.split("://",maxsplit=1)
else:
schema, schemaless = None, url
if "/" in schemaless:
netloc, rest = schemaless.split("/",1)
if netloc.count(":") > 2 and "[" not in netloc and "]" not in netloc:
schemaless = "[" + netloc + "]" + "/" + rest
elif schemaless.count(":") > 2:
schemaless = "[" + schemaless + "]/"
if schema:
return schema + "://" + schemaless
return schemaless
# Cheap and cheerful URL detector
def looks_like_url(word):
try:
if not word.strip():
return False
url = fix_ipv6_url(word).strip()
parsed = urllib.parse.urlparse(url)
#sometimes, urllib crashes only when requesting the port
port = parsed.port
scheme = word.split("://")[0]
mailto = word.startswith("mailto:")
start = scheme in netcache.standard_ports
local = scheme in ["file","list"]
if mailto:
return "@" in word
elif not local:
return start and ("." in word or "localhost" in word)
else:
return "/" in word
except ValueError:
return False
# GeminiClient Decorators
def needs_gi(inner):
def outer(self, *args, **kwargs):
@ -142,7 +97,7 @@ class GeminiClient(cmd.Cmd):
os.umask(0o077)
self.opencache = opnk.opencache()
self.theme = offthemes.default
self.prompt = self.set_prompt("ON")
self.set_prompt("ON")
self.current_url = None
self.hist_index = 0
self.marks = {}
@ -151,7 +106,7 @@ class GeminiClient(cmd.Cmd):
# Sync-only mode is restricted by design
self.offline_only = False
self.sync_only = False
self.support_http = _DO_HTTP
self.support_http = netcache._DO_HTTP
self.automatic_choice = "n"
self.client_certs = {
"active": None
@ -177,22 +132,11 @@ class GeminiClient(cmd.Cmd):
"wikipedia" : "gemini://vault.transjovian.org:1965/search/%s/%s",
"search" : "gemini://kennedy.gemi.dev/search?%s",
"accept_bad_ssl_certificates" : False,
"default_protocol" : "gemini",
}
self.redirects = {
"*twitter.com" : "nitter.42l.fr",
"*facebook.com" : "blocked",
"*tiktok.com" : "blocked",
"*doubleclick.net": "blocked",
"*google-analytics.com" : "blocked",
"youtube.com" : "yewtu.be",
"*reddit.com" : "teddit.net",
"*medium.com" : "scribe.rip",
"*admanager.google.com": "blocked",
"*google-health-ads.blogspot.com": "blocked",
"*firebase.google.com": "blocked",
"*google-webfonts-helper.herokuapp.com": "blocked",
}
self.redirects = offblocklist.redirects
for i in offblocklist.blocked:
self.redirects[i] = "blocked"
term_width(new_width=self.options["width"])
self.log = {
"start_time": time.time(),
@ -222,6 +166,7 @@ class GeminiClient(cmd.Cmd):
self.prompt = "\001\x1b[%sm\002"%open_color + prompt + "\001\x1b[%sm\002"%close_color + "> "
#support for 256 color mode:
#self.prompt = "\001\x1b[38;5;76m\002" + "ON" + "\001\x1b[38;5;255m\002" + "> " + "\001\x1b[0m\002"
return self.prompt
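The `\001`/`\002` pairs wrapping the ANSI codes are GNU readline's prompt-ignore markers: they tell readline that the enclosed escape sequences occupy zero columns, so line editing does not misplace the cursor. A standalone sketch of the same construction (`colored_prompt` is an illustrative name, not offpunk's):

```python
# \001 ... \002 mark zero-width spans for readline's cursor accounting,
# so the ANSI color codes inside them don't confuse line editing.
def colored_prompt(text, color="32"):
    return "\001\x1b[%sm\002" % color + text + "\001\x1b[0m\002" + "> "
```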
def complete_list(self,text,line,begidx,endidx):
allowed = []
@ -261,6 +206,8 @@ class GeminiClient(cmd.Cmd):
return [i+" " for i in allowed if i.startswith(text)]
def complete_move(self,text,line,begidx,endidx):
return self.complete_add(text,line,begidx,endidx)
def complete_tour(self,text,line,begidx,endidx):
return self.complete_add(text,line,begidx,endidx)
def complete_theme(self,text,line,begidx,endidx):
elements = offthemes.default
@ -340,11 +287,11 @@ class GeminiClient(cmd.Cmd):
params["validity"] = 60
# Use cache or mark as to_fetch if resource is not cached
if handle and not self.sync_only:
displayed = self.opencache.opnk(url,mode=mode,grep=grep,theme=self.theme,**params)
displayed, url = self.opencache.opnk(url,mode=mode,grep=grep,theme=self.theme,**params)
modedurl = mode_url(url,mode)
if not displayed:
#if we cant display, we mark to sync what is not local
if not is_local(url) or not netcache.is_cache_valid(url):
if not is_local(url) and not netcache.is_cache_valid(url):
self.get_list("to_fetch")
r = self.list_add_line("to_fetch",url=modedurl,verbose=False)
if r:
@ -354,12 +301,13 @@ class GeminiClient(cmd.Cmd):
else:
self.page_index = 0
# Update state (external files are not added to history)
self.current_url = url
self.current_url = modedurl
if update_hist and not self.sync_only:
self._update_history(modedurl)
else:
#we are asked not to handle or in sync_only mode
netcache.fetch(url,**params)
if self.support_http or not parsed.scheme in ["http","https"] :
netcache.fetch(url,**params)
@needs_gi
def _show_lookup(self, offset=0, end=None, show_url=False):
@ -514,7 +462,7 @@ class GeminiClient(cmd.Cmd):
"theme ELEMENT COLOR"
ELEMENT is one of: window_title, window_subtitle, title,
subtitle,subsubtitle,link,oneline_link,image_link,preformatted,blockquote.
subtitle,subsubtitle,link,oneline_link,new_link,image_link,preformatted,blockquote.
COLOR is one or many (separated by space) of: bold, faint, italic, underline, black,
red, green, yellow, blue, purple, cyan, white.
@ -608,10 +556,13 @@ Each color can alternatively be prefaced with "bright_"."""
print("Already online. Try offline.")
def do_copy(self, arg):
"""Copy the content of the last visited page as gemtext in the clipboard.
"""Copy the content of the last visited page as gemtext/html in the clipboard.
Use with "url" as argument to only copy the address.
Use with "raw" to copy ANSI content as seen in your terminal (not gemtext).
Use with "cache" to copy the path of the cached content."""
Use with "raw" to copy ANSI content as seen in your terminal (with colour codes).
Use with "cache" to copy the path of the cached content.
Use with "title" to copy the title of the page.
Use with "link" to copy a link in the gemtext format to that page with the title.
"""
if self.current_url:
if _HAS_XSEL:
args = arg.split()
@ -620,6 +571,7 @@ Use with "cache" to copy the path of the cached content."""
url = self.get_renderer().get_link(int(args[1])-1)
else:
url,mode = unmode_url(self.current_url)
print(url)
run("xsel -b -i", input=url, direct_output=True)
elif args and args[0] == "raw":
tmp = self.opencache.get_temp_filename(self.current_url)
@ -629,6 +581,15 @@ Use with "cache" to copy the path of the cached content."""
elif args and args[0] == "cache":
run("xsel -b -i", input=netcache.get_cache_path(self.current_url),\
direct_output=True)
elif args and args[0] == "title":
title = self.get_renderer().get_page_title()
run("xsel -b -i",input=title, direct_output=True)
print(title)
elif args and args[0] == "link":
link = "=> %s %s"%(unmode_url(self.current_url)[0],\
self.get_renderer().get_page_title())
print(link)
run("xsel -b -i", input=link,direct_output=True)
else:
run("xsel -b -i", input=open(netcache.get_cache_path(self.current_url), "rb"),\
direct_output=True)
@ -681,6 +642,9 @@ Use with "cache" to copy the path of the cached content."""
# If this isn't a mark, treat it as a URL
elif looks_like_url(line):
self._go_to_url(line)
elif "://" not in line and "default_protocol" in self.options.keys()\
and looks_like_url(self.options["default_protocol"]+"://"+line):
self._go_to_url(self.options["default_protocol"]+"://"+line)
else:
print("%s is not a valid URL to go"%line)
@ -795,7 +759,7 @@ Current tour can be listed with `tour ls` and scrubbed with `tour clear`."""
display = not self.sync_only
for l in self.get_renderer(url).get_links():
self.list_add_line("tour",url=l,verbose=False)
else:
elif self.current_url:
for index in line.split():
try:
pair = index.split('-')
@ -887,7 +851,7 @@ Marks are temporary until shutdown (not saved to disk)."""
output += " - python-cryptography : " + has(netcache._HAS_CRYPTOGRAPHY)
output += " - xdg-open : " + has(opnk._HAS_XDGOPEN)
output += "\nWeb browsing:\n"
output += " - python-requests : " + has(_DO_HTTP)
output += " - python-requests : " + has(netcache._DO_HTTP)
output += " - python-feedparser : " + has(ansicat._DO_FEED)
output += " - python-bs4 : " + has(ansicat._HAS_SOUP)
output += " - python-readability : " + has(ansicat._HAS_READABILITY)
@ -908,14 +872,14 @@ Marks are temporary until shutdown (not saved to disk)."""
output += " - Render images (python-pil, chafa or timg) : " + has(ansicat._RENDER_IMAGE)
output += " - Render HTML (bs4, readability) : " + has(ansicat._DO_HTML)
output += " - Render Atom/RSS feeds (feedparser) : " + has(ansicat._DO_FEED)
output += " - Connect to http/https (requests) : " + has(_DO_HTTP)
output += " - Connect to http/https (requests) : " + has(netcache._DO_HTTP)
output += " - Detect text encoding (python-chardet) : " + has(netcache._HAS_CHARDET)
output += " - copy to/from clipboard (xsel) : " + has(_HAS_XSEL)
output += " - restore last position (less 572+) : " + has(opnk._LESS_RESTORE_POSITION)
output += "\n"
output += "Config directory : " + _CONFIG_DIR + "\n"
output += "User Data directory : " + _DATA_DIR + "\n"
output += "Cache directory : " + _CACHE_PATH
output += "Config directory : " + xdg("config") + "\n"
output += "User Data directory : " + xdg("data") + "\n"
output += "Cache directory : " + xdg("cache")
print(output)
@ -955,7 +919,11 @@ Use 'ls -l' to see URLs."""
def do_gus(self, line):
"""Submit a search query to the geminispace.info search engine."""
self._go_to_url(urllib.parse.urlunparse("gemini","geminispace.info","/search","",line,""))
if not line:
print("What?")
return
search = line.replace(" ","%20")
self._go_to_url("gemini://geminispace.info/search?%s"%search)
def do_history(self, *args):
"""Display history."""
@ -989,10 +957,11 @@ Use "view normal" to see the default article view on html page.
Use "view full" to see a complete html page instead of the article view.
Use "view feed" to see the linked feed of the page (if any).
Use "view feeds" to see available feeds on this page.
Use "view XX" where XX is a number to view information about link XX.
(full, feed, feeds have no effect on non-html content)."""
if self.current_url and args and args[0] != "":
u, m = unmode_url(self.current_url)
if args[0] in ["full","debug"]:
if args[0] in ["full","debug","source"]:
self._go_to_url(self.current_url,mode=args[0])
elif args[0] in ["normal","readable"]:
self._go_to_url(self.current_url,mode="readable")
@ -1015,8 +984,24 @@ Use "view feeds" to see available feeds on this page.
ans = input(stri)
if ans.isdigit() and 0 < int(ans) <= len(subs):
self.do_go(subs[int(ans)-1][0])
elif args[0].isdigit():
link_url = self.get_renderer().get_link(int(args[0]))
if link_url:
print("Link %s is: %s"%(args[0],link_url))
if netcache.is_cache_valid(link_url):
last_modified = netcache.cache_last_modified(link_url)
link_renderer = self.get_renderer(link_url)
if link_renderer:
link_title = link_renderer.get_page_title()
print(link_title)
else:
print("Empty cached version")
print("Last cached on %s"%time.ctime(last_modified))
else:
print("No cached version for this link")
else:
print("Valid arguments for view are: normal, full, feed, feeds")
print("Valid arguments for view are: normal, full, feed, feeds or a number")
else:
self._go_to_url(self.current_url)
@ -1141,9 +1126,9 @@ If no argument given, URL is added to Bookmarks."""
def get_list(self,list):
list_path = self.list_path(list)
if not list_path:
old_file_gmi = os.path.join(_CONFIG_DIR,list + ".gmi")
old_file_nogmi = os.path.join(_CONFIG_DIR,list)
target = os.path.join(_DATA_DIR,"lists")
old_file_gmi = os.path.join(xdg("config"),list + ".gmi")
old_file_nogmi = os.path.join(xdg("config"),list)
target = os.path.join(xdg("data"),"lists")
if os.path.exists(old_file_gmi):
shutil.move(old_file_gmi,target)
elif os.path.exists(old_file_nogmi):
@ -1236,8 +1221,6 @@ archives, which is a special historical list limited in size. It is similar to `
url = self.current_url
r = self.get_renderer(url)
if r:
mode = r.get_mode()
url = mode_url(url,mode)
title = r.get_page_title()
else:
title = ""
@ -1254,23 +1237,26 @@ archives, which is a special historical list limited in size. It is similar to `
return False
else:
if not url:
url,mode = unmode_url(self.current_url)
url = self.current_url
unmoded_url,mode = unmode_url(url)
# first we check if url already exists in the file
with open(list_path,"r") as l_file:
lines = l_file.readlines()
l_file.close()
for l in lines:
sp = l.split()
if url in sp:
if verbose:
print("%s already in %s."%(url,list))
return False
with open(list_path,"a") as l_file:
l_file.write(self.to_map_line(url))
l_file.close()
if verbose:
print("%s added to %s" %(url,list))
return True
if self.list_has_url(url,list,exact_mode=True):
if verbose:
print("%s already in %s."%(url,list))
return False
# If the URL already exists but without a mode, we update the mode
# FIXME: this doesnt take into account the case where you want to remove the mode
elif url != unmoded_url and self.list_has_url(unmoded_url,list):
self.list_update_url_mode(unmoded_url,list,mode)
if verbose:
print("%s has updated mode in %s to %s"%(url,list,mode))
else:
with open(list_path,"a") as l_file:
l_file.write(self.to_map_line(url))
l_file.close()
if verbose:
print("%s added to %s" %(url,list))
return True
@needs_gi
def list_add_top(self,list,limit=0,truncate_lines=0):
@ -1309,8 +1295,14 @@ archives, which is a special historical list limited in size. It is similar to `
def list_rm_url(self,url,list):
return self.list_has_url(url,list,deletion=True)
def list_update_url_mode(self,url,list,mode):
return self.list_has_url(url,list,update_mode = mode)
# deletion and has_url are so similar, I made them the same method
def list_has_url(self,url,list,deletion=False):
# deletion : true or false if you want to delete the URL
# exact_mode : True if you want to check only for the exact url, not the canonical one
# update_mode : a new mode to update the URL
def list_has_url(self,url,list,deletion=False, exact_mode=False, update_mode = None):
list_path = self.list_path(list)
if list_path:
to_return = False
@ -1319,7 +1311,8 @@ archives, which is a special historical list limited in size. It is similar to `
lf.close()
to_write = []
# lets remove the mode
url=unmode_url(url)[0]
if not exact_mode:
url=unmode_url(url)[0]
for l in lines:
# we separate components of the line
# to ensure we identify a complete URL, not a part of it
@ -1327,15 +1320,27 @@ archives, which is a special historical list limited in size. It is similar to `
if url not in splitted and len(splitted) > 1:
current = unmode_url(splitted[1])[0]
#sometimes, we must remove the ending "/"
if url == current:
to_return = True
elif url.endswith("/") and url[:-1] == current:
if url == current or (url.endswith("/") and url[:-1] == current):
to_return = True
if update_mode:
new_line = l.replace(current,mode_url(url,update_mode))
to_write.append(new_line)
elif not deletion:
to_write.append(l)
else:
to_write.append(l)
else:
elif url in splitted:
to_return = True
if deletion :
# We update the mode if asked by replacing the old url
# by a moded one in the same line
if update_mode:
new_line = l.replace(url,mode_url(url,update_mode))
to_write.append(new_line)
elif not deletion:
to_write.append(l)
else:
to_write.append(l)
if deletion or update_mode:
with open(list_path,"w") as lf:
for l in to_write:
lf.write(l)
@ -1377,7 +1382,7 @@ archives, which is a special historical list limited in size. It is similar to `
#return the path of the list file if list exists.
#return None if the list doesnt exist.
def list_path(self,list):
listdir = os.path.join(_DATA_DIR,"lists")
listdir = os.path.join(xdg("data"),"lists")
list_path = os.path.join(listdir, "%s.gmi"%list)
if os.path.exists(list_path):
return list_path
@ -1389,7 +1394,7 @@ archives, which is a special historical list limited in size. It is similar to `
if list in ["create","edit","delete","help"]:
print("%s is not allowed as a name for a list"%list)
elif not list_path:
listdir = os.path.join(_DATA_DIR,"lists")
listdir = os.path.join(xdg("data"),"lists")
os.makedirs(listdir,exist_ok=True)
list_path = os.path.join(listdir, "%s.gmi"%list)
with open(list_path,"a") as lfile:
@ -1427,7 +1432,7 @@ If current page was not in a list, this command is similar to `add LIST`."""
self.list_add_line(args[0])
def list_lists(self):
listdir = os.path.join(_DATA_DIR,"lists")
listdir = os.path.join(xdg("data"),"lists")
to_return = []
if os.path.exists(listdir):
lists = os.listdir(listdir)
@ -1504,7 +1509,7 @@ The following lists cannot be removed or frozen but can be edited with "list edi
- tour : contains the next URLs to visit during a tour (see "help tour")
"""
listdir = os.path.join(_DATA_DIR,"lists")
listdir = os.path.join(xdg("data"),"lists")
os.makedirs(listdir,exist_ok=True)
if not arg:
lists = self.list_lists()
@ -1609,19 +1614,6 @@ The following lists cannot be removed or frozen but can be edited with "list edi
else:
cmd.Cmd.do_help(self, arg)
### Flight recorder
def do_blackbox(self, *args):
"""Display contents of flight recorder, showing statistics for the
current gemini browsing session."""
lines = []
# Compute flight time
now = time.time()
delta = now - self.log["start_time"]
hours, remainder = divmod(delta, 3600)
minutes, seconds = divmod(remainder, 60)
# Assemble lines
lines.append(("Patrol duration", "%02d:%02d:%02d" % (hours, minutes, seconds)))
def do_sync(self, line):
"""Synchronize all bookmarks lists and URLs from the to_fetch list.
- New elements in pages in subscribed lists will be added to tour
@ -1819,7 +1811,7 @@ def main():
GeminiClient.do_version(None,None)
sys.exit()
else:
for f in [_CONFIG_DIR, _DATA_DIR]:
for f in [xdg("config"), xdg("data")]:
if not os.path.exists(f):
print("Creating config directory {}".format(f))
os.makedirs(f)
@ -1833,7 +1825,7 @@ def main():
# Queue is a list of command (potentially empty)
def read_config(queue,rcfile=None,interactive=True):
if not rcfile:
rcfile = os.path.join(_CONFIG_DIR, "offpunkrc")
rcfile = os.path.join(xdg("config"), "offpunkrc")
if os.path.exists(rcfile):
print("Using config %s" % rcfile)
with open(rcfile, "r") as fp:
@ -1900,7 +1892,6 @@ def main():
gc.onecmd(line)
lists = None
gc.call_sync(refresh_time=refresh_time,depth=depth,lists=args.url)
gc.onecmd("blackbox")
else:
# We are in the normal mode. First process config file
torun_queue = read_config(torun_queue,rcfile=args.config_file,interactive=True)

offthemes.py

@ -37,6 +37,7 @@ offpunk1 = {
"subtitle" : ["blue"],
"subsubtitle" : ["blue","faint"], #fallback to subtitle if none
"link" : ["blue","faint"],
"new_link": ["bold"],
"oneline_link": [], #for gopher/gemini. fallback to link if none
"image_link" : ["yellow","faint"],
"preformatted": ["faint"],

offutils.py

@ -13,54 +13,128 @@ import shutil
import shlex
import urllib.parse
import urllib.parse
import cache_migration
import netcache_migration
import netcache
CACHE_VERSION = 1
## Config directories
## We implement our own python-xdg to avoid conflict with existing libraries.
_home = os.path.expanduser('~')
data_home = os.environ.get('XDG_DATA_HOME') or \
os.path.join(_home,'.local','share')
config_home = os.environ.get('XDG_CONFIG_HOME') or \
os.path.join(_home,'.config')
_CONFIG_DIR = os.path.join(os.path.expanduser(config_home),"offpunk/")
_DATA_DIR = os.path.join(os.path.expanduser(data_home),"offpunk/")
_old_config = os.path.expanduser("~/.offpunk/")
## Look for pre-existing config directory, if any
if os.path.exists(_old_config):
_CONFIG_DIR = _old_config
#if no XDG .local/share and not XDG .config, we use the old config
if not os.path.exists(data_home) and os.path.exists(_old_config):
_DATA_DIR = _CONFIG_DIR
cache_home = os.environ.get('XDG_CACHE_HOME') or\
os.path.join(_home,'.cache')
_CACHE_PATH = os.path.join(os.path.expanduser(cache_home),"offpunk/")
os.makedirs(_CACHE_PATH,exist_ok=True)
# We upgrade the cache only once at startup, hence the UPGRADED variable
# This is only to avoid unnecessary checks each time the cache is accessed
UPGRADED=False
def upgrade_cache(cache_folder):
#Lets read current version of the cache
version_path = cache_folder + ".version"
current_version = 0
if os.path.exists(version_path):
current_str = None
with open(version_path) as f:
current_str = f.read()
f.close()
try:
current_version = int(current_str)
except:
current_version = 0
#Now, lets upgrade the cache if needed
while current_version < CACHE_VERSION:
current_version += 1
upgrade_func = getattr(netcache_migration,"upgrade_to_"+str(current_version))
upgrade_func(cache_folder)
with open(version_path,"w") as f:
f.write(str(current_version))
f.close()
UPGRADED=True
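`upgrade_cache` dispatches on function names: each bump of `CACHE_VERSION` gets a matching `upgrade_to_<N>()` in `netcache_migration`, applied in order until the stored version catches up. The same pattern, sketched self-contained with a dict standing in for the module (the migration body is a placeholder):

```python
CACHE_VERSION = 1

# Placeholder migrations keyed by target version, standing in for the
# netcache_migration.upgrade_to_<N> functions used above.
MIGRATIONS = {
    1: lambda cache_folder: None,  # e.g. reorganise files on disk
}

def upgrade(current_version, cache_folder):
    # Apply every migration between the stored version and CACHE_VERSION.
    while current_version < CACHE_VERSION:
        current_version += 1
        MIGRATIONS[current_version](cache_folder)
    return current_version
```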
#Lets read current version of the cache
version_path = _CACHE_PATH + ".version"
current_version = 0
if os.path.exists(version_path):
current_str = None
with open(version_path) as f:
current_str = f.read()
f.close()
#get xdg folder. Folder should be "cache", "data" or "config"
def xdg(folder="cache"):
## Config directories
## We implement our own python-xdg to avoid conflict with existing libraries.
_home = os.path.expanduser('~')
data_home = os.environ.get('XDG_DATA_HOME') or \
os.path.join(_home,'.local','share')
config_home = os.environ.get('XDG_CONFIG_HOME') or \
os.path.join(_home,'.config')
_CONFIG_DIR = os.path.join(os.path.expanduser(config_home),"offpunk/")
_DATA_DIR = os.path.join(os.path.expanduser(data_home),"offpunk/")
_old_config = os.path.expanduser("~/.offpunk/")
## Look for pre-existing config directory, if any
if os.path.exists(_old_config):
_CONFIG_DIR = _old_config
#if no XDG .local/share and not XDG .config, we use the old config
if not os.path.exists(data_home) and os.path.exists(_old_config):
_DATA_DIR = _CONFIG_DIR
## get _CACHE_PATH from OFFPUNK_CACHE_PATH environment variable
# if OFFPUNK_CACHE_PATH empty, set default to ~/.cache/offpunk
cache_home = os.environ.get('XDG_CACHE_HOME') or\
os.path.join(_home,'.cache')
_CACHE_PATH = os.environ.get('OFFPUNK_CACHE_PATH', \
os.path.join(os.path.expanduser(cache_home),"offpunk/"))
#Check that the cache path ends with "/"
if not _CACHE_PATH.endswith("/"):
_CACHE_PATH += "/"
os.makedirs(_CACHE_PATH,exist_ok=True)
if folder == "cache" and not UPGRADED:
upgrade_cache(_CACHE_PATH)
if folder == "cache":
return _CACHE_PATH
elif folder == "config":
return _CONFIG_DIR
elif folder == "data":
return _DATA_DIR
else:
print("No XDG folder for %s. Check your code."%folder)
return None
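The cache-path resolution above is the change announced in the 2.2 changelog: `$OFFPUNK_CACHE_PATH` wins over the XDG default, and a trailing `/` is enforced. Isolated as a sketch (`cache_path` and the injected `environ` dict are illustrative, not offpunk names; paths assume POSIX):

```python
import os.path

def cache_path(environ, home="/home/user"):
    # $OFFPUNK_CACHE_PATH overrides $XDG_CACHE_HOME/offpunk/ ...
    xdg_cache = environ.get("XDG_CACHE_HOME") or os.path.join(home, ".cache")
    path = environ.get("OFFPUNK_CACHE_PATH",
                       os.path.join(xdg_cache, "offpunk/"))
    # ... and the result is normalised to end with "/".
    if not path.endswith("/"):
        path += "/"
    return path
```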
#An IPV6 URL should be put between []
#We try to detect them as locations with more than 2 ":"
def fix_ipv6_url(url):
if not url or url.startswith("mailto"):
return url
if "://" in url:
schema, schemaless = url.split("://",maxsplit=1)
else:
schema, schemaless = None, url
if "/" in schemaless:
netloc, rest = schemaless.split("/",1)
if netloc.count(":") > 2 and "[" not in netloc and "]" not in netloc:
schemaless = "[" + netloc + "]" + "/" + rest
elif schemaless.count(":") > 2 and "[" not in schemaless and "]" not in schemaless:
schemaless = "[" + schemaless + "]/"
if schema:
return schema + "://" + schemaless
return schemaless
# Cheap and cheerful URL detector
def looks_like_url(word):
try:
current_version = int(current_str)
except:
current_version = 0
#Now, lets upgrade the cache if needed
while current_version < CACHE_VERSION:
current_version += 1
upgrade_func = getattr(cache_migration,"upgrade_to_"+str(current_version))
upgrade_func(_CACHE_PATH)
with open(version_path,"w") as f:
f.write(str(current_version))
f.close()
if not word.strip():
return False
url = fix_ipv6_url(word).strip()
parsed = urllib.parse.urlparse(url)
#sometimes, urllib crashes only when requesting the port
port = parsed.port
scheme = word.split("://")[0]
mailto = word.startswith("mailto:")
start = scheme in netcache.standard_ports
local = scheme in ["file","list"]
if mailto:
return "@" in word
elif not local:
if start:
#IPv4
if "." in word or "localhost" in word:
return True
#IPv6
elif "[" in word and ":" in word and "]" in word:
return True
else: return False
else: return False
return start and ("." in word or "localhost" in word or ":" in word)
else:
return "/" in word
except ValueError:
return False
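For illustration, here is `fix_ipv6_url` again as a standalone copy of the function above, showing the bracketing behaviour it implements for the IPv6 fixes in this release:

```python
#An IPV6 URL should be put between []
def fix_ipv6_url(url):
    if not url or url.startswith("mailto"):
        return url
    if "://" in url:
        schema, schemaless = url.split("://", maxsplit=1)
    else:
        schema, schemaless = None, url
    if "/" in schemaless:
        netloc, rest = schemaless.split("/", 1)
        # more than 2 ":" and no brackets yet: treat netloc as a raw IPv6
        if netloc.count(":") > 2 and "[" not in netloc and "]" not in netloc:
            schemaless = "[" + netloc + "]" + "/" + rest
    elif schemaless.count(":") > 2 and "[" not in schemaless and "]" not in schemaless:
        schemaless = "[" + schemaless + "]/"
    if schema:
        return schema + "://" + schemaless
    return schemaless

print(fix_ipv6_url("gemini://2001:db8::1/page"))    # brackets get added
print(fix_ipv6_url("gemini://[2001:db8::1]/page"))  # already bracketed: unchanged
```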
## Those two functions add/remove the mode to the
# URLs. This is a gross hack to remember the mode

opnk.py

@ -48,16 +48,16 @@ else:
# -S : do not wrap long lines. Wrapping is done by offpunk; long lines
# are there on purpose (such as in ascii art)
#--incsearch : incremental search starting rev581
if less_version >= 581:
less_base = "less --incsearch --save-marks -~ -XRfMWiS"
elif less_version >= 572:
less_base = "less --save-marks -XRfMWiS"
else:
less_base = "less -XRfMWiS"
_DEFAULT_LESS = less_base + " \"+''\" %s"
_DEFAULT_CAT = less_base + " -EF %s"
def less_cmd(file, histfile=None,cat=False,grep=None):
less_prompt = "page %%d/%%D- lines %%lb/%%L - %%Pb\\%%"
if less_version >= 581:
less_base = "less --incsearch --save-marks -~ -XRfWiS -P \"%s\""%less_prompt
elif less_version >= 572:
less_base = "less --save-marks -XRfMWiS"
else:
less_base = "less -XRfMWiS"
_DEFAULT_LESS = less_base + " \"+''\" %s"
_DEFAULT_CAT = less_base + " -EF %s"
if histfile:
env = {"LESSHISTFILE": histfile}
else:
@ -159,11 +159,14 @@ class opencache():
if inpath in self.renderer_time.keys():
last_downloaded = netcache.cache_last_modified(inpath)
last_cached = self.renderer_time[inpath]
usecache = last_cached > last_downloaded
if last_cached and last_downloaded:
usecache = last_cached > last_downloaded
else:
usecache = False
else:
usecache = False
if not usecache:
renderer = ansicat.renderer_from_file(path,inpath,theme=theme)
renderer = ansicat.renderer_from_file(path,url=inpath,theme=theme)
if renderer:
self.rendererdic[inpath] = renderer
self.renderer_time[inpath] = int(time.time())
@ -180,24 +183,25 @@ class opencache():
def opnk(self,inpath,mode=None,terminal=True,grep=None,theme=None,**kwargs):
#Return True if inpath opened in Terminal
# False otherwise
# also returns the url in case it has been modified
#if terminal = False, we dont try to open in the terminal,
#we immediately fallback to xdg-open.
#netcache currently provides the path if it's a file.
#maybe this should be migrated here.
if not offutils.is_local(inpath):
kwargs["images_mode"] = mode
cachepath = netcache.fetch(inpath,**kwargs)
cachepath,inpath = netcache.fetch(inpath,**kwargs)
if not cachepath:
return False
return False, inpath
# folowing line is for :// which are locals (file,list)
elif "://" in inpath:
cachepath = netcache.fetch(inpath,**kwargs)
cachepath,inpath = netcache.fetch(inpath,**kwargs)
elif inpath.startswith("mailto:"):
cachepath = inpath
elif os.path.exists(inpath):
cachepath = inpath
else:
print("%s does not exist"%inpath)
return
return False, inpath
renderer = self.get_renderer(inpath,mode=mode,theme=theme)
if renderer and mode:
renderer.set_mode(mode)
@ -212,7 +216,7 @@ class opencache():
#dont use less, we call it directly
if renderer.has_direct_display():
renderer.display(mode=mode,directdisplay=True)
return True
return True, inpath
else:
body = renderer.display(mode=mode)
#Should we use the cache? only if it is not local and theres a cache
@@ -239,7 +243,7 @@ class opencache():
                 #We dont want to restore positions in lists
                 firsttime = is_local(inpath)
                 less_cmd(self.temp_files[key], histfile=self.less_histfile[key],cat=firsttime,grep=grep)
-                return True
+                return True, inpath
         #maybe, we have no renderer. Or we want to skip it.
         else:
             mimetype = ansicat.get_mime(cachepath)
@ -252,7 +256,7 @@ class opencache():
else:
print("Cannot find a mail client to send mail to %s" %inpath)
print("Please install xdg-open (usually from xdg-util package)")
return
return False, inpath
else:
cmd_str = self._get_handler_cmd(mimetype)
try:
@@ -260,7 +264,7 @@ class opencache():
                 except FileNotFoundError:
                     print("Handler program %s not found!" % shlex.split(cmd_str)[0])
                     print("You can use the ! command to specify another handler program or pipeline.")
-                    return False
+                    return False, inpath
     #We remove the renderers from the cache and we also delete temp files
     def cleanup(self):
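The recurring `return True` / `return False` → `return True, inpath` / `return False, inpath` changes in the hunks above give `opnk()` a uniform `(displayed, url)` return value on every exit path, so callers can recover the final URL even when netcache modified it while fetching. A hypothetical caller-side sketch (toy names, not the actual offpunk API):

```python
def open_resource(inpath):
    """Toy stand-in for opencache.opnk(): always returns (displayed, url)."""
    if inpath.startswith("broken://"):
        return False, inpath              # fetch failed, URL unchanged
    # Pretend the fetch normalized the URL (e.g. followed a redirect).
    final_url = inpath.replace("http://", "https://")
    return True, final_url

# Because every exit path returns a pair, call sites can unpack unconditionally.
displayed, url = open_resource("http://example.org/page")
```

The design point is consistency: a function that sometimes returns a bare `bool`, sometimes a tuple, and sometimes `None` (as the old code did) forces every caller to special-case the result.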
@@ -274,13 +278,23 @@ class opencache():
         self.last_mode = {}
 
 def main():
-    parser = argparse.ArgumentParser(description=__doc__)
+    descri = "opnk is an universal open command tool that will try to display any file \
+in the pager less after rendering its content with ansicat. If that fails, \
+opnk will fallback to opening the file with xdg-open. If given an URL as input \
+instead of a path, opnk will rely on netcache to get the networked content."
+    parser = argparse.ArgumentParser(prog="opnk",description=descri)
+    parser.add_argument("--mode", metavar="MODE",
+                        help="Which mode should be used to render: normal (default), full or source.\
+With HTML, the normal mode try to extract the article.")
     parser.add_argument("content",metavar="INPUT", nargs="*",
                         default=sys.stdin, help="Path to the file or URL to open")
+    parser.add_argument("--cache-validity",type=int, default=0,
+                        help="maximum age, in second, of the cached version before \
+redownloading a new version")
     args = parser.parse_args()
     cache = opencache()
     for f in args.content:
-        cache.opnk(f)
+        cache.opnk(f,mode=args.mode,validity=args.cache_validity)
 
 if __name__ == "__main__":
     main()
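The new `main()` adds two command-line options. The following self-contained sketch reproduces just the option wiring visible in the diff and parses a sample command line (instead of `sys.argv`) to show the resulting namespace; it does not import opnk itself, and the sample URL is only illustrative:

```python
import argparse

# Rebuild the opnk argument parser as shown in the diff (help texts shortened).
parser = argparse.ArgumentParser(prog="opnk")
parser.add_argument("--mode", metavar="MODE",
                    help="rendering mode: normal (default), full or source")
parser.add_argument("content", metavar="INPUT", nargs="*",
                    help="path to the file or URL to open")
parser.add_argument("--cache-validity", type=int, default=0,
                    help="maximum age, in seconds, of the cached version "
                         "before redownloading")

# Parse a sample invocation; argparse maps --cache-validity to args.cache_validity.
args = parser.parse_args(["--mode", "source",
                          "--cache-validity", "3600",
                          "gemini://ploum.net"])
```

Note that `type=int` makes argparse reject non-numeric validity values at parse time, and `default=0` preserves the old behaviour (no validity limit) when the flag is omitted.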
diff --git a/pyproject.toml b/pyproject.toml
@@ -1,6 +1,6 @@
 [build-system]
-requires = ["flit_core >=3.2,<4"]
-build-backend = "flit_core.buildapi"
+requires = ["hatchling"]
+build-backend = "hatchling.build"
 
 [project]
 name = "offpunk"
@@ -35,8 +35,6 @@ html = ["bs4", "readability-lxml"]
 http = ["requests"]
 process-title = ["setproctitle"]
 rss = ["feedparser"]
-timg = ["timg>=1.3.2"]
-file = ["file"]
 
 [project.urls]
 Homepage = "https://sr.ht/~lioploum/offpunk/"
@@ -49,5 +47,12 @@ netcache = "netcache:main"
 ansicat = "ansicat:main"
 opnk = "opnk:main"
 
-[tool.flit.sdist]
-include = ["doc/", "man/", "CHANGELOG"]
+[tool.hatch.version]
+path = "offpunk.py" # read __version__
+
+[tool.hatch.build.targets.wheel]
+only-include = [
+    "ansicat.py", "netcache_migration.py", "netcache.py",
+    "offblocklist.py", "offpunk.py", "offthemes.py",
+    "offutils.py", "opnk.py",
+]