Some of my friends had mentioned that adding RSS to their static site generators was hard; how about turning your index page into the RSS feed itself, and letting browsers generate the HTML for you?
This feed indeed has some interesting things, related to SCP, sci-fi (especially time traveling), or programming. I bookmarked the Perl introduction, if I ever want to learn Perl and scare my fellow Python developers at work.
Some of my favorite things about Pale Moon include turning it into Netscape, sync support, and built-in RSS preview and subscription support via Live Bookmarks. However, I have an issue with the RSS preview: it lets you subscribe not only via Live Bookmarks but also with other desktop applications you might have installed, such as Thunderbird, or with Yahoo. My issue is that I wanted to add a button to quickly subscribe on envs.net's TinyTinyRSS instance, and after various attempts I could not add it through the user interface.
Here comes the trusty about:config to the rescue! Looking up yahoo in the configuration values pointed me to two keys in the configuration holding https://add.my.yahoo.com/rss?url=%s, which I changed to https://rss.envs.net/public.php?op=subscribe&feed_url=%s. I found this URL by looking at the bookmarklets configuration in TinyTinyRSS and reading the short JS code that redirects you to TinyTinyRSS.

I initially tried to add a button next to the My Yahoo! one by creating two new keys, .types.1.title and .types.1.uri, but that failed. I have not yet looked into the Pale Moon source code to see why this could have failed.
With this change, instead of Yahoo, I can quickly subscribe to anyone's RSS feeds faster than ever. This will definitely not help my backlog of 2600+ articles…
<xsl:value-of select="description" disable-output-escaping="yes" />

disable-output-escaping is optional according to the W3C specification since version 1.0. libxslt, Chromium and Internet Explorer do support it, but Firefox chose not to, and the Bugzilla ticket about it will celebrate its 20th birthday this year. They themselves say that this causes issues for RSS support, so I chose to just not care about it. If you are a Firefox user and are seeing raw, unparsed HTML tags here, I can only suggest using another browser, or just subscribing to this feed and reading it in its home, an RSS aggregator.
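For context, here is a minimal sketch of the kind of XSLT that renders a feed as a page; the structure and element paths are illustrative, not RSRSSS's actual stylesheet:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>
  <!-- Render the channel as a simple HTML page -->
  <xsl:template match="/rss/channel">
    <html>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <xsl:for-each select="item">
          <h2><xsl:value-of select="title"/></h2>
          <!-- The description contains HTML; without disable-output-escaping,
               browsers show the raw, escaped tags instead -->
          <xsl:value-of select="description" disable-output-escaping="yes"/>
        </xsl:for-each>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>
```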
https://www.bbc.com/weather/2644080
Take this integer suffix, which is the ID of the location, and put it in one of these two URLs to get some RSS feeds:
https://weather-broker-cdn.api.bbci.co.uk/en/forecast/rss/3day/2644080
https://weather-broker-cdn.api.bbci.co.uk/en/observation/rss/2644080
This procedure is documented exactly like so on the BBC help pages!
| Feed type | URL parameters | URL rewriting |
|---|---|---|
| All posts | /?feed=rss2 | /feed/rss2/ |
| All comments | /?feed=comments-rss2 | /comments/feed/rss2/ |
| Comments on a post | /?p=42&feed=rss2 | /[post name]/feed/rss2/ |
| In categories | /?cat=1,2,3&feed=rss2 | /category/cat1,cat2,cat3/feed/rss2/ |
| In tags | /?tag=tag1,tag2,tag3&feed=rss2 | /tag/tag1,tag2,tag3/feed/rss2/ |
| In all categories | /?cat=1+2+3&feed=rss2 | /category/cat1+cat2+cat3/feed/rss2/ |
| In all tags | /?tag=tag1+tag2+tag3&feed=rss2 | /tag/tag1+tag2+tag3/feed/rss2/ |
| By author | Undocumented | /author/[name]/feed/rss2/ |
| Search results | /?s=[query]&feed=rss2 | — |
Replace rss2 with atom for an Atom feed, and with rdf for an RSS 1.0 feed.
I added a distinction between RSS 1.0 and RSS 2.0 in ITSB and used it to provide more official feeds, from the Antigua and Barbuda Department of Marine Services and Merchant Shipping Inspection and Investigation Division and from the Mongolian Air Accidents Investigation Bureau.
Most people just make blogs on there, called gemlogs. Those gemlogs frequently practice a habit that has been disappearing from blogs faster than the blogs themselves disappeared in favor of social media: posts that reply to other people's posts.
I like email as a discussion medium because it works like letters, just with faster delivery and cheaper postage; there are no typing notifications and no expectations of a very fast reply like on instant messaging platforms, so you have less anxiety and more time to write out your thoughts. The UI of most email clients encourages you to write more, to not just send one line; the text length limit probably exists due to technical limitations, but you wouldn't be able to reach it without writing book after book in a single email. Twitter is probably the worst place to debate, since having much less space to explain yourself means your thoughts immediately get misinterpreted.
Replying to other people's posts on your own blog or gemlog is basically like e-mail, but the discussion can be read by a lot more people. You get all the benefits of long-form writing and asynchronous communication, combined with sharing with, or receiving knowledge from, your readers and other people's readers. However, unlike with email, the person who posted the text you replied to might be completely unaware of your reply, and might never read it. Some standards exist to help with this, such as Webmentions, or the trackback namespace for RSS.
I will let you click the link to read more about trackback, because today I am posting to show you an alternative, if you want to use something that approximately nothing supports: mod_annotation, a proposed RSS 1.0 module. Like most RSS 1.0 modules, it never reached the status of a standard module and disappeared from the Internet, so the only way to find these modules now is the Wayback Machine. I love the Wayback Machine.
To use this module, first add a new XML namespace to your feed: xmlns:annotate="http://purl.org/rss/1.0/modules/annotate/". Then, in the <item> tag, add the following tag to reference something else:
<annotate:reference rdf:resource="https://envs.net/~lucidiot/rsrsss/"/>
This module was only proposed for RSS 1.0, but most feed readers barely make any distinction between RSS 1.0 and 2.0, so if a feed reader ever supported this module, you could probably use it safely in RSS 2.0 too.
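Put together, a trimmed-down RSS 1.0 item using the module could look like this sketch; the item URL and title are made up, and the channel and rdf:Seq that a full RSS 1.0 document requires are omitted for brevity:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/"
         xmlns:annotate="http://purl.org/rss/1.0/modules/annotate/">
  <!-- <channel> and its rdf:Seq of items omitted for brevity -->
  <item rdf:about="https://example.org/blog/my-reply">
    <title>Re: someone else's post</title>
    <link>https://example.org/blog/my-reply</link>
    <!-- The resource this post is commenting on -->
    <annotate:reference rdf:resource="https://envs.net/~lucidiot/rsrsss/"/>
  </item>
</rdf:RDF>
```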
Turns out RFC 4685 defines an XML namespace one can use to define replies. It is rather similar to yesterday's mod_annotation.
To use this namespace, you will need to first add it to your feed: xmlns:thr="http://purl.org/syndication/thread/1.0". You then have access to two new elements and two new attributes, and the spec also defines a new rel value:

- <thr:in-reply-to>, to indicate what you are replying to, using the ID indicated in its <id> tag;
- <link rel="replies">, to point to a page where some, or all, known replies to a post are listed;
- thr:updated, to add on the above link the last date when that page was updated;
- thr:count, to add on the above link the number of known replies listed in the linked page;
- <thr:total>, to indicate the total number of known replies, as the linked replies pages might only contain a portion of them.

None of those are required. You can repeat <link rel="replies" /> as many times as you might need, if you have multiple pages. The metadata given by the <thr:total> element and the thr:count and thr:updated attributes is non-authoritative, which means it does not have to be exact.

If you are using <thr:in-reply-to>, it is recommended to also include the post's link in a <link rel="related"> to allow a graceful fallback for feed readers that might not support the threading extensions.
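As a rough sketch, an Atom entry combining all of those could look like the following; every ID, URL and number here is made up for the example:

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:thr="http://purl.org/syndication/thread/1.0">
  <id>tag:example.org,2021:my-reply</id>
  <title>Re: an interesting post</title>
  <updated>2021-06-01T12:00:00Z</updated>
  <!-- What this entry replies to, by ID, with a graceful fallback link -->
  <thr:in-reply-to ref="tag:example.net,2021:original-post"
                   href="https://example.net/original-post"/>
  <link rel="related" href="https://example.net/original-post"/>
  <!-- A page listing known replies, with non-authoritative metadata -->
  <link rel="replies" href="https://example.org/my-reply/comments"
        thr:count="3" thr:updated="2021-06-02T08:00:00Z"/>
  <thr:total>3</thr:total>
</entry>
```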
The RFC includes a bunch of examples that should be enough to get you started should you ever want to try using this namespace.
Before I start writing posts on threading for every single syndication format, here is some info on two formats I have experimented with in ITSB:
JSON Feed does not support threading in its spec, but you could just make your own extension for that. It does not use JSON-LD either, which would have allowed for an extension system similar to XML namespaces; but after experiencing the complexity of JSON-LD first-hand at my day job and facing the numerous interoperability issues it causes, I can definitely understand why they wouldn't want to.
Channel Definition Format allows for nested channels, so you could at least create a structured representation of a thread as a tree, if you, the original author of the post, knew about all the replies. You cannot, however, specify that you are replying to something yourself. The format does support XML namespace extensions, so you could use thr or mod_annotation.
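From my reading of the CDF specification, a thread represented as nested channels might look something like this sketch; all URLs are invented, and I make no claim that any aggregator ever rendered it this way:

```xml
<?XML version="1.0"?>
<CHANNEL HREF="https://example.org/posts/original">
  <TITLE>The original post</TITLE>
  <!-- Each known reply becomes a nested channel, so the thread forms a tree -->
  <CHANNEL HREF="https://example.net/replies/first-reply">
    <TITLE>A reply</TITLE>
    <CHANNEL HREF="https://example.com/replies/reply-to-the-reply">
      <TITLE>A reply to the reply</TITLE>
    </CHANNEL>
  </CHANNEL>
</CHANNEL>
```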
I recently translated some Japanese specifications I found on the Wayback Machine for two obsolete syndication formats. I first had to determine which encoding the specifications were using, because Google Translate was really unhappy with that; I had to convert from Shift-JIS to UTF-16 then to UTF-8, and from EUC-JP to UTF-8. I am using Google Translate because I know absolutely nothing about Japanese; I just take the messy "English" translation and turn it into comprehensible English.
I first translated HINA, a format that relies on RFC 822 message headers and was designed for Asahina-Antenna. It appears that in Japan, feed readers were called "antennas". This format is apparently still served by some websites according to a quick online search; I will look into that later, just as I will look into those antennas.
Today, I translated LIRS, a format that uses a gzipped CSV-like file to report the same information.
These two formats do not have item descriptions or optional URLs; they are only meant to report changes on external content. They already take into account the notion of feed aggregation. HINA even has image-related data for photo galleries.
It is pretty hard to trace those formats, first because of the rather obvious language barrier I am facing, and second because the Wayback Machine did not always catch everything, so there are many dead links. Of course, everything is completely dead today. I am however going to keep looking into those formats, and they will soon be implemented in ITSB just for the sake of keeping them alive.
Friday postcards are a concept created by ~jumblesale on tilde.town, in which you share a URL to an image along with "#fridaypostcard" (and optionally a comment) on IRC, and a bot picks it up and builds an HTML page every Friday.
An archive gets generated each week too, but there was no easy way to get the postcards into my RSS reader, and I had found multiple issues in the way URLs were handled, causing some Imgur URLs not to work, among other things. At first I copy-pasted the original script, but then rewrote it to handle those errors and get every single postcard ever made into one W3C-valid RSS feed.
You can browse the script that generates this feed on tildegit.
A few months ago, ~netscape_navigator showed me his "recent reading" list, for which I requested an RSS feed. He uses it in an interesting process to feed on the news while driving, using text-to-speech, and just decided to publish his curated news feed. I now generally see this feed as my "wholesome news" feed, because most articles on there are about interesting scientific discoveries, hacking projects, tech history podcasts and articles, etc. There is still some bad news, but it is much less related to current politics or other issues of the tech industry like e-waste, America-centrism or racism.
You could probably argue this is kind of a circle jerk, since I am only reading news from my friends, who are more likely to share the same opinions as me; but this feed does not really have that many articles, and I am already well aware of the most important issues in tech, since I still see them discussed on IRC, tilde.news, Misskey, or at the workplace. They are discussed enough that I just do not want them polluting my RSS reader as well, a place I go to with the expectation of either relaxing or learning things. Having this feed here helps me get more interesting articles from lesser-known English-speaking news websites that I simply never heard of in France, such as Scientific American, or discover new blogs.
You can also view the articles in a browser, but why would you do that when you have an RSS reader?
Updates are pretty rare, but it is still an interesting feed to have; on the rare occasion that a new article lands there, you know you're in for a great read.
If you register for an account, you can save your searches and then either create email alerts about any new publication in the search results, or get an RSS feed of it. I use that to follow various terms like bookmark, postcard, calendar and USB: I know some people who collect bookmarks, the free postcards they make give me nice illustrations for my notebooks, the calendars usually are large A0 posters so I can fill my wall with them, and they used to offer three publications in the form of USB drives, so I stay on the lookout for that. I should probably also add notebook to the lot, because I have a drawer full of free notebooks. My very first Bullet Journal was started on one of those books.
The website is supposed to only allow you to order one free copy per email address (or per account if you registered, since you can also order without registering), and you will need to confirm your email address if you order as a guest. Since some mail providers like Gmail let you get away with putting dots or dashes in your address and will redirect to your actual email, you can actually get much more from a single address; I was using only the dots and counted in binary to get all the possible unique combinations of dots while ordering a hundred USB keys or nearly a hundred notebooks. I got them all, in a hundred separate envelopes. That was a lot of fun :D
This good friend has built his own static site generator, then built a few more, and we sometimes joke that all he does is build site generators instead of writing actual blog content. But his feed (and thus his blog) sometimes fills up with interesting articles anyway. In particular, you can find some great examples of well-written documentation if you want inspiration to make this often overlooked part of software development a little nicer in your own projects.
Escargot is a project to revive all of those clients and extra tools and bring them back into 2021. It is already possible to talk between MSN and Yahoo Messenger, and there are plans to maybe, in the long term, support Matrix, XMPP, IRC, or AIM (which already has a server from another project called NINA), to really bring together all of those messaging services.
As I have been occasionally using a Windows XP laptop as my daily driver for a few days at a time, I have kept an MSN Messenger 7.5 instance running. Just one friend got in touch with me through it, but I just like seeing it online in my notification area anyway. I have also installed Mercury Messenger on my phone so I can really stay online on MSN all the damn time. If you want to reach me there, and somehow manage to get an Escargot account and a compatible client installed, you can find my Escargot ID on my contact page.
I just discovered today that Escargot has an RSS feed for its recent news, Escargot Today. And it does not just include the last 5 or 10 posts like most blogs do, this feed just has every single news entry since 2017, which is neat. There are not that many updates since most of the project's true activity is on their GitLab repo, but if you plan on playing with this client, this feed will make sure you don't miss out on any breaking changes they might make. You can also access that page on newer versions of MSN since they changed the MSN Today URL to point at their site.
Back when I was using Pale Moon, I could not find out how to add a new option without removing the existing ones, so I just overwrote Yahoo with TinyTinyRSS. But this time, I got it to work with an extra setting! Here is the configuration in about:config that I managed to use for SeaMonkey:
- TinyTinyRSS
- application/vnd.mozilla.maybe.feed
- https://rss.envs.net/public.php?op=subscribe&feed_url=%s

These are, respectively, the title shown in the interface, the content type being handled, and the handler URL.
/feed. As Substack really scares me due to this e-mail part, I have never tried paying for one of those newsletters, so I do not know if that RSS feed would also be available to paid subscribers, maybe with a token. That would make it a very rare kind of paid RSS feed, something I know exists because I have seen it in a specification for the PlayStation Portable, but that I have never seen in the wild before.
This particular feed is a free newsletter about space exploration from an Indian writer. He initially had two newsletters, Space Impact and Moon Monday, but they got merged into one. I initially discovered this blog through Moon Monday, a weekly report of everything that is happening related to our exploration of the Moon. The goal of this weekly report is to show that exploring the Moon is still on the table and that we still have a lot to learn about it. Things happen quickly enough that posting once a week is indeed necessary.
The reporting is generally pretty comprehensive, despite a noticeable bias against ISRO; the author regularly criticizes his own country's space program as it is often opaque or makes bad decisions. I would like to see more of this critical thinking applied to all the other reported events (which are usually only shown as facts, without much commentary), as there are a lot of issues with Artemis, the American lunar base program, and with ILRS, the Russian and Chinese project.
NASA going fully commercial on the base, all the way to calling for proposals on spacesuits, on the vehicles that transport astronauts from their training building to the launch pad, on rovers, communication satellites, etc., and potentially allowing companies to mine the Moon, means NASA is bringing capitalism to space along with all its issues. Roscosmos' space budget is being slashed by Putin because they did not achieve their set objectives in time; obviously, less budget will mean they can do more next year. And China's space program has had a lot of issues: rocket parts sometimes fall onto inhabited areas (a huge no for absolutely everyone else), and most of what we know about the program comes from leaks.
But I would never have learned about all of these issues without this blog as a starting point, teaching me about the current state of the space industry and scientific community, which have been completely transformed in the last few years. So if you are interested in learning more about space and what we are planning for it, I cannot recommend this feed enough.
With those screenshots, I found some software a few months ago called PlantStudio, and it is impressive. There are some screenshots I have not really exploited yet, mostly of Japanese software. I really am fond of exploring the Japanese web, be it through some of their attempts at creating internet standards like HINA or through the software they created.
Speaking of, it makes me a little sad that Japanese websites are slowly switching to the more "modern" designs we see now, like flat design, instead of keeping the condensed, efficient looks they had before. I used to browse Pixiv regularly, and while its new design is a bit more useful to English speakers, I had gotten used to knowing from memory what each link was in Japanese in the old design, and the new one made me lose a ton of features. I wish we could all just go back in time and destroy JavaScript to prevent all of this.
The feed is currently broken due to improper XML quoting, and the repo is being moved from another Gitea instance that has had serious technical issues for a while, so it is a bit messy. I opened an issue to get it resolved.
This feed is from a newsletter that also offers an RSS feed and fax delivery. Its author mostly focuses on the history of technology across various subjects, and has taught me as many things as the previously mentioned 365 RFCs project.
I wasn't really expecting to be able to keep up posting some things to this feed for a whole year, especially considering that I have multiple other websites to take care of and that this year has been hectic.
I still have a pretty long list of things I would like to post about, more interesting posts than just throwing a feed around at random. Let's admit it: when I post a feed, it's just to keep posting regularly when I don't have the time.
I have been posting about some of my programming projects over on my French blog, and I have been thinking about posts on RSRSSS or on feeds in general later this year. This might give me some fuel to post more on this meta-feed. For now though, I'm posting about parsing geospatial data, and I'll follow that with a dozen articles on some reverse engineering I've been doing. There are so many posts I want to write everywhere and I have so little time and energy…
I have no idea who is even reading this feed since I don't have any stats and I don't want any, but if you've been reading this for the whole year, well thank you very much. Let's hope RSRSSS stays up for another year!
There are many tools to generate RSS feeds from HTML pages; some of them might be just point-and-click and work for relatively simple pages. Some work by looking for semantic HTML tags like <article>, and some require you to write CSS or XPath selectors or a bit of code. But my favorite kind of tool is a program or website dedicated to serving RSS feeds for one particular website, where feeds are regularly asked for but the devs refuse to provide them. I guess this somehow falls under the category of adversarial interoperability. Providing an RSS feed for a website against its publisher's will is one of many ways to keep it from being a completely closed environment, and to force it to fit the philosophy of the web, which is to share.
I have made a few of those feed generators over time, and I even published one of their feeds on here. But I had yet to see someone in my Internet circle do something similar. @codl, a cool friend, made Feedplz, a service that provides RSS and Atom feeds for FurAffinity and SSP-Comics. If you are interested in those websites, definitely check this service out and give codl some love.
It's always great to see someone other than me show some interest in feeds, especially to the point of creating new feeds. This service might not have the most well written Python code, or might break easily should any of those websites choose to change something, but it has the merit of existing and of being a reminder that feeds do exist and that some people want them. Just that alone gives me warm fuzzies.
While the Atom feed feels a little crude to me, a constant abuser of XML namespaces inside of feeds, I like the idea of what is basically a static site generator whose content comes from feeds. Static sites always feel much more manageable to me, be it as a developer, as a server administrator, or as an archivist. I do have an archivist side, with how much I've been using the Internet Archive in all my projects.
DeviantArt is not aimed at developers but deviants (I guess they're both devs?) but still provides some RSS feeds. With their recent redesign, they have been doing away with most of their comfy and featureful interface to replace it with some laggy experience that's inconsistent with its own mobile apps. Every switch back to a page that still uses the old UI is a breath of fresh air.
One of those pages is the RSS feeds documentation, which also makes me a little worried that they might do away with RSS feeds at some point. They still serve RSS feeds anyway, allowing you to search for deviations or journal entries. The feeds use Media RSS, which could make them usable on a PlayStation Portable if you know how to work around the SSL issues. There is one base URL for all of the feeds:
https://backend.deviantart.com/rss.xml
The query parameters for the feeds are pretty poorly documented, so here is my attempt at it:
- popular, to sort by most popular.
- username: filters by the name of the submitter.
- username: filters by the name of the submitter.
- digitalart/drawings.
- boost: and special:.
- <atom:link rel="next" /> tags to get pre-made URLs for the next page.
- q.
- 9 for most popular, and any other integer for newest.
Some of those query parameters were found by digging through a million URLs archived by the Wayback Machine using their CDX server.
If you have any further knowledge that should be added here, feel free to contact me.
The project has been much more silent since the pandemic, and there has been no news at all on whether or not it will be coming back, so for now this feed is quite inactive. However, the history there is still pretty interesting to read.
Since this is a WordPress blog, you can also get the Atom feed or the RDF Site Summary feed.
So if you have been coding in PHP, ASP.NET, or ColdFusion, you will now know that you have been writing nothing but RSS feeds the whole time!
I couldn't find out what the .sfm file extension truly is, so if you know about it, please let me know.
I guess having that Atom feed was quite predictable, considering that Atom has been standardized in RFC 4287. Also note that RFCs should now be written using a specific XML format, also defined in another RFC; RFC 7991 being the current version. It feels quite strange to me to see such "high-level" formats in an RFC; I am more used to seeing RFCs about lower-level protocols like TCP/IP or BGP.
!post command on his private IRC server to let a user post a link with a title to a webpage. The project kicked off nicely by not having any HTML sanitization, so the trolls (and QA engineers, I guess) that we are on this IRC server sent tons of JavaScript, CSS overrides, iframes, background music, etc. That broke the RSS feed, but sanitization is now properly in place and the feed is usable. If you are curious to see what our little corner of the internet finds on other corners of the internet, feel free to look around and subscribe to the feed.
You can get the feeds either by setting the Accept header to application/rss+xml or application/atom+xml, or by appending .rss or .atom to a username.
For example, you can check out the RSS feed and the Atom feed for my profile on tildegit.org, the Gitea instance hosted by and for the tildeverse.
This is a nice first step, though I feel that user feeds are among the least useful of all the feeds that most common Git platforms provide. As I also maintain Alpine Linux packages, help manage breadpunk.club and manage my own server at home, the feeds that would matter the most to me are the tags or release notes feeds. Those feeds are the most efficient way to be notified of any new releases on most software, and I have opened some issues in the past to ask some maintainers to use Git tags just so I can use the feed.
There is an issue for global feed support, and it is on the 1.17.0 roadmap. I subscribed to it, using the unfortunately email-based GitHub notification system, and will definitely follow it closely.
Most posts are just random automated posts made by bots created to convert RSS feeds to ActivityPub, but sometimes I find some nice things. I got the opportunity to mention HINA at some point, and this week I found a new feed to add to my reader, Grab Free Games. The website's goal is pretty simple: tell you about any Steam games that are temporarily available for free, so that you can add them to your Steam library immediately, then promptly forget about them and never play them. Truly an amazing tool!
If you are more of an IETF fan, you can also get an Atom feed.
Both <managingEditor> and <webMaster> must use e-mail addresses, and the specifications also explicitly state that webMaster is the e-mail address to contact about technical issues regarding the feed; yet most developers and users of RSS feeds and feed readers do not seem to have that in mind. Most feeds do not use those tags, or might not even use valid e-mail addresses. And since most feed readers focus on just getting the user to read some articles, and spit out an incomprehensible error or fail silently when something is wrong with a feed, they do not use those tags to let the user ask for help even when the tags are correctly specified.
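For reference, this is what those two tags look like when they are actually used; the addresses here are obviously made up:

```xml
<channel>
  <title>An example feed</title>
  <!-- RSS 2.0 expects "email (name)" for both of these tags -->
  <managingEditor>editor@example.org (Some Editor)</managingEditor>
  <webMaster>webmaster@example.org (Some Webmaster)</webMaster>
</channel>
```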
I could go and ask for enhancements to error reporting in every feed reader, but that would be extremely exhausting, as most work on open-source projects feels to me (I am definitely not great at communication). Instead, here is a small step that RSS feed developers can take to make the internals of their RSS feed generation more visible and, for users curious enough to read the feed's source, to point directly at the place where they can complain.
One of the approved RSS 1.0 modules, mod_admin, also called the Administrative Module, defines an XML namespace, xmlns:admin="http://webns.net/mvcb/", and two extra tags you can use:
- <admin:generatorAgent>, whose rdf:resource attribute points to the feed generator. This is redundant with the RSS 2.0 <generator> tag, and the W3C Feed Validation Service will complain if you use both tags at once; but with this new tag, you can specify a URI instead of some arbitrary string, which could let a feed reader make a link available more easily.
- <admin:errorReportsTo>, whose rdf:resource attribute should point to somewhere to report issues with the feed. This is usually a mailto: URI, but you could also point it to a contact form over HTTP. This is similar to the <webMaster> tag, but the W3C validator does not complain about a redundancy here, so you can safely use both.

By adding <admin:generatorAgent> to your feed, you could let some random developer (let's say, me) look at your RSS feed generation code and maybe find the bug for you. By adding <admin:errorReportsTo>, a tag name that is more explicit than webMaster, with a clickable mailto: link or a link to a contact form, you can make it easier for curious users and random developers to tell you that something is wrong.
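In a feed, the result could look like this minimal sketch, with the generator URL and the address being placeholders:

```xml
<rss version="2.0"
     xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:admin="http://webns.net/mvcb/">
  <channel>
    <title>An example feed</title>
    <!-- A URI instead of the arbitrary string of <generator> -->
    <admin:generatorAgent rdf:resource="https://example.org/my-feed-generator"/>
    <!-- Where curious readers can complain -->
    <admin:errorReportsTo rdf:resource="mailto:webmaster@example.org"/>
  </channel>
</rss>
```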
It is obviously not that likely that some random user is going to look at the source of the feed when something is wrong, but considering that content syndication over feeds is dying and that most of its remaining users are the tech-savvy ones, it is not impossible.
And if, like me, you are using an XSLT stylesheet as your <?xml-stylesheet?>, you could add a link to report errors with your feed for anyone displaying it in a browser. If you open RSRSSS in your web browser and your browser does not have native support for RSS feeds, you can find this link at the very bottom of the page.
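Hooking up such a stylesheet only takes one processing instruction at the top of the feed; the stylesheet path here is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/feed.xsl"?>
<rss version="2.0">
  <channel>
    <!-- ... -->
  </channel>
</rss>
```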
As mentioned earlier when I talked about the 1.16.0 release, the feeds are accessible either by setting the Accept header to application/rss+xml or application/atom+xml when requesting a user's, an organization's or a repository's URL, or by appending .rss or .atom to the username, repository name or organization name. Some examples:
I hope that we will see the feeds for releases in the next release, so that Gitea adds the one missing feature to make package maintainers happy.
By the way, the RSS feed for the RSRSSS repo could be called the Really Simple RSRSSS Repository Syndication feed, or RSRSRSSSRS.
pandoc. While the regular touting of Markdown, in most of the links shared on this blog, as if it were the most perfect markup language ever annoys me, the suggestions can be inspiring for building your own plain-text systems, and the author encourages using RSS feeds. You can get featured in it as well, through a quick interview over email about how you use plaintext.
I mentioned reaching 200 feeds on IRC, and ~dozens awarded me a badge for my outstanding feedchievement!
I have quite the backlog of feeds to share on here. Feel free to harass me if I don't post one each week, because I have no excuse.
robots.txt file.
To use it within RSS and Atom feeds, you will need to add the namespace to the root tag as usual: xmlns:access="http://www.bloglines.com/about/specs/fac-1.0". You can then add the access:restriction element as a child of the root element, with the relationship attribute set to allow or deny. When the element is not set, allow will be assumed. If the feed had previously set deny, removing the element will still cause aggregators to keep assuming a denial; allow must be explicitly set to restore indexability.
<rss version="2.0" xmlns:access="http://www.bloglines.com/about/specs/fac-1.0">
  <access:restriction relationship="deny" />
  <channel>
    <!-- ... -->
  </channel>
</rss>
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:access="http://www.bloglines.com/about/specs/fac-1.0">
  <access:restriction relationship="deny" />
  <!-- ... -->
</feed>
Note that this is the only case I know of where an RSS extension adds a tag outside of both <channel>
and <item>
.
books
repository.
This Git repo hosts a CSV file exported from dozens' Calibre library. The file is also converted to a recfile and, with my help, to an RSS feed. This was set up after dozens shared some of his collection with the casakhstan, and we got interested in it and any new additions to it.
If you are interested in a book found within this library, feel free to contact ~dozens to request one of the ebooks. You wouldn't download an ebook, would you?
While looking for some sort of authoritative source for the definition of a hypergrid, I found Outworldz, a website full of resources related to OpenSimulator, the server software that runs all of those worlds. Its name seems to be a pun on InWorldz, one of the largest commercial OpenSimulator grids, which ran from 2010 to 2018 before shutting down and seemingly starting over from scratch.
I am quite curious about those old virtual worlds, especially now with all this metaverse bullshit. I browsed around a bit and realized they provide an RSS feed of the LSL scripts shared on this website. LSL, or the Linden Scripting Language, is the scripting language made by Linden Lab for Second Life. Scripting is how most of the virtual worlds come to life; be it enabling automatic transactions, sending messages, embedding YouTube videos to create movie theaters, etc., LSL can become important quite fast if you want to do interesting things in OpenSim. So here it is, a feed that genuinely surprised me, full of interesting content for a community that sounds inactive today, but still definitely exists.
]]>/now
page or /uses
page, this could become the /ideas
page on your personal website.
I made my ideas page into an RSS feed after I rewrote the page so it would be generated from a recfile, so here it is. If you have made your own ideas page, feel free to let me know and I'll feature you in the section at the bottom of the webpage.
]]>For example, feeds that just have pretty images don't take many spoons to process, while blogs with in-depth reflections on some topics will take more time to read. I can skim through some feeds while barely reading the post titles, but some other feeds have items that are actual tasks to complete. I have seen various posts about people saying you should weed out as much as you can from your feedreader because you will never read everything, but the point is not always to read everything.
When I was doing my categorizing, I had been asked on IRC about which categories I use and why, so this post is a more thought-out reply.
I am probably not fully done with categorizing, but I have a pretty good list right now. My current list of categories represents how I use my feed reader pretty well, and over 90% of the feeds have a category. Here's a summary of those categories:
Some of those categories explain why I am not yet sharing an OPML export of the feeds I currently subscribe to; some feeds have a private token embedded in the URL for authentication, or some are not meant to be shared too publicly. Managing a custom OPML export from TinyTinyRSS would be a bit too much work whenever I subscribe or unsubscribe from a feed. Instead, you'll just get the feeds featured in RSRSSS as an OPML file, as I slowly work my way through my subscriptions or other feeds that I find interesting and share them on here.
]]>There are a few more specific feeds if you want just one category of their posts.
]]>I have been logging my dreams on and off for multiple years in my notebooks. I had initially started that as a quest to do lucid dreaming, a quest that I gave up on because reality checks are a really difficult routine to get into and they weren't bringing much of a result. The lucid in lucid dreaming was the original inspiration for my nickname. I kept on logging my dreams after that because I was still interested in remembering my dreams, and I nowadays also use this as an excuse to write at least one line a day in a notebook, since that is a habit I want to keep; when I do not remember a dream, I will still write that I did not remember my dreams.
Getting those random dreams in my feedreader is what got me started in posting my own dream logs on my wiki. I might end up separating them at some point, as I am slowly working my way through all the dreams I have logged over the years in many notebooks, translating them and obfuscating them for public consumption.
My dreams almost always involve me with some family members or friends in a very bizarre situation, but still in keeping with most of the laws of physics, whereas dozens' dreams appear to have a much more malleable world. That might be related to me being much less exposed to fantasy or RPGs and having much less creativity.
]]>dozens, now a recurring character in this feed, summarized me in an original way:
]]>there's an rss devil on my shoulder telling me to make new feeds all the time, and it is lucidiot. on my other shoulder is an rss angel and it is also lucidiot and it is also always telling me make new feeds.
like how in some storylines the joker just wants batman to be the very best batman he can be. lucidiot is the rss joker to my compulsive writing batman.

rssoker: everyone is just one bad day away from making another rss feed
rssatman: you're wrong rssoker
rssoker: HAHAHAHAHAH
This blog is mostly known for its random posts on Microsoft jargon (which Chen calls Microspeak) or various pieces of Windows trivia, such as why Pinball was removed from Windows, and why it cannot come back even though they want to. More recently, a post on a song that made hard drives crash resulted in a vulnerability being reported.
]]>Each status includes an emoji to summarize your current mood or state, and if you know how to use the browser's developer tools, you can make it use any Unicode character you want. You can show off your current status emoji using a badge, and I have put mine among the many others on my tilde.town page.
status.cafe provides a whole bunch of Atom feeds; you can subscribe to a feed of everyone at once or the specific feed of each user as https://status.cafe/users/[username].atom
. For example, here is mine.
While exploring the information superhighway to learn more about other feeds I wanted to post about on here, I also stumbled upon imood, which possibly was a source of inspiration for status.cafe. Statuses are called moods, and have both a personal mood (some sentence that the user writes) and a base mood selected from a set. There are no feeds on this one though, so status.cafe is clearly superior.
]]>I am building a small backlog of posts to hopefully allow me to post somewhat regularly, using something even worse than just writing XML by hand as I usually do in here. I created a database using LibreOffice Base on a whim, just because I wanted to play with that one evening, and ended up inserting about 50 feeds that I wanted to post and creating a form to start writing short descriptions of each feed to later post them.
Today, I somehow conjured the energy to rewrite the XSLT that powers the HTML rendering of this feed, on browsers that do not support RSS feeds natively. It looks more refined, and I have some ideas for future extensions such as supporting enclosures, GeoRSS, Event RSS and more. It now also includes some optional JavaScript that fixes the HTML unescaping issue on Firefox, making the feed much more readable. After finishing a first version of this new theme, I decided to finally post again and announce this possible return.
One of the many reasons why I have so little energy to post is that I feel like most of what I do is meaningless or not interesting to anyone, which is probably to be expected when I explicitly choose to ignore SEO, or just when I work on very niche topics. If you want to help me fight the negative voices in my head, feel free to reach out to me. Even just a single sentence to tell me you are reading me is hugely appreciated.
]]>I have known about 100r for a while, but only follow them by reading this feed because their creations and ideas are often incompatible with my life; I do not live on a boat and do not really enjoy the kind of very minimalist, colorless aesthetic that they follow. Getting those quick monthly reminders of the existence of the concept of permacomputing, and taking a few minutes to watch those people living in what looks like an alternate universe completely separate from mine, can still be interesting in small doses.
]]>It could also have feeds for all posts by a user or all posts within a forum (a group of boards), but having the option of following this messageboard using a feedreader is still quite neat.
There also used to be a vpub instance dedicated to vpub itself at vpub.miso.town, but it appears to be offline now. m15o, vpub's creator, said they are no longer working on it either.
]]>The JTS and its C++ port called GEOS power pretty much every geographical information system under the sun. I don't understand much about the math behind GIS stuff, but this blog showcases some of the new features in the JTS with some simple examples. It can help me suggest improvements at work, or get new ideas for weird projects since I like playing with maps now.
This feed is also available as an Atom feed. ]]>
Multiple RSS feeds are available, to get the PBF version, the BZ2-compressed XML version, or the entire history of the map and not just its current state. Those are documented on the OSM wiki.
I really like the name of this thing. Planet. You can just… download the whole planet. This is how far we've come as a society. Why stop at downloading a car when you can download a planet?
Just as I noticed this, it turns out that the board turned 20 just a few days ago. Happy birthday!
Discovering this also made me find out that Dave Winer is also on the Fediverse, and is just as bitter as he is rumored to be. I guess the old days of the wars between RSS 1, RSS 2 and Atom are still not over.
]]>Probably the most exotic part of this image sharing service is that every image is highly compressed: no more than 400×400 pixels in size, and saved as JPEG with a quality of 5%. This probably makes hosting this service a lot easier in terms of bandwidth and storage space, and makes the images look blurry or less detailed. Most piclog users are status.cafe users, so it's interesting to see the photographic equivalent of the things I see fellow status.cafe users post regularly.
While you can get a feed of every photo from everyone, you can also get feeds for each user, with https://piclog.blue/user-feed.php?id=
followed by the integer ID of the relevant user. You can get this ID by opening their profile, since the same ID will be in the profile URL. Here's m15o's feed for example.
As of posting this, the feeds unfortunately do not use Media RSS or embed the image into the description as an <img>
tag, so you will have to open each item in your browser to view the image with most feedreaders.
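For comparison, here is a hedged sketch of what a piclog item could look like with either approach; the URLs and filenames here are made up, and the media prefix is the usual Media RSS namespace:

```xml
<rss version="2.0" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <item>
      <title>Example photo</title>
      <link>https://piclog.blue/photo.php?id=123</link>
      <!-- The Media RSS way -->
      <media:content url="https://piclog.blue/images/123.jpg" type="image/jpeg" />
      <!-- The plain HTML way, with the tag escaped inside the description -->
      <description>&lt;img src="https://piclog.blue/images/123.jpg" alt="Example photo"&gt;</description>
    </item>
  </channel>
</rss>
```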
The NTSB is the one investigation agency I really must have in ITSB. It might just be the largest agency for transportation safety investigations worldwide, and anyone who ever watched a Mayday documentary or looked into plane crashes has heard of it. They produce the largest amount of reports out of all the agencies I found through ITSB.
Fortunately, they provided an official RSS feed for their released investigation reports. I'm using the past tense though, because they unfortunately decided to shut it down. The feeds were still available for a little while, but they would be completely empty. I have yet to see anyone ever sunsetting a feed properly, by adding a post to warn everyone for a few days before just killing the feed completely, so this issue went unnoticed for a while.
To generate a feed when there is no official one available, I usually just run curl
on a webpage that lists investigation reports, then use pup to select some HTML elements and convert them to a JSON structure, then mess around with said JSON with jq, and finally convert that back into XML using xmltodict. But after looking around on the NTSB's website, I went for a much weirder method.
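As a rough sketch of that kind of scraping-to-feed conversion, here is the same idea in pure Python instead of pup, jq and xmltodict, since the exact selectors and filters always depend on the target page; the page structure, URLs and titles below are all made up:

```python
from html.parser import HTMLParser
from xml.etree import ElementTree as ET

# Hypothetical HTML, as if fetched with curl from an agency's report list.
PAGE = """
<article><a href="/reports/42.pdf">Engine failure report</a></article>
<article><a href="/reports/43.pdf">Runway overrun report</a></article>
"""

class ReportParser(HTMLParser):
    """Collects [url, title] pairs from the links found on the page."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.reports = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.reports.append([dict(attrs).get("href", ""), ""])

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False

    def handle_data(self, data):
        if self.in_link:
            self.reports[-1][1] += data

def build_feed(reports):
    """Turns [url, title] pairs into a minimal RSS 2.0 document."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Scraped reports"
    for url, title in reports:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = "https://agency.example.org" + url
    return ET.tostring(rss, encoding="unicode")

parser = ReportParser()
parser.feed(PAGE)
feed = build_feed(parser.reports)
```

A real version would also set the mandatory channel description and the item dates, but the shape of the pipeline is the same: select, restructure, serialize.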
The NTSB provides a service called CAROL, a tool to search through all the investigation reports and safety recommendations the NTSB ever published. Getting a lot of structured data sounds a lot more interesting than having to parse the scant details I can get from unnecessarily complex HTML pages, so I wanted to use that as my source for my custom feed.
After a lot of experimenting, I ended up writing a separate script that exports 1 year of completed investigation reports as a large JSON file. I could have exported 10 or more years of reports, but that resulted in an extremely large RSS feed that would make most feedreaders blow up, so I only got one year.
I then use a 671-line jq script to process this JSON file into an RSS feed, including as much information as I can within the <description>
so that you sometimes do not need to read the PDF report at all.
This mess results in a feed that is far, far better than any other feed I have in ITSB, especially any official feed. If every webmaster wants to remove RSS and replace it with newsletters, since that's what I gathered from my few attempts at reaching out to those agencies, maybe the real solution is to push for more open data instead. Let the people who know and use RSS make proper RSS feeds without scraping your website…
]]>A while ago, I tried to look for more blogs related to stationery, handwriting, etc., and I found The Cramped. This blog evolved a bit over time, and nowadays the author occasionally shares other people's posts related to writing, notebooks, writing in notebooks, personal knowledge management, etc. This kind of feed is nice to have if I want to discover more blogs! I subscribed to maybe two or three already thanks to it.
]]>A coworker suggested that maybe we should have a standard for french tacos, since many places selling french tacos commit blasphemy by adding veggies in them, sometimes even adding them by default without warning you. I have some experience writing joke RFCs, so that's something to consider.
]]>icbm
XML namespace allows you to specify an ICBM address in either the <channel>
or the <item>
elements, allowing you to relate a location to either the entire RSS feed or a single specific item on that feed.
<rss version="2.0" xmlns:icbm="http://postneo.com/icbm">
  <channel>
    <!-- ... -->
    <icbm:latitude>30.0301</icbm:latitude>
    <icbm:longitude>32.5776</icbm:longitude>
    <item>
      <!-- ... -->
      <icbm:latitude>31.5077090</icbm:latitude>
      <icbm:longitude>-82.3115156</icbm:longitude>
    </item>
  </channel>
</rss>
With this method, you can therefore specify a location where someone may send a nuke if they have been particularly angered by something you published on that feed. Or more commonly, you might want to set a location relevant to the feed, like the location of the tautology club whose blog the feed belongs to, or the location of something mentioned within the feed.
There are other, more recent and more standard methods to refer to geographic coordinates in an RSS feed, and not just specific points. We will go over those some other time.
]]>icbm
namespace got created in a blog post for RSS 2.0, the Semantic Web Interest Group of the W3C devised a Basic Geo Vocabulary that allows for something very similar to ICBM addresses, but that does not require missiles and is integrated into RDF. It also adds the ability to specify an optional altitude, in meters.
This is meant to be used in RDF, so you would normally use this in an RSS 1.0 feed, but as with many other RDF namespaces, nothing really stops you from integrating it into RSS 2.0 or Atom, and many people have done so already.
<rss version="2.0" xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
  <channel>
    <!-- ... -->
    <geo:lat>30.0301</geo:lat>
    <geo:long>32.5776</geo:long>
    <item>
      <!-- ... -->
      <geo:lat>31.5077090</geo:lat>
      <geo:long>-82.3115156</geo:long>
      <!-- It's Moon time. -->
      <geo:alt>384400000</geo:alt>
    </item>
  </channel>
</rss>
Two years later, in April 2005, GeoURL.org, a service that used to allow finding websites by their associated geographical location, introduced the geourl
namespace, adding another duplicate namespace on top of icbm
and geo
. I mention it here too because the W3C validator supports all three namespaces!
You can use it with xmlns:geourl="http://geourl.org/rss/module/"
and the <geourl:latitude>
and <geourl:longitude>
elements. I would however advise against using it as it increases the complexity for feed parser and feed reader developers; prefer the RDF geo
namespace instead, which is more widely known.
And as a last piece of advice, do not mix the icbm
, geo
and geourl
namespaces within the same channel or item, even if you intend to represent multiple coordinates at the same time! There are more complex but more flexible alternatives, which we will see in later posts, that allow you to go beyond a single point.
GeoRSS is a standard that was developed by a mix of geospatial and syndication people and released in 2006 on georss.org. That website is now gone, but of course, the Internet Archive's got our backs. In 2017, that standard got republished by the Open Geospatial Consortium, the gods of geospatial standards, as OGC 17-002r1. I really like that quote from that version of the standard:
The initial goal for designing and documenting GeoRSS was to keep the encoding of geography on the Web from fracturing into various encodings the way RSS ended up, with multiple similar implementations.
Considering that the three namespaces we saw in the previous post appeared before GeoRSS, and that georss.org mentions the W3C Geo namespace, it doesn't seem like they were off to a good start. However, the few remaining feeds that I know of that include geospatial information use GeoRSS only, so I guess they won in the end. The fact that only geospatial experts would be using geospatial coordinates within RSS feeds, and that most GIS software only supports GeoRSS or calls all of its RSS support GeoRSS, must have helped.
GeoRSS defines two so-called "serializations", called Simple and GML. In this post, we will only consider GeoRSS Simple; GML requires us to delve deeper into the mess that is geospatial information, so we'll see that at some other point in time. The goal is to have most feed producers, those that are not geospatial experts, use GeoRSS Simple, which is simple enough to be understandable by them, and have geospatial experts use GML, which they probably prefer. You can convert from GeoRSS Simple to GeoRSS GML, but not necessarily the other way around.
Here's an example of yet another way to represent a point in an RSS feed, but using GeoRSS Simple this time:
<rss version="2.0" xmlns:georss="http://www.georss.org/georss">
  <channel>
    <!-- ... -->
    <georss:point>18.5166670 33.6666670</georss:point>
    <item>
      <georss:point>18.5166670,33.6666670</georss:point>
    </item>
  </channel>
</rss>
Where the other namespaces use two distinct tags to represent both coordinates of a point, GeoRSS uses only one tag, which is defined as a list of real numbers. Whether you use one tag or two does not matter much with GPS coordinates, but it does start to matter when you care a lot about altitude or work with other coordinate systems. GeoRSS Simple requires WGS84 (GPS) coordinates and represents elevation separately, so it won't ever matter in this serialization, but with GML, it will!
You may also note that in the first point
, I used a space to separate both coordinates, whereas in the second one I used a comma. The official XSD for GeoRSS Simple defines the point as holding a list of doubles (decimal numbers stored in 8 bytes) using the XSD <xs:list>
element, which defines a list of items as being space-separated only. But section 7.3 of the OGC standard states that a comma is also acceptable, so anyone wishing to parse GeoRSS will have to take that into account.
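Any GeoRSS consumer therefore has to accept both separators. A minimal sketch of such a tolerant parser, in Python:

```python
def parse_georss_point(text: str) -> tuple[float, float]:
    """Parse the text content of a <georss:point> element.

    Accepts coordinates separated by whitespace (per the XSD's
    space-separated xs:list) or by a comma (allowed by section 7.3
    of the OGC standard), and ignores surrounding whitespace.
    """
    parts = text.replace(",", " ").split()
    if len(parts) != 2:
        raise ValueError(f"expected 2 coordinates, got {len(parts)}")
    latitude, longitude = map(float, parts)
    return latitude, longitude
```

Both forms from the example above parse to the same pair of numbers.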
<rss version="2.0" xmlns:georss="http://www.georss.org/georss">
  <channel>
    <item>
      <!-- ... -->
      <georss:point>-33.8735580,151.2344385</georss:point>
      <georss:featureName>Boat Syndication Australia</georss:featureName>
      <georss:featureTypeTag>shop</georss:featureTypeTag>
      <georss:relationshipTag>has-nothing-to-do-with</georss:relationshipTag>
      <georss:elev>5.25</georss:elev>
      <georss:floor>0</georss:floor>
      <georss:radius>4.5</georss:radius>
    </item>
  </channel>
</rss>
Here's a description of all of those intriguing optional elements:
<georss:featureName>
The name of the feature located at those coordinates, like the name of a shop, a street or a city.
<georss:featureTypeTag>
The type of that feature: whether it is a mountain, a country, etc. When unset, it defaults to location
. There are no other defined values, as the intent was to let the community form its own taxonomy.
While neither the original website nor the OGC standard specify any restriction, the official XSD typed it as a QName
, meaning an XML qualified name, or anything you can use as the name of an XML element, with or without a namespace prefix. This means you can use something like sandwich:blt
, but not food:sandwich:blt
because only one colon is allowed, and you cannot use spaces. All examples in both the original website and the OGC standard never use spaces, instead preferring kebab-case. So you should probably limit yourself to a QName, or maybe just to kebab-case.
<georss:relationshipTag>
The relationship between the channel or item and the feature: for example, a feed of photos where each photo was seen-at
, or a feed of drawings where each drawing was inspired-by
. The default is is-located-at
. The relationship is always from the channel or item to the feature, not the other way around. It has the same confusing definition as a QName as featureTypeTag
<georss:elev>
The elevation of the feature in meters, relative to sea level as with GPS elevations.
<georss:floor>
The floor number of the feature inside a building.
<georss:radius>
A radius in meters around the geometry, turning a point into a circular area or buffering a larger shape.
Next time, we will finally do more than just point at things, and use the other geometry types that GeoRSS offers.
]]><georss:line>
A line, defined as a list of at least two coordinate pairs.
<georss:box>
A bounding box: a rectangle defined by its lower corner, then its upper corner.
<georss:polygon>
A closed shape, defined by at least four coordinate pairs, with the first and last pairs being equal.
Here are some examples of each of those tags:
<!-- Part of Haaldersbroekerdwarsstraat, a long street name in the Netherlands -->
<georss:line>52.4718867,4.8277792 52.4721926,4.8275892 52.4729501,4.8270419</georss:line>

<!-- Some random grass not so far away from there -->
<georss:box>52.5662344 4.7976189 52.5676983 4.8013674</georss:box>

<!-- A building called ESPRESSO at the Very Large Telescope,
     because astronomers need coffee to go through the night -->
<georss:polygon>
  -24.6273416,-70.4045081 -24.6273922,-70.4044894 -24.6274264,-70.4046014
  -24.6274789,-70.4045820 -24.6276119,-70.4045330 -24.6275341,-70.4042780
  -24.6274634,-70.4043041 -24.6274763,-70.4043463 -24.6273109,-70.4044074
  -24.6273416,-70.4045081
</georss:polygon>
You can only specify one of these geometries at once, along with all the optional elements that I described in the previous post. Those new shapes enable some new interesting use cases for feeds:
A feed of hiking or cycling routes, with each track as a <georss:line>.
A solar eclipse feed, with a <georss:line> to show where the shadow of the eclipse will be moving on the Earth's surface. Maybe with a <georss:radius> as the radius of the shadow to do some buffering and not only show the center of that shadow.
A weather feed, with a <channel> including the location of the weather station, or the area where the reports apply.
A feed of notable buildings, with a <georss:polygon> around the circumference of each featured building.
A feed of detected signals, each as a <georss:point> with a <georss:radius> to show the precision, or as a <georss:polygon> showing a triangle of the receivers that detected the signal.
A public transit alerts feed, with a <georss:point> when a stop is skipped or moved, a <georss:line> when a line gets rerouted, etc.
And since we are now done with GeoRSS Simple, we'll look at GeoRSS GML next time.
]]>GML is an enormous XML schema designed to express any geospatial data under the sun. Geometries, features, coordinate reference systems, units of measurement, time, sensor measurements, data re-fetched automatically over the network, assigning coordinates to images, etc.
The language is not meant to be used alone, as supporting all of it is equivalent to implementing nearly every bit of geospatial software out there. Instead, GML profiles are defined, which are subsets of GML that are relevant to your needs, and are then used in application schemas, which define the specific XML format you are using that will contain some of that GML profile in it.
GeoRSS is an application schema using a dedicated GML profile that severely restricts GML so that we don't become too insane. You are limited to four geometries, one fewer than the five we saw in GeoRSS Simple: points, lines, boxes, and polygons. You do have access to some extra options though, and we'll look into that soon enough.
Here is an example I wrote previously for a single point in GeoRSS Simple, but rewritten for GeoRSS GML:
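Reconstructed here as a sketch, using the coordinates from the earlier Simple example and the GML namespace URI from the GeoRSS documentation:

```xml
<rss version="2.0" xmlns:georss="http://www.georss.org/georss"
     xmlns:gml="http://www.opengis.net/gml">
  <channel>
    <!-- ... -->
    <georss:where>
      <gml:Point>
        <gml:pos>18.5166670 33.6666670</gml:pos>
      </gml:Point>
    </georss:where>
  </channel>
</rss>
```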
The only two differences are that there is now a new gml
namespace, and that the georss:point
element has been replaced with a georss:where
element to hold the point defined with GML. And now for the examples of other geometry types, which you would now place inside of the georss:where
:
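As a sketch, assuming the usual GML namespace (xmlns:gml="http://www.opengis.net/gml") is declared on the root element, and reusing coordinates from the GeoRSS Simple examples:

```xml
<georss:where>
  <gml:LineString>
    <gml:posList>52.4718867 4.8277792 52.4721926 4.8275892 52.4729501 4.8270419</gml:posList>
  </gml:LineString>
</georss:where>

<georss:where>
  <gml:Envelope>
    <gml:lowerCorner>52.5662344 4.7976189</gml:lowerCorner>
    <gml:upperCorner>52.5676983 4.8013674</gml:upperCorner>
  </gml:Envelope>
</georss:where>

<georss:where>
  <gml:Polygon>
    <gml:exterior>
      <gml:LinearRing>
        <gml:posList>
          52.5662344 4.7976189 52.5662344 4.8013674
          52.5676983 4.8013674 52.5662344 4.7976189
        </gml:posList>
      </gml:LinearRing>
    </gml:exterior>
  </gml:Polygon>
</georss:where>
```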
<georss:where>
The wrapper holding the GML geometry, taking the place of GeoRSS Simple elements like <georss:point>.
<gml:pos>
A single position: a latitude and a longitude, separated by a space.
<gml:posList>
A space-separated list of coordinates describing multiple positions.
<gml:Point>
A single point, holding one <gml:pos> element to indicate its coordinates.
<gml:LineString>
A line, holding a <gml:posList> to list the coordinates of each point.
<gml:Envelope>
A box, holding a <gml:lowerCorner> and a <gml:upperCorner> to specify its two corners.
<gml:lowerCorner>
The lower corner of a <gml:Envelope>. Its value is the same as a <gml:pos>.
<gml:upperCorner>
The upper corner of a <gml:Envelope>. Its value is the same as a <gml:pos>.
<gml:Polygon>
A polygon, holding a single <gml:exterior> ring. GeoRSS GML forbids any interior rings, since the GeoRSS Simple <georss:polygon> does not support interior rings, so the exterior ring is always alone.
<gml:exterior>
The outer boundary of a polygon, holding a <gml:LinearRing>.
<gml:LinearRing>
A closed line that works like a <gml:LineString>, but there has to be at least four points, and the first and last coordinates must be equal, so that the line string forms a ring.

The OGC standard, which is the only currently active standard, and the original archived pages for GeoRSS Simple and GeoRSS GML do not define any specific element to describe a circle. If you want to represent a circle, you can do so using the <georss:radius> element, which will create a buffer around a point.
However, the XSD defining the GeoRSS GML Profile and the one for GeoRSS Simple both include ways to specify a circle separately. The OGC standard links to those schemas and does not state that they are non-normative, as many other specifications do. That means that in theory, it is completely legal to use them.
<georss:circle>
In GeoRSS Simple, a circle is a single element holding three space-separated numbers: the latitude and longitude of the center point, then the radius in meters.
<gml:CircleByCenterPoint>
In GeoRSS GML, the circle is instead placed within the <georss:where>
element. This should have a <gml:pos>
element to specify the coordinates of the center point of the circle, and a <gml:radius>
element to specify the radius.
<gml:radius>
The radius of the circle. It can carry a uom
attribute to specify the unit of measurement, which is by default m
to represent meters.
uom
The unit of measurement for the radius: m
for meters, cm
for centimeters, [ft_i]
for feet (international definition) or [ft_us]
for U.S. feet. It is highly likely that most systems will only support meters.

The GeoRSS documentation on ArcGIS Online mentions support for circles in GeoRSS Simple, but excludes them from GeoRSS GML. I would therefore advise against trying to use a CircleByCenterPoint
. It is likely that the few GeoRSS implementations out there will only support <georss:circle>
, if they support circles at all.
The W3C Feed Validation Service does not support circles in either its GeoRSS Simple validator or its GeoRSS GML validator.
Circles were probably either added before <georss:radius>
was introduced, or added, then partially removed when someone noticed <georss:radius>
could already do the job. Another possibility is that circles and curves are far less supported by GIS software than linear geometries, so they wouldn't be that usable anyway. This raises interesting questions: what happens if you use a circle, but also add a radius around it? Do you get a larger circle? Is the radius ignored? Does it become an approximation of a circle as a polygon, as is common with GIS software that doesn't support circles? Those questions will definitely remain unanswered, as with most things about RSS, the answer of most organizations nowadays will be "who cares?". This is why we can't have nice things.
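For what it's worth, the polygon approximation that GIS software commonly falls back to can be sketched like this; it naively treats the Earth as a sphere and the area around the point as flat, which is only a reasonable assumption for small radii:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS 84 semi-major axis, in meters

def circle_to_polygon(lat, lon, radius_m, segments=16):
    """Approximate a circle as a closed ring of (latitude, longitude)
    pairs, usable as the contents of a <georss:polygon> element."""
    ring = []
    for i in range(segments + 1):  # one extra vertex closes the ring
        angle = 2 * math.pi * (i % segments) / segments
        dlat = math.degrees(radius_m * math.cos(angle) / EARTH_RADIUS)
        dlon = math.degrees(radius_m * math.sin(angle)
                            / (EARTH_RADIUS * math.cos(math.radians(lat))))
        ring.append((lat + dlat, lon + dlon))
    return ring

# A 100 m circle around the ESPRESSO building from an earlier example.
ring = circle_to_polygon(-24.6273416, -70.4045081, 100)
polygon_text = " ".join(f"{la:.7f},{lo:.7f}" for la, lo in ring)
```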
If I ask you to give me something that will precisely point at some place, any place in the world, including the middle of the ocean, you’re likely to give me so-called GPS coordinates. Those are actually WGS 84 coordinates. They represent a location on Earth, assuming the Earth is a perfect ellipsoid (a sphere, but slightly squished at the poles), whose center is the planet’s center of mass. But there are a lot of other ways to produce coordinates. Even now that GPS coordinates are ubiquitous, many other systems are still in use, for historic reasons, due to technological constraints, or for increased precision.
Let’s start with a simple one. How do you represent coordinates in three dimensions? We did see earlier that GeoRSS has <georss:elev>
to set the elevation in meters, but what if you are trying to represent a line that is sloped? For example, you are making your own Strava and want to show that you went up and down a hill. Your track won’t be perfectly at sea level, it will have an altitude that changes with each point. You need some way to include the altitude along with the latitude and longitude. In a geospatial database, the typical GPS coordinate system in use is numbered EPSG:4326; store this number next to your coordinates and the database knows you are speaking in WGS 84. But if you want to add a third coordinate for altitudes, you will have to use a different version of the system numbered EPSG:4979. It’s the same as GPS, but there’s a third axis for a height, starting from the ellipsoid defined by WGS 84, and measured in meters.
Let’s go further. With all the hype around a bunch of space agencies trying to build a moon space station and two moon bases and sending rovers and all, we have to start thinking about an equivalent of GPS for other planets, and a way to refer to places on any planet. Fortunately, space agencies have had this problem a long time ago, and they have their solutions.
If you define your own geographic coordinate system, you can make your own ellipsoid to describe the shape of the planet, set the origin point (the 0° north 0° east point), and define how altitudes are expressed if you want to have a third dimension. On top of that, you can define a projection to flatten your planet, but that’s a whole other can of worms and I won’t deal with that here. You could define a coordinate system for the Moon, with an ellipsoid that has the size and shape of the Moon, centered on the Moon’s center of mass, and define wherever you want your origin point to be. And you can do the same thing for basically anything, assuming you can somehow trick geospatial databases into bending a spheroid hard enough to fit your needs.
And that’s what the IAU did. Those are the same people who said Pluto isn’t a planet, so I don’t know if you can really trust them, but I haven’t seen any other coordinate system for other planets that was in widespread use within the space industry. There are lots of coordinate systems and projections for planets and moons, including some for Earth because we clearly needed more. For the Moon, you’ll have to use IAU2000:30100, aka Moon 2000. This doesn’t mean the Moon is Y2K-ready, it just means this was adopted by the IAU in 2000. Moon 2000 is defined in a geospatial database like so:
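Reconstructed here from the values discussed below, as the exact identifiers can vary between databases:

```
GEOGCS["Moon 2000",
    DATUM["D_Moon_2000",
        SPHEROID["Moon_2000_IAU_IAG", 1737400.0, 0.0]],
    PRIMEM["Greenwich", 0],
    UNIT["Degree", 0.017453292519943295]]
```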
The PRIMEM
specifies the primary meridian, at 0°; it is called Greenwich even though it definitely doesn’t exist on the Moon, because nobody cares about its name. We also don’t specify anywhere what the actual location of the origin point is, because databases don’t care about that either. The UNIT
specifies that we are using decimal degrees for coordinates, with the long number being the multiplier to convert degrees to radians. Those are almost always present in most coordinate systems.
What matters for the Moon is the SPHEROID
, with its two parameters, the semi-major axis and the inverse flattening. A spheroid is just another name for an ellipsoid.
In a sphere, the semi-major axis is the radius. In an ellipsoid, that would be the largest radius, as opposed to the semi-minor axis. The inverse flattening defines how hard you should squish the sphere to get an ellipsoid, so it allows calculating the semi-minor axis. Here, we have 1737400
as the semi-major axis, which matches the radius of the Moon in meters, and 0
as the inverse flattening, meaning this is a perfect sphere.
Remember how I mentioned in a previous post that GML is designed to represent anything about geospatial data? You can check out the GML representation of Moon 2000 if you wish to be spooked.
So let’s say we have some Moon 2000 coordinates, for example Tranquility Base, at 0.6875°, 23.433333°. How do you put that into GeoRSS?
Since databases don’t care one bit whether what you are doing makes any sense, you could convert directly from Moon 2000 to WGS 84. That would make the database assume that your coordinates are just on a very weirdly-shaped Earth. Since coordinates are in degrees, the size of the Earth doesn’t matter, and the coordinates will remain unchanged after this conversion; maybe with some slight changes to account for the differently-shaped ellipsoid. You are now in some weird place in the Democratic Republic of the Congo.
To do the proper conversion, you will need to do some trigonometry. Your Moon 2000 coordinates and the Moon 2000 spheroid are related to the center of mass of the Moon; WGS 84 is the same for Earth. Knowing the distance between the Earth’s and the Moon’s centers of mass, and knowing the point on the Earth’s surface directly below the Moon at a given date and time, it should be possible to compute the offset in degrees to add to the latitude and longitude of that point to get the position of your target on Earth, as well as its altitude above Earth.
That’s a mess, and you can do something easier than that: just make it someone else’s problem. GeoRSS GML lets you set a different coordinate system using the srsName
attribute. And if you are using any amount of dimensions other than two, you can use srsDimension
as well.
Here is an example of one of the telescopes of the VLT, the same that I mentioned in my post about circles in GeoRSS, but using a third dimension in its coordinates instead of the <georss:elev>
tag:
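Something along these lines (my reconstruction; the telescope's coordinates are approximate, and the namespace declarations are assumed to be on the feed's root element):

```xml
<georss:where>
  <!-- EPSG:4979 is 3D WGS 84: latitude, longitude, ellipsoidal height in meters -->
  <gml:Point srsName="urn:ogc:def:crs:EPSG:9.0:4979" srsDimension="3">
    <gml:pos>-24.627222 -70.404167 2635</gml:pos>
  </gml:Point>
</georss:where>
```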
To specify that I am using EPSG:4979, I am using the srsName
attribute with a URN, specifically an OGC URN, in which def:crs
defines a coordinate reference system. EPSG
says the authority defining this system is the EPSG, 9.0
is the version number of their Geodetic Parameter database, and 4979
is the identifier of the system within that database.
I am also using srsDimension
, which allows specifying how many dimensions the coordinate system has. While this could be guessed from the coordinate system, it lets feed parsers and validators know that they should expect coordinates with this number of dimensions without having to know about coordinate systems, which can simplify implementations. Perhaps you can just send the srsName
verbatim to some other software library specialized in coordinate systems.
And here is Tranquility Base! Since the IAU2000
coordinate systems and projections do not have a URN, I am instead using a URL to the GML definition of the coordinate system I want.
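In GeoRSS, that could look like this (a sketch using the Tranquility Base coordinates from earlier; namespace declarations assumed to be on the root element):

```xml
<georss:where>
  <!-- No URN exists for IAU2000:30100, so point at its GML definition instead -->
  <gml:Point srsName="https://spatialreference.org/ref/iau2000/30100/gml/">
    <gml:pos>0.6875 23.433333</gml:pos>
  </gml:Point>
</georss:where>
```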
srsName
The spatial reference system used for this geometry. This should be either a URN for a common system, for example urn:ogc:def:crs:EPSG:<version>:<identifier>
, where <version>
is the version number of the EPSG database of spatial reference systems, and <identifier>
is the number of the system. For EPSG:4979
, you could use urn:ogc:def:crs:EPSG:9.0:4979
. For an SRS that does not have a URN, or has a custom definition, you can use a URL that points to the definition of this SRS in GML. For IAU2000:30100
, you could use https://spatialreference.org/ref/iau2000/30100/gml/
.
The W3C Feed Validation Service allows this attribute, but does not perform any validation on its value.
srsDimension
The number of dimensions of this spatial reference system. Since the default is the 2-dimensional WGS 84 (EPSG:4326), you will always need to set srsName
along with this. This is always implied by the system you are using, but it makes your data easier to validate, since a GeoRSS validator does not have to know how the SRS is defined, or understand the concept of an SRS, to be able to tell if you put the right number of coordinates in your data.
You can set both of these attributes on <gml:Point>
, <gml:LineString>
, <gml:LinearRing>
, <gml:Envelope>
, <gml:Polygon>
or <gml:CircleByCenterPoint>
. You can also set these directly on <gml:pos>
and <gml:posList>
, but the XSD for the GeoRSS GML Application Profile says that "It is expected that this attribute will be specified at the direct position level only in rare cases".
While the W3C Feed Validation Service supports this attribute, it only validates that it is a valid positive integer, not that it matches the specified SRS, or that the coordinates specified in the geometries match this attribute. It also does not allow this attribute on <gml:pos>
or <gml:posList>
.
The NHC provides a myriad of feeds, with an RSS button available on the header of every webpage, but the list of feeds it links to is quite hard to read. Feed autodiscovery is supported, with 11 of their feeds listed as <link />
tags. Among this hodgepodge of feeds, you'll find:
That's a lot. Most of these feeds are divided by region (Atlantic, Central Pacific or East Pacific, per WMO conventions), and by "storm wallet". A storm wallet is a large binder, or collection of binders, into which forecasters used to archive all of their data once each cyclone dissipated, numbered 1 to 5 to match the maximum advisory level reached. In the case of these feeds, each storm wallet effectively corresponds to the current advisory level for the storm. Some feeds also have versions in Spanish, updated by their Puerto Rico office when they feel like it.
To make it slightly easier for feed aficionados (afeedcionados?) to figure out what they might be interested in, I wrote a terrible script to build an OPML file listing every available feed. You can access it and add it to your feed reader here; feel free to remove the likely numerous duplicates from your reader afterwards.
Note that some of these feeds include a <gml:Point>
element in the items describing weather systems, but it isn't wrapped within a <georss:where>
element, making those feeds invalid. This strangeness is what made me have a deeper look into GeoRSS in the first place, leading to the series of articles I posted in the last few weeks.
With the help of the NHC's example files, published solely to help developers work with their feeds, and their GIS RSS feeds documentation page, I cobbled together an XSD to better document this namespace.
Note that in this example, I am using the xsi:schemaLocation
attribute to tell any XML schema validators where the XSD for the nhc
namespace is located. This can help you if you are using an XML editor to write your RSS feeds, or want some automatic validation of the validity of your feeds with namespaces and features that go beyond the W3C Feed Validation Service. Let's have a look at all those new XML elements:
<nhc:Cyclone>
<nhc:center>
<nhc:type>
<nhc:name>
<nhc:wallet>
The storm wallet: Back before hurricane forecasting became computerized, all of the hurricane data was stored in binders, called wallets. There are five wallets for each of the three areas of responsibility of the NHC and the CPHC.
Storm wallets are numbered with two letters representing the area of responsibility, followed by a digit from 1 to 5 matching the storm advisory number. The two-letter codes for areas of responsibility are:
<nhc:atcf>
Storm identifier in the ATCF software. This is the software used for hurricane forecasting ever since it became computerized. It can be used to find the raw data from that software on the NHC's public file server.
ATCF IDs begin with a two-letter code for the area of responsibility, followed by a two-digit storm number and the four-digit year in which the storm occurs. The two-letter codes for areas of responsibility are:
AT
code for storm wallets.

Storm numbers 01 to 30 are supposed to be unique storm numbers per season. Storm numbers 80 to 89 are used for training purposes and should be ignored when trying to process real ATCF data. Storm numbers 90 to 99 are areas of interest to forecasters that may not actually be storms and may be reused in the same season.
I recommend using storm numbers between 80 and 89 if you want to mess around and create fake storms, since those are explicitly designated as training or testing data that should be discarded. Also note that storm numbers 31 to 79 are not assigned, and that the numbering scheme assumes there will never be more than 30 storms in one year. I'm sure climate change will fix that.
<nhc:datetime>
%I:%M %p %Z %a %b %d
. The commonly used timezones are CDT on the Atlantic reports, PDT on Eastern Pacific reports, and HDT on Central Pacific reports.<nhc:movement>
The direction and speed of movement of the storm, expressed as a cardinal direction and a speed, usually in the form [direction] at [speed] mph
. The direction can be a cardinal (N, S, E, W), intercardinal (NE, SE, SW, etc.) or secondary intercardinal direction (WNW, ESE, etc.). Speeds are non-negative integers, and always in miles per hour.
Note that this is a manually written value, not necessarily intended for machine consumption, and that nothing prevents other values from being set. Other known values include Stationary
and Nearly stationary
.
<nhc:pressure>
[pressure] mb
.<nhc:wind>
[speed] mph
.<nhc:headline>
…
, U+2026).

All of the child elements of <nhc:Cyclone>
are required.
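Put together, an item using this namespace might carry something like the following. All values here are invented for illustration, and the exact format of <nhc:center> is my assumption (latitude, longitude):

```xml
<nhc:Cyclone>
  <nhc:center>27.5, -82.3</nhc:center>
  <nhc:type>HURRICANE</nhc:type>
  <nhc:name>Ian</nhc:name>
  <!-- Wallet AT4: Atlantic area of responsibility, advisory level 4 -->
  <nhc:wallet>AT4</nhc:wallet>
  <!-- ATCF ID: Atlantic basin, storm 09 of the 2022 season -->
  <nhc:atcf>AL092022</nhc:atcf>
  <nhc:datetime>10:00 AM CDT Tue Sep 27</nhc:datetime>
  <nhc:movement>N at 10 mph</nhc:movement>
  <nhc:pressure>952 mb</nhc:pressure>
  <nhc:wind>120 mph</nhc:wind>
  <nhc:headline>...Dangerous hurricane approaching the coast...</nhc:headline>
</nhc:Cyclone>
```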
And as with most of the XML namespaces that I showcase on this feed, I added support for it on the XSLT that allows this feed to be displayed on most modern web browsers without RSS support. View this post in your browser and admire the additional hurricane information available!
]]>In the middle of an IRC conversation, I mentioned how I was only reading the blog articles of my friends that I spot in my feedreader, right after mentioning I read 40 of brennen's posts. He proceeded to build an Atom feed with every single post from his website all the way to November of 2020, which is as of writing still available here. If you want to stress test your feedreader, or how much you like to read blog posts, add this feed. Don't expect it to be updated though since it was generated manually just once.
I went through every single one of the 1960 entries of this special feed in just one month, and I read the other few dozen posts that were posted in the years since. It was really fun to follow along as ~brennen grows up. He told me he thought the older entries were embarrassing, and I can understand that, since I also feel shame at things I put up online when I was younger, most of which I have since deleted. But going through all of his posts was fascinating. I wasn't laughing at young him or thinking any less of present him. I was just watching someone grow up a decade earlier than me, in a different country, with a different culture. I believe there is some great historical value in this online diary, just as historians study the past by reading diaries. I hope ~brennen carries on with this great undertaking and continues shoving random tidbits of his life into this website. This is the World Wide Web at its finest: humans just being human.
]]>I could not find it anymore, but a while ago, I read a blog post about someone watching someone else use a PSP to access GOV.UK on some free Wi-Fi to do whatever business you might have on your government's website. They used that as an example of how good Web design allows accessibility, even to people whose only device might have an incredibly limited browser and who still need to fill out government forms online. My own experience with browsing the web on a PSP teaches me that accessing any website nowadays is extremely difficult, but I am willing to believe that blog post because GOV.UK's design sounds like it could actually fit on a PSP, or at least still be readable.
GOV.UK's design system causes the website to often be listed on lists of "brutalist websites", due to the design being all about clearly displaying what people are looking for, unlike what most heavily monetized blogs or most web apps do now.
But we're not here to talk about website design of course. Another interesting and much more relevant part of the UK government website is that they have feeds, and a lot of them. Integrating the UK transport accident investigation branches into https://tilde.town/~lucidiot/itsb/ was trivial, just pick the right filters and get your tailored Atom feed. I have started to randomly stumble upon the feeds of other UK public bodies, and I think I'll have to soon spend more time trying to list all the feeds they have because there's a lot to discover. A lot of feeds, of XML namespaces, of relationships to European projects, and probably more.
Let's just start with this rather simple feed: you can get updates on the current national terrorist threat level set by the MI5. I was both surprised at the fact that that's a feed, a feed that only gets updated at most twice a year, and at the fact that they have a separate threat level set for Northern Ireland. I'm fairly sure having a separate Vigipirate level in France for Corsican independentists would just be a self-fulfilling prophecy, causing them to attack more.
I naively hope the UK someday stops all its political bullshit, but only so they can keep feeding me more feeds, and inspire other countries to do the same.
]]>I like using RSS feeds as a means of getting not just blog posts. Getting warnings about delays or changes on a commute over RSS sounds really nice. Getting the kind of information that people think they need phone notifications for, or the kind of information companies want you to think you need a mobile app full of data collection for. And most importantly, getting it all into a system designed to handle a large amount of information, to sort it, to filter it, to display it conveniently, instead of just a sort of tray that doesn't fit more than a few notifications.
This reminds me of a BlackBerry Q5 I used before I switched to OnePlus. BlackBerry 10 is built on QNX rather than Android, though it can run Android apps, and its notification center was quite nice and could clearly handle a large amount of notifications better than Android or iOS' default interfaces. I guess this is just what happens when you design something to be used for more than just doomscrolling and swiping left or right on random faces.
Anyway, since I like to get all the interesting feeds neatly organized into even more XML, I wrote a script to generate an OPML file for all the Chicago Transit Authority feeds. Since they provide feeds for both a single line or category of alerts and for all at once, you'll get duplicates if you import that, but at least you will know about all the available feeds.
]]>These feeds are quite strange. They use non-namespaced extensions to RSS, which are invalid but will not break most feedreaders, to give some metadata on each item:
<reference>
<guid isPermaLink="false">
.<road>
M20
.<region>
<county>
<latitude>
<georss:point>
or a <gml:pos>
or a <icbm:latitude>
or a <geo:lat>
…<longitude>
<georss:point>
or a <gml:pos>
or a <icbm:longitude>
or a <geo:long>
…<eventStart>
<ev:startdate>
from the RSS 1.0 Event Module.<eventEnd>
<ev:enddate>
from the RSS 1.0 Event Module.<overallStart>
<eventStart>
. For road works that start before the event occurs (for example, road works that only close a road during a given time of day), this will be a different date.<overallEnd>
<eventEnd>
. For road works that end after the event occurs (for example, road works that only close a road during a given time of day), this will be a different date.

Obviously, not a single feed reader will support these tags.
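For reference, an item using these extensions might look something like this. The values and the date format are my own guesses, based only on the descriptions above:

```xml
<item>
  <title>M20 eastbound: carriageway closure for road works</title>
  <guid isPermaLink="false">GUID-123456</guid>
  <!-- Non-namespaced extension tags start here -->
  <reference>GUID-123456</reference>
  <road>M20</road>
  <region>South East</region>
  <county>Kent</county>
  <latitude>51.25</latitude>
  <longitude>0.65</longitude>
  <eventStart>2023-06-01T20:00:00</eventStart>
  <eventEnd>2023-06-02T06:00:00</eventEnd>
  <!-- The works close the road only overnight, so the overall period is longer -->
  <overallStart>2023-05-01T00:00:00</overallStart>
  <overallEnd>2023-07-01T00:00:00</overallEnd>
</item>
```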
A larger issue, however, is just the sheer number of items. Adding this OPML into a feedreader made it pull over fourteen thousand items. Now of course there are a lot of duplicates, since there are feeds by road, by area, and for the whole network, as well as for incidents, roadworks, or both at once. The everything-everywhere firehose still has three thousand items, with tons and tons of roadworks everywhere.
Just the incidents-everywhere feed gets a new item anywhere from every few minutes to every few seconds, which is the fastest update rate I have ever seen on a feed. I guess those feeds really can only be used by software for further processing.
]]>The first OPML file that got generated automatically was for the feeds of the National Hurricane Center. It initially used a shell script that combined some JavaScript code via Node.js and a call to oq, a wrapper around jq that can convert between YAML, JSON and XML.
The JavaScript code was retrieving the NHC's RSS feeds list page and parsing it using a regular expression. It would then generate a JSON representation of an OPML file, which gets converted to XML by oq, almost as described in this nearly 14-year-old article. That's how I was used to generating feeds within my ITSB project.
But that did not mean I was really happy with this. I do not like having a lot of dependencies in my projects, particularly those that can be heavy or restrictive in terms of CPU architectures. Ideally, I would like to be able to do almost everything on Windows XP, since one of my many other niche interests is in older Windows systems.
While going through the XML-based standards category on Wikipedia to look for potentially interesting namespaces for RSS feeds, I stumbled upon XProc. XProc is an XML schema that lets you define pipelines, particularly to process XML data. This reminded me of the main XML file of ITSB, which holds both the contents of its homepage and the instructions to generate the hundreds of feeds it serves. A series of XSLT turned that file into either the HTML homepage, an OPML containing all the feeds, or a Bash script that can be executed to generate all of the feeds.
XProc looked like an interesting path to rewrite ITSB entirely and make it go beyond only generating feeds for transport accident investigation reports, which is something I have been thinking about for a while. However, the only mature implementations of XProc appear to be in Java, which is a hard no in most of my projects. Searching for xproc on GitHub led me to xproc.xq, an XProc implementation that relied on a Java implementation of another strange language, XQuery.
XQuery, the XML Query Language, is an extension of XPath, a language that you probably encountered if you have been working with XML for a while. XPath is used within XSLT, and it's often one of the quickest ways to extract something from an XML document with most XML libraries if you don't want to deal with the complexity of converting between the XML paradigm and your programming language's. You can even use it within your browser, with the Document.evaluate()
method of the DOM API.
While XPath is mostly meant to give a succinct way to describe a filter on XML data, XQuery goes beyond that and allows iterating on and processing that XML. You can probably rewrite any XSLT into an XQuery script, and it will probably be easier to read. XQuery allows declaring functions, using variables, etc., and even provides a syntax that reminds me of SQL and SPARQL:
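For an imaginary XML document holding the universe, a FLWOR expression could look like this. This is a sketch of my own; the element names, and the choice of the planet type itself as the unique identifier, are my assumptions:

```xquery
(: For each planet type, count the galaxies that have
   at least three planets of that type :)
for $type in distinct-values(/universe/galaxy/planet/@type)
let $count := count(/universe/galaxy[count(planet[@type = $type]) ge 3])
where $count gt 0
(: "stable" forces ties to keep the same order on every execution :)
stable order by $count descending
return <planet-type id="{$type}" galaxies="{$count}"/>
```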
This example, in an imaginary XML document holding the universe, computes how many galaxies have at least three planets of a given type, then returns those planet types, starting with the type with the most galaxies, and including a unique identifier. The ordering is marked as stable, forcing the XQuery implementation to order any types that have the same count in the same order on every execution. This is quite a complicated expression, but you can do way worse in XQuery. This would probably be doable with an XSLT, but it would definitely be very painful.
The XProc implementation I found in XQuery was using some MarkLogic Server extensions. MarkLogic is a document-oriented database that lets you run either XQuery or JavaScript to query its data, and it is proprietary, so I was definitely not interested in trying to use it. A fun thing to note, however, is that it also provided unit tests via xray. You know you've got a strong query language when you can have a unit testing framework for it!
I went looking for an open-source XQuery implementation that does not require Java, and that could provide enough vendor-specific extensions to replace those used within xproc.xq. I first found Zorba, basically MarkLogic Server but open-source, and it provided an impressive amount of extensions. You can even interact with SQLite databases in it, to use a query language within a query language. Unfortunately, Zorba is an incredible mess to build, so I quickly gave up trying to package it for Alpine Linux and tried to find something else.
I gave up for a little while, then stumbled upon Xidel, an open-source tool written in Pascal that supports applying CSS selectors, XPath queries, XQuery scripts, as well as JSONiq, a JSON equivalent of XPath and XQuery that got merged into XPath and XQuery 3.1. It can make HTTP requests, parse HTML (not just XHTML), submit HTML forms found in pages, interact with the filesystem, run other processes, and more.
It could allow me to merge every single one of the tools that I use in ITSB into just one dependency. And that dependency is just one statically compiled executable that I can easily download automatically in scripts if I need it. And Pascal can be compiled on a lot of platforms. Xidel does work on Windows XP!
I started playing around with it, and very quickly decided to rewrite my NHC OPML generator with it to drop the Node.js, oq and jq dependencies. That is how I ended up with the current implementation of the NHC generator.
I then rewrote the CSS sprites generator. I use a single image for all of the icons displayed in the web version of RSRSSS, and some CSS to take just a portion of the image each time to get one icon at a time. I also took the time to optimize the CSS.
That sent me on a roll, and I started writing a whole bunch of XQuery scripts, including one to use the W3C Feed Validation Service from the command line, and started looking for websites that would give me good OPML files to make.
That's how there has been a wave of OPML files coming in recently. I just want more excuses to write in XQuery! If I find the motivation to work on ITSB again, I will definitely be introducing Xidel in it and start slowly converting everything to it. I have also considered using it as a static site generator, and for a few other projects.
In some email exchanges, I have dubbed Xidel the Overwhelmingly Powerful Mother Of All Legendary XML/HTML/JSON Processor of Doom due to how impressed I was with how versatile it, and XQuery, are.
So, if you find yourself trying to extract data from HTML, XML or JSON documents, do check out Xidel. It might not be as trendy as other tools like jq, but it is a lot more powerful.
]]>Desktoptwo was emulating some mix of the Windows, Mac and Linux desktop experiences, in a web browser. You had a few simple apps available like a notepad, and some storage space to keep your files on their servers. You could also use OpenOffice Writer, Calc and Impress: those would open a remote desktop connection within the browser to the actual OpenOffice running on their servers. It was quite impressive for the time.
At a similar time, I had also played with the Microsoft Office 2007 "Test Drive", powered by Runaware. Using Internet Explorer only, you could for a short amount of time play around with Office 2007 on a Windows XP virtual machine, using Citrix. Runaware also had other demos running, including one for Sage. I remember spending a lot of time just messing with those virtual machines, without knowing anything about the technology that made this possible yet. I was like 10 years old!
A few years later, I had found eyeOS, yet another WebOS, but that you could self-host this time. It had more applications available, but they were less interesting—the whole thing was more meant to be a demo of what apps you could create with it. They had something akin to an app store. I tried both their own demo instance and hosted my own, as I had learnt a bit more about webservers by that time.
I now tend to be wary of large piles of JavaScript like those as the current state of the JavaScript ecosystem tends to make everything worse, so I don't generally care much about the current attempts at making a new WebOS, like OS.js… unless they are fun.
Windows 93 is a pretty well done recreation of the Windows 95 user interface that started in 2014. It has a community that provided plenty of new apps to give the system a lot of content and fill it with weird jokes and "viruses". This is definitely not meant to be a replacement for a desktop OS like Desktoptwo or eyeOS attempted, and just a weird bundle of apps and games that mix the modern web and 30-year-old designs.
Their RSS feed gives some rare status updates about the project. I thought the project was pretty much dead by now, since the feed had gotten no updates at all in a long while, but they announced that they are working on a full rewrite. Version 3 will be open source, and it will be based on Sys42, their own custom framework that skips one of the worst parts of modern JavaScript (Webpack) and allows building any web OS, not just a whimsical Windows 95 clone.
I am impressed by how much effort people sometimes put into these projects.
]]><h1>
tag. That will be your feed's title.<article></article>
tags.<h2>
tag, whose contents should start with an ISO 8601 date: 2023-02-31 Climbing the Reichstag dressed as Spider-Man
. The date will be the Atom entry's date, and the rest will be the post's title.

If you like a verbose specification, well there's one. But the list above is the gist of it really.
Once you've got your page available somewhere online, you can use https://journal.miso.town/atom?url=
followed by your page's URL to get a feed. Or you can use this form if you can't be bothered with URL-encoding the URL, which is definitely understandable. You can also use a validator.
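Put together, the rules boil down to a page like this (a hypothetical example; only the h1, article and h2 structure matters to the converter):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>My journal</title>
  </head>
  <body>
    <!-- The h1 becomes the feed's title -->
    <h1>My journal</h1>
    <!-- Each article becomes one Atom entry -->
    <article>
      <!-- ISO 8601 date, then the entry's title -->
      <h2>2023-05-04 Fed the pigeons again</h2>
      <p>They still do not trust me.</p>
    </article>
  </body>
</html>
```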
There are plenty of HTML-to-RSS tools, including some that use CSS selectors, let you do custom scripting, or are tailored to one specific platform like Twitter. But I really like the simplicity of HTML Journal: Just structure the page in the spirit that HTML5 intends, and suddenly, you've got a feed.
This is the 100th post on RSRSSS, and it recently became three years old. Thanks for following along, and I'll see you next year!
]]><h1>
tag. That will be your feed's title.<ul>
) to list all your pages.<li>
), have a <time>
tag to specify the ISO 8601 date of the post. That will be the Atom entry's date.<a>
tag. The href
should point to the page of your post, and the link's contents will be the post's title.Just like with HTML Journal, there is a longer specification for this.
And just like with HTML Journal, you can use https://blog.miso.town/atom?url=
followed by the blog's URL, or use this form, and you get a validator.
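A minimal index page satisfying the rules above might look like this (the names and paths are invented for illustration):

```html
<!-- The h1 becomes the feed's title -->
<h1>My blog</h1>
<ul>
  <!-- Each list item becomes one Atom entry -->
  <li>
    <!-- ISO 8601 date of the post -->
    <time datetime="2023-05-04">2023-05-04</time>
    <!-- href is the entry's link, the link text is the entry's title -->
    <a href="/posts/pigeons.html">Fed the pigeons again</a>
  </li>
  <li>
    <time datetime="2023-04-01">2023-04-01</time>
    <a href="/posts/hello.html">Hello, world</a>
  </li>
</ul>
```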
But here's an argument that I don't remember ever seeing in this constant bickering: the fact that there are technologies out there that rely on feeds. Moving those away from feeds would be very costly. Here are a few use cases that I found while going down different rabbit holes.
Podcasts are still very much popular. While most people nowadays will be listening to podcasts through some streaming services like Spotify, iTunes, or podcast-specific platforms, podcasts started out just as <enclosure>
tags within RSS feeds, and that's how those platforms fetch them.
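A podcast item is really just a regular RSS item with an enclosure; the url, length (in bytes) and type attributes all come from the RSS 2.0 specification, though the episode itself here is made up:

```xml
<item>
  <title>Episode 1: Hello, world</title>
  <enclosure url="https://example.com/episodes/1.mp3"
             length="23456789"
             type="audio/mpeg"/>
  <guid isPermaLink="false">episode-1</guid>
</item>
```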
Spotify imports podcasts from RSS feeds and has a specification for how it parses them. They also provide a feed if you are hosting your podcast on Spotify directly, so that you can share it elsewhere. All podcast hosting platforms provide feeds.
iTunes relies on feeds. They have their own XML namespace, which is likely to be found on pretty much every podcast feed as that became a de facto standard namespace for podcasts before the podcast namespace showed up.
Google Podcasts feeds on feeds, and also allows subscribing to an RSS feed directly without it having to be submitted to Google.
Obviously, a large amount of feeds are dedicated to news. Every single news website out there has an RSS or Atom feed hidden somewhere. Most of them will be sharing a link to it, either with an RSS icon somewhere on the page or with RSS Autodiscovery, but even if they don't, they still do have a feed. They have to have a feed in order to survive.
How can I say that so confidently? Well, because Google News feeds on feeds, Microsoft Start and MSN.com feed on feeds, Google Assistant feeds on feeds, Flipboard feeds on feeds, and just about any other news aggregator uses feeds.
It's the standard way to aggregate news articles, and a lot of people will start with a news aggregator to get their news, particularly Google News. It has so much weight on how news is accessed that setting news.google.com
as your referrer on HTTP requests can unlock paywalls and that various laws have been drafted to make Google News pay news publishers.
Google has leaned rather heavily on RSS, including for ads. For example, I randomly found this sample feed for an ad tech called Dynamic Ad Insertion, which sounds like it is how soulless marketers can insert ads into livestreams and VOD. Google Shopping also feeds on feeds. Those feeds can be really detailed because of Google Base, yet another product they killed. Google Docs supports feeds. Feeds probably show up in other places too, but since Google's ads are incredibly obfuscated, I don't even want to try and dig deeper into their unhelpful help to find more examples.
Google Base's legacy is also found at other companies: Facebook lets advertisers send them a list of products as RSS and Atom feeds with Google Base attributes.
Real-time information that includes geolocations can be quite important, both in the public and private sectors. Waze for Cities exports data as GeoRSS. A lot of GIS software will support GeoRSS imports. And the GML and KML formats support automatic updates. KML, the format behind Google Earth's data, is supported by the W3C Feed Validation Service for a reason.
Probably the only reason why the .NET Framework has a feed parser is because of feed support in WCF. WCF aims to represent business processes that mix a whole bunch of other apps together, like how hiring someone will require HR approval on some particular app, then payroll needs to be notified, security issues a badge, etc. You draw the diagrams of the processes in Visual Studio, implement every step as a bunch of .NET code that probably calls out to other apps, and then have a WCF server somewhere to handle that stuff.
IBM has an equivalent support in Business Automation Workflow.
Oracle HCM provides Atom feeds so that other apps can be notified of changes on more HR stuff.
Corporate applications are probably among the slowest-moving software out there, so it's very unlikely that those will drop their support for feeds any time soon.
Those few examples are far from an exhaustive list and just show some of the things I have stumbled upon, but they are enough to prove that behind RSS and Atom feeds, there is money. And if a technology is necessary to turn a profit somewhere, then changing it becomes too risky and maintaining it becomes essential to capitalists. Even if the general public completely stops using feeds, they will still be out there somewhere, tools and software libraries will still exist to support them, and nothing will stop anyone from still using feeds.
]]>I wanted to find a transportation themed feed to post here due to the weird occasion. I found the Houston TranStar RSS feeds, for information about the various highways around Houston. I made yet another OPML file to list all of those feeds if you feel like adding them all.
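An OPML file like that one is little more than a list of `outline` elements with feed URLs. A minimal sketch of generating one with Python's standard library (the feed names and URLs below are invented placeholders, not the real TranStar ones):

```python
import xml.etree.ElementTree as ET

# Invented example feeds; a real file would list the actual TranStar URLs.
feeds = [
    ("I-10 Katy", "https://example.org/i10.rss"),
    ("US-59 Southwest", "https://example.org/us59.rss"),
]

opml = ET.Element("opml", version="2.0")
ET.SubElement(ET.SubElement(opml, "head"), "title").text = "Houston highways"
body = ET.SubElement(opml, "body")
for title, url in feeds:
    # type="rss" and xmlUrl are the attributes most readers look for.
    ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)

print(ET.tostring(opml, encoding="unicode"))
```

Most feed readers will happily import a file like this, which is why OPML remains the de facto exchange format for subscription lists.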
So I can now say that I have walked onto a highway to write an XQuery script, while watching a cyclist pass by shouting "The road is ours! The world belongs to us!" This is a very… interesting day.
]]>I particularly like the URL of this namespace.
The specification includes various considerations on ensuring that the entry was indeed deleted by the feed's authors and not someone else, as well as supporting the aggregation of deleted entries from multiple feeds into one. It is also possible to have a separate XML file that only contains the <deleted-entry>
tag, which would have a MIME type of application/atomdeleted+xml
, with an extension of .atomdeleted
. Do check out the RFC if you want to learn more.
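As a rough sketch of what a reader could do with this, here is how tombstones might be detected with Python's standard library; the feed below is invented, but the `at:deleted-entry` element, its `ref` and `when` attributes, and the namespace URI follow RFC 6721:

```python
import xml.etree.ElementTree as ET

# An invented Atom feed containing one RFC 6721 tombstone.
FEED = """
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:at="http://purl.org/atompub/tombstones/1.0">
  <title>Example feed</title>
  <at:deleted-entry ref="tag:example.org,2021:entry-42"
                    when="2021-05-01T12:00:00Z"/>
</feed>
"""

AT = "{http://purl.org/atompub/tombstones/1.0}"

feed = ET.fromstring(FEED)
# ref is the atom:id of the deleted entry; when is the deletion timestamp.
deleted = [(e.get("ref"), e.get("when"))
           for e in feed.iter(AT + "deleted-entry")]
print(deleted)
```

A synchronizing client would then drop the entries whose `atom:id` appears in `deleted`.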
Of course, this is totally unreliable, since any reader that does not support this namespace will ignore it completely, and some potentially evil readers might even highlight the entry as needing to become another example of the Streisand effect. But in some applications, such as automated processing of Atom feeds for synchronizing some data, knowing for sure that something has been removed can be useful.
I searched for code that was referencing this namespace and found that a few YouTube Atom feed parsers do handle deleted-entry
elements, so YouTube itself may emit those tags. See here and here.
Nothing stops you from adding <at:deleted-entry>
elements to RSS feeds as well, just like how other Atom extensions can already be used there. But most feed readers will skip over this namespace in Atom feeds, so it is likely that almost nobody will support this on RSS feeds.
And if you were expecting actual tombstones from this namespace, well do know that RSS feeds for obituaries are a thing. You're welcome.
]]>I recently discovered that it has RSS 2.0 and Atom support, to let you monitor for emails received at your trash address. To activate this, you need to use an address on a domain that supports locking addresses with a password. Those domains are marked with the (PW)
suffix on the homepage. Pick an address that is unlikely to have already been used, and you can then set a password on the web UI. The password is removed after 3 months of inactivity.
Once you have a password, you can access the RSS and Atom feeds using the following URLs:
https://tempr.email/en/rss/[email address]:[MD5 hash of your password]
https://tempr.email/en/atom/[email address]:[MD5 hash of your password]
For example, if your address is nope@gmai.com
and your password is hunter2
, then your feeds will be the following:
https://tempr.email/en/rss/nope@gmai.com:2ab96390c7dbe3439de74d0c9b0b1767
https://tempr.email/en/atom/nope@gmai.com:2ab96390c7dbe3439de74d0c9b0b1767
Note that if you are trying to use md5sum
to compute the MD5 hash, do not forget to use print
or printf
, not echo
, because echo
will add a line break character that will also be hashed.
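To illustrate the pitfall with the example credentials above, hashing the bare password and hashing it with a trailing newline (which is what a plain `echo` pipes into `md5sum`) give completely different digests:

```python
import hashlib

password = "hunter2"

# Correct: hash exactly the password bytes, nothing more.
good = hashlib.md5(password.encode()).hexdigest()

# What `echo hunter2 | md5sum` actually hashes: password plus a newline.
bad = hashlib.md5((password + "\n").encode()).hexdigest()

print(good)  # 2ab96390c7dbe3439de74d0c9b0b1767
print(bad)   # a different digest, which the feed URL will reject
```

With `good` in hand, the feed URL is just string concatenation: `https://tempr.email/en/rss/nope@gmai.com:` followed by the digest.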
That's neat. I like the idea of turning emails into RSS feeds, since it lets you centralize even more things into your feedreader. There are obvious security concerns here with the automatic password removal, or the fact that MD5 hashes are used, which could imply passwords are stored as MD5 hashes, or worse, a plain text offender status. But it doesn't sound like it would be that difficult to secure your feed a little better and get any email inbox as an RSS feed.
]]>