WIP. Brainstorm base32 ideas.

This commit is contained in:
Netscape Navigator 2020-04-04 17:35:07 -05:00
parent 50d4b29f1a
commit ae4c29ad0b
4 changed files with 52 additions and 4 deletions

@ -44,7 +44,13 @@ Eg: `pigeon identity show` becomes `./pigeon-cli show`.
- [X] Don't double-ingest messages. It will screw up indexes.
- [X] 100% test coverage
- [X] Implement pigeon message find-all for peer feed. I will need to add index for `author => message_count`
- [ ] Stop using base64 in favor of base32 with no padding? Simplifies support for legacy systems. Easy to implement.
- [ ] Need a way of importing / exporting a feed's blobs. (see "Bundle Brainstorming" below)
- [ ] Need a way of adding a peer's messages / blobs to bundles. (see "Bundle Brainstorming" below)
- [ ] refactor `Bundle.create` to use `message find-all`.
- [ ] Add mandatory `--from=` arg to `bundle create`
- [ ] Make the switch to LevelDB, RocksDB or similar (currently using Ruby PStore).
- [ ] Change all multihashes to Base32 to support case-insensitive file systems?
- [ ] Rename (RemoteIdentity|LocalIdentity)#public_key to #multihash for consistency with other types.
- [ ] Rename `message find` to `message read`, since other finders return a multihash.
- [ ] Don't allow any type of whitespace in `kind` or `string` keys. Write a test for this.
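For the unpadded-base32 item above, here is a minimal sketch of RFC 4648 Base32 without `=` padding. This is not part of the Pigeon codebase; the names `b32_encode` / `b32_decode` are illustrative only:

```ruby
# RFC 4648 Base32 alphabet (uppercase letters + digits 2-7, no 0/1/8/9,
# which keeps it safe for case-insensitive and legacy filesystems).
B32_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"

def b32_encode(bytes)
  bits = bytes.unpack1("B*")                     # big-endian bit string
  bits.scan(/.{1,5}/).map do |chunk|
    B32_ALPHABET[chunk.ljust(5, "0").to_i(2)]    # 5 bits per output char
  end.join                                       # note: no "=" padding
end

def b32_decode(str)
  bits = str.each_char
             .map { |c| B32_ALPHABET.index(c).to_s(2).rjust(5, "0") }
             .join
  [bits[0, bits.size - bits.size % 8]].pack("B*") # drop trailing filler bits
end
```

Dropping the padding is lossless here because the original byte length is recoverable from the encoded length, which is what makes "no padding" cheap to support.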
@ -68,3 +74,41 @@ Eg: `pigeon identity show` becomes `./pigeon-cli show`.
# Idea Bin
- [ ] Map/reduce plugin support for custom indices?
- [ ] Ability to add a blob in one swoop using File objects and `Message#[]=`, maybe?
# Bundle Brainstorming
Bundle export needs to include two things:
* Messages
* Blobs
Currently we export only messages; we need to export both.
## Idea: Output a directory (Zip it Yourself)
1. Create a `bundle_x/` directory. The name is arbitrary and can be chosen by the user.
2. In the root directory of `bundle_x/`, a single `messages.pgn` file contains all messages.
* All messages are expected to be sorted by depth
    * Messages from multiple authors may be included in a single bundle, but the messages must appear in the correct order with regard to the `depth` field.
3. Blobs are stored in a very specific hierarchy to maintain FAT compatibility:
    * Blob multihashes are decoded from URL-safe Base64
* Option I:
* Blobs are re-encoded into base32 without padding (to support legacy filesystems)
* Option II (not safe for FAT16?):
* Blobs are encoded as Base16:
* `xxxxxxxxxxxxxxxx/blobs/sha256/HEZB3/IFXHL4Y3/H4MWGKZV/42VVMO4S.4VQ`
    * `x` represents a single character in a blob's multihash
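The hierarchy above can be sketched in a few lines. This assumes Ruby's stdlib `Base64` module and a hypothetical `blob_path` helper (not part of Pigeon); it follows Option II, re-encoding to Base16 and splitting into 8-character segments so every path component fits legacy 8.3 filename limits:

```ruby
require "base64"

# Hypothetical helper: decode a URL-safe Base64 multihash, re-encode it
# as Base16, and split it into 8-character directory segments.
def blob_path(urlsafe_b64_multihash, root: "bundle_x")
  raw = Base64.urlsafe_decode64(urlsafe_b64_multihash)
  hex = raw.unpack1("H*").upcase          # Base16, per Option II
  File.join(root, "blobs", "sha256", *hex.scan(/.{1,8}/))
end
```

A 32-byte SHA-256 digest becomes 64 hex characters, i.e. eight 8-character path segments under `blobs/sha256/`.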
# Why Base32?
I want to support things like:
* Hosting bundles on file servers (SFTP, HTTP, etc.)
* Ingesting blobs on constrained devices (e.g. FAT16 filesystems, retro machines)
To do that:
* Base16 (hex) creates blob filenames that are too long for FAT16 and are visually awkward.
* Standard Base64 uses `+` and `/`, which clash with URL paths when hosted on a web server.
* URL-safe Base64 is mixed-case, so names collide on case-insensitive file systems.
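The length tradeoff is just bit arithmetic. For a 32-byte SHA-256 digest with unpadded encodings:

```ruby
digest_bits = 32 * 8                  # SHA-256 digest: 32 bytes = 256 bits
hex_len = (digest_bits / 4.0).ceil    # Base16: 4 bits/char -> 64 chars
b32_len = (digest_bits / 5.0).ceil    # Base32: 5 bits/char -> 52 chars
b64_len = (digest_bits / 6.0).ceil    # Base64: 6 bits/char -> 43 chars
```

So Base32 costs only 9 characters over Base64 while staying single-case and filesystem-safe, which is the tradeoff argued for above.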

@ -45,7 +45,7 @@ module Pigeon
      read { store[MESG_NS].fetch(multihash, false) }
    end
-   def find_all(author = Pigeon::LocalIdentity.current.public_key)
+   def find_all(author)
      # TODO: Ability to pass an author ID to `find-all`
      store = Pigeon::Storage.current
      all = []

@ -138,9 +138,12 @@ module Pigeon
    desc "find-all", "Find a pigeon message in the local DB"
-   def find_all()
+   def find_all(author = Pigeon::LocalIdentity.current.public_key)
      # TODO: Ability to find-all messages by author ID
-     puts Pigeon::Storage.current.find_all.join(Pigeon::CR) + Pigeon::CR
+     puts Pigeon::Storage
+       .current
+       .find_all(author)
+       .join(Pigeon::CR) + Pigeon::CR
    end
    desc "last", "Grab your last message. INTERNAL USE ONLY"

@ -70,7 +70,8 @@ RSpec.describe Pigeon::Storage do
        "me_myself_and_i" => Pigeon::LocalIdentity.current.public_key,
      }),
    ]
-   results = Pigeon::Storage.current.find_all
+   me = Pigeon::LocalIdentity.current.public_key
+   results = Pigeon::Storage.current.find_all(me)
    expect(results.length).to eq(3)
    expect(msgs[0].multihash).to eq(results[0])
    expect(msgs[1].multihash).to eq(results[1])