Saturday, May 13, 2017

What's wrong with bittorrent and what can we do about it? Vision of a next-gen p2p filesharing system.

In this short writeup I review the history of filesharing services, highlight the weaknesses of the currently most widely used filesharing protocol (bittorrent) and propose directions for improvement.
Intended audience: anyone with an interest in p2p filesharing.



What's wrong with bittorrent and what can we do about it?
So, why does bittorrent suck? Actually, by itself, it doesn't. Bittorrent is a pretty good file transfer protocol.
The first widely successful p2p filesharing application was Napster, which appeared in 1999. It transferred files directly between users' computers, but the list of files was stored on central index servers, which also handled the searches. It comes as no surprise that the company behind it was sued and shut the service down two years later.
Napster's weakness was centralization, its single point of failure. Subsequent developments tried to address that. Still not quite ready to let go of the client-server paradigm, Kazaa and the eDonkey network appeared next. They still had supernodes/servers, but since these were under user control, one could not simply shut them all down overnight.
They had a problem with leechers, though. Since eDonkey relied on users' goodwill to share their upload bandwidth, and Kazaa made the extremely unwise decision of trusting the client to correctly report its contribution to the network, many chose not to bother with sharing back. These p2p solutions worked, but one could do better.
Then came bittorrent. With a single conceptual change, the tit-for-tat algorithm, being a pure leecher became personally unprofitable. You could throttle your upstream and (in a swarm with a high leecher-to-seeder ratio) watch as no one wanted to trade with you, only occasionally gifting you some chunks during optimistic unchoking, while you waited to be noticed by seeders. If, however, you granted the application a reasonable amount of upload speed, a popular download would easily saturate your pipe.
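As a toy illustration of that incentive, here is a sketch of a tit-for-tat-style unchoking decision (heavily simplified compared to what real bittorrent clients do; the peer names and numbers are made up):

```python
import random

def pick_unchoked_peers(peers, upload_slots=4):
    """Simplified tit-for-tat: reward the peers that recently uploaded the
    most to us, plus one random "optimistic unchoke" so newcomers get a
    chance to earn credit. `peers` maps peer id -> bytes received from
    that peer during the last rating period."""
    by_contribution = sorted(peers, key=peers.get, reverse=True)
    unchoked = by_contribution[:upload_slots - 1]
    choked = [p for p in peers if p not in unchoked]
    if choked:
        unchoked.append(random.choice(choked))  # optimistic unchoke
    return unchoked

# A pure leecher only ever receives data by winning the optimistic-unchoke
# lottery; contributing peers are served every round.
swarm = {"alice": 5_000_000, "bob": 3_200_000, "carol": 900_000,
         "dave": 100_000, "freerider": 0}
print(pick_unchoked_peers(swarm))
```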
However, bittorrent entirely omitted (and thus outsourced) one critically important feature of any p2p filesharing system: metadata management. The bittorrent protocol does not specify how end users should work with metadata at all; this was left out as an implementation detail. What appeared next were tracker frontend websites, public and private, each with their own problems.
Public trackers struggle with retention, because users have no incentive, other than pure altruism, to keep sharing files they downloaded long ago. Even if you want to keep sharing, many bittorrent clients are not well suited to handling thousands of torrents simultaneously.
Private trackers more often than not reek of unfounded elitism, they are a pain to get into, and, most importantly, they impose rules which unnecessarily restrict filesharing and leave a surplus of upload bandwidth unused.
So, bittorrent is a good file transfer protocol, but it is not a filesharing solution. A filesharing solution consists of a protocol/software/framework to exchange both data and metadata, and bittorrent takes care only of the first part.

Vision of a next-gen p2p filesharing system.
Metadata is "data about data". The bytestream (file content) which gets interpreted as a certain container type and then split and decoded into audio, video, static images or any other form of media is data. The album name, list of tracks, movie title, its release date and all other data that describes the data is metadata. As a user of any p2p network who wants to download something, first you interface with metadata provider (perform a search), and then extract pointers to the data files themselves and let the application download them.

At the moment there are two models of working with metadata: centralized and distributed.

The most common example of the centralized model is a bittorrent sharing site. Torrents are uploaded by users, approved by moderators under the control of the site admin, and categorized and tagged using a site-local metadata schema; users then visit the site, use its search functions to locate the desired torrents and download them.
Strengths: Centralization helps maintain the quality of both data and metadata. Low-quality data files are either rejected or eventually replaced with better ones, and metadata is properly organized and updated in a timely manner.
Weaknesses: The website itself becomes a single point of failure, under threats ranging from simple funding problems for the site admins to harassment by state police acting on orders from media cartels. If the site is taken down, all the metadata creation effort frequently goes down the drain.

The distributed model is represented by self-contained filesharing solutions such as Perfect Dark or its predecessor Share, or, even earlier, the serverless (DHT-based) eDonkey network.
Strengths: Resilience.
Weaknesses: The task of supplying correct metadata lies completely in the hands of hordes of end users, the uploaders. Instead of a proper schema, the metadata is flattened into simple filename strings, with dumb regex search as the only way to query it. File collections are hacked into the system through archives, with all the downsides that implies. Incorrect metadata is difficult or impossible to fix. Searches can be slow and incomplete.
These problems of distributed networks have, in my opinion, contributed to their smaller share today relative to the bittorrent protocol and its supporting websites.

Wouldn't it be great if we could somehow combine the strengths of both approaches without their weaknesses? But wait...
There are two aspects to metadata in filesharing networks. One is tagging/updating/fixing errors; call it "write access" or "creating/managing metadata". The other is querying/searching/browsing; call it "read access" or "using metadata".
In the centralized model both "read" and "write" metadata access is centralized. The centralized model is good at "creating/managing", okay at "using", and it gets worse with size (site popularity costs money for maintenance and traffic/CDN, and ultimately attracts law enforcement).
In the distributed model both "read" and "write" metadata access is distributed. The distributed model is very bad at "creating/managing", somewhat bad at "using", and is mostly indifferent to size (though higher numbers of active users make it harder to target individual users).

Key insight: one should centralize "write access" and keep "read access" distributed. Details on how to build centralized write access, and how to bring read access over a distributed network up to the level of the centralized model, are below:

The role of the site admin (root administrator for a certain collection of metadata) can be performed by any user with a public-private keypair. Moderator access is granted by signing the moderator's public key with the root key, and is checked by signature verification. Public keys and signatures are broadcast into the network. Anyone can generate a keypair and become a "site admin".
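A minimal sketch of this trust delegation, assuming an Ed25519 signature scheme via the PyNaCl library (the variable names and the bare "signed public key" certificate format are invented for illustration):

```python
from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# Anyone can become a "site admin" by generating a root keypair.
root_key = SigningKey.generate()

# Granting moderator access = the root signs the moderator's public key.
moderator_key = SigningKey.generate()
moderator_cert = root_key.sign(bytes(moderator_key.verify_key))

def is_moderator(candidate_verify_key, cert, trusted_root_verify_key):
    """Check that `cert` is the candidate's public key signed by the root."""
    try:
        signed_pubkey = trusted_root_verify_key.verify(cert)
    except BadSignatureError:
        return False
    return signed_pubkey == bytes(candidate_verify_key)

print(is_moderator(moderator_key.verify_key, moderator_cert,
                   root_key.verify_key))   # True
print(is_moderator(SigningKey.generate().verify_key, moderator_cert,
                   root_key.verify_key))   # False: never signed by the root
```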
Each user can pick an arbitrary number of keys to trust as root keys. For those keys the user keeps a complete local copy of the associated metadata database, together with a log of all updates. Metadata updates are database changesets signed and broadcast by any user, which are then checked and signed (or rejected, by letting them expire over time) by moderators. The owner of the root key (the "site administrator") monitors the published updates, resolves update conflicts, imposes an ordering on them, signs them with the root key and broadcasts them into the network; they are then accepted by the software of the users who chose to trust that root key and added to their local databases. A search is simply a query against this local database. The equivalent of RSS bittorrent feeds would be the ability to set precise file download triggers based on the contained metadata.
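As a sketch of the client side of this flow, here is what applying root-ordered changesets to the local copy and querying it could look like. SQLite, the table layout, the changeset format and the trigger query are all assumptions made for illustration, not part of any specified protocol:

```python
import json
import sqlite3

db = sqlite3.connect("metadata.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS albums (id TEXT PRIMARY KEY, title TEXT, year INT);
CREATE TABLE IF NOT EXISTS update_log (seq INTEGER PRIMARY KEY, changeset TEXT);
""")

def apply_update(seq, changeset_json):
    """Apply one root-ordered changeset, keeping the full log so the history
    can be audited (or forked) later. Signature checks on the changeset are
    assumed to have happened before this point."""
    changeset = json.loads(changeset_json)
    for op in changeset["ops"]:
        if op["op"] == "upsert_album":
            db.execute("INSERT OR REPLACE INTO albums VALUES (?, ?, ?)",
                       (op["id"], op["title"], op["year"]))
    db.execute("INSERT INTO update_log VALUES (?, ?)", (seq, changeset_json))
    db.commit()

apply_update(1, json.dumps({"ops": [
    {"op": "upsert_album", "id": "abc123", "title": "Example Album", "year": 2017},
]}))

# A search is a local query; a "download trigger" is just a stored query that
# the client re-runs as updates arrive, e.g. everything released since 2016:
print(db.execute("SELECT id, title FROM albums WHERE year >= 2016").fetchall())
```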

Why is this an improvement over bittorrent with websites?
- Metadata is public and secure, spread among all users. Because the full log of changes is kept, not even the root key owner can maliciously erase it at will. It is easy to fork if the root starts slacking.
- It relieves end users of local file storage micromanagement. You can stop bothering with sorting downloads into appropriate directories on your hard drive(s) according to your chosen criteria and instead 1) let the software place them automatically based on a metadata-to-directory mapping you decide on, or 2) let the software store files by hash and use the metadata to locate the files you need, instead of relying on the filesystem as an ad-hoc database (because a generic graph can do more than a tree); see the sketch after this list. A FUSE module which plugs into the metadata db and provides a file view overlay is also a good idea.
- It should increase retention of old files by incentivizing users to keep downloaded files available, because TFT would account for any data transferred between peers**, not just data belonging to the same file or file collection (a single torrent). Curiously, "public bittorrent" could easily do this too, but client authors never bothered adding the change for some reason (afraid of decreasing individual swarm performance despite the increase in overall network health?). "Private bittorrent" tries to do almost exactly this (by tracking total ratios), except it does so in a manner that is inefficient and susceptible to forgery, in other words, plainly broken.
**: actually, between the owner groups of end-user keys. You could push a copy of your key to a seedbox, let it do most of the uploading work, and enjoy fast downloads to your home PC because your key would be recognized by peers as a good exchange candidate.
- It allows you to "back up" your downloads by keeping references to the downloaded files and backing up only those (which is megabytes at most, instead of gigabytes to terabytes). You would have to redownload the files after an HDD crash, obviously, but at least that would be automated. Bittorrent can also kind of do this, if you keep the torrent files, but after several years you might find the speeds lacking.
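Here is the sketch referenced in the second point above: a toy content-addressed store plus a metadata-to-path mapping of the kind a FUSE overlay could expose. The store layout and the metadata fields are assumptions, not the behavior of any existing client:

```python
import hashlib
import shutil
from pathlib import Path

STORE = Path("store")

def add_to_store(path):
    """Copy a finished download into the store under its SHA-256 hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    STORE.mkdir(exist_ok=True)
    dest = STORE / digest.hexdigest()
    if not dest.exists():
        shutil.copyfile(path, dest)
    return digest.hexdigest()

def view_path(meta):
    """Map metadata to a human-friendly virtual path, so the directory tree
    becomes a view over the metadata db rather than the source of truth."""
    return Path(meta["artist"]) / meta["album"] / f'{meta["track"]:02d} {meta["title"]}.flac'

# view_path({"artist": "Example Artist", "album": "Example Album",
#            "track": 1, "title": "Opening"})
# -> Example Artist/Example Album/01 Opening.flac
```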

Half-baked ideas:
Separation of content - pure metadata vs. data links.
Pure metadata is metadata describing content without any references to actual files (say, an album or movie title). Data links are, as their name implies, junctions between pure metadata and the space of files (statements of the kind 'there is an instance of "this" movie in the p2p network and it has "that" hash'). One could argue about which files are worth adding to the set of available files: for example, TLMC never includes lossy transformations of the original content (for reasons), while others might find value in an mp3/ogg version of TLMC, and of music in general. There is less disagreement about pure metadata: it is an objective statement about the state of the world rather than a personal preference. Therefore, it makes sense to separate the two.
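A sketch of what keeping the two record types separate might look like (the field names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class PureMetadata:
    """Describes the work itself; no reference to any particular file."""
    work_id: str        # e.g. an album or movie identifier
    title: str
    release_date: str

@dataclass
class DataLink:
    """Connects a work to a concrete instance available on the network."""
    work_id: str        # points into the pure-metadata set
    content_hash: str   # hash of the actual bytestream
    encoding: str       # e.g. "flac" or "mp3 320kbps"; a matter of taste

# Disagreement about whether an mp3 rip belongs in the collection only touches
# DataLink records; the PureMetadata record for the album stays the same.
```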

Some open problems: there are a number of difficulties I don't know how to solve at the moment, and I don't know how much impact they will have on the viability of the whole idea.
Ease of use.
Users would have to download the metadata database, which could grow into the multi-gigabyte range (for example, for music, 1M albums x 1 KB of metadata per album = 1 GB of metadata), and then keep up with all updates. This is not something every casual user would accept and tolerate just to download a couple of songs. It is thus not unreasonable to imagine lightweight clients which store only metadata about locally downloaded data and send search requests to nodes that do store a complete copy, probably in exchange for a small chunk of bandwidth credit. But this adds levels of complexity I'm uncomfortable with and, what's worse, starts pointing dangerously in the direction of central search servers, unless users are explicitly made aware of the personal downsides of choosing the lightweight option (naturally slower searches? would that be enough to deter the majority?).
Degree of coverage.
Suppose there is a database of all touhou music. However, all touhou music is a subset of all doujin music, which in turn is a subset of all music. And if you go one step further, it is a subset of all media. Now, having one giant database of "all media" is clearly impractical because its schema would grow into an enormously complex beast (I might be wrong here, though), and the db itself would get huge. But if you keep the databases split at lower levels you'd either have to duplicate maintenance efforts, or you'd need regular grafts between them, or they'll just desync.
Post-moderation.
One attractive property of a centralized system is (optional) post-moderation. Most users are assumed not to be malicious, so they are allowed to post content that is verified by moderators later. If a fake or otherwise undesirable file is shared, their privileges can be revoked. The significant reduction in sharing latency for all content outweighs the occasional temporary rogue file, and this can be further tuned by requiring a certain level of trust to be established before content posting rights are granted.
It is unclear to me how to implement this in a distributed system, since metadata updates do not commute and there is no central point that can serve for automated conflict resolution. Maybe it is reasonable to accept delays for the pure metadata db and to store changes signed by moderators and trusted uploaders as ephemeral updates to the data link db until confirmation from the root comes in.
Update poisoning.
The network should accept all metadata update requests and store them for some time for moderators to review. An attacker could flood the network with trash requests.
One option for dealing with this would be to require users who want to publish updates to perform a one-time, computationally intensive task before they can start participating. For example, the p2p software could require that the cryptographic hash of a publisher's public key contain a certain number of leading zero bits. Also, nodes should limit the storage allotted to each publisher key and keep only the last so-many broadcasts (say, the square root of that key's previously verified updates). It turns out this particular one is not a big problem after all.
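A sketch of that admission cost, again assuming PyNaCl for the keys; the difficulty value is arbitrary and the mining loop is intentionally naive:

```python
import hashlib
from nacl.signing import SigningKey

def leading_zero_bits(data):
    """Count the leading zero bits of a byte string."""
    bits = 0
    for byte in data:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def mine_publisher_key(difficulty_bits=16):
    """Keep generating keypairs until the hash of the public key meets the
    difficulty. Other nodes verify it with a single hash, which is cheap."""
    while True:
        key = SigningKey.generate()
        digest = hashlib.sha256(bytes(key.verify_key)).digest()
        if leading_zero_bits(digest) >= difficulty_bits:
            return key

key = mine_publisher_key()  # takes a few seconds at 16 bits
print(hashlib.sha256(bytes(key.verify_key)).hexdigest())  # starts with "0000"
```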

27 comments:

  1. I thought v19 is out and nearly had a heart attack.

    1. I had a heart attack and died.

  2. First off, are you okay? Second, WTF?

    1. I'm ok and if you want an informative answer you'd need to elaborate.

  3. Have you decided where to register this trove since the nyaapocalypse?

    I've been helping seed for numerous versions/years and would like to continue in a properly coordinated manner.

    1. I've heard about nyaa pantsu, nyaa si and anidex, but for the last N torrents I just uploaded the file here and let whoever is interested reupload it everywhere else.

  4. When could v19 be released?
    We have been waiting for 2 years; did you encounter any problems?

  5. Would love for v19 some time soon. :^)

  6. Did you investigate IPFS? It allows deduplication of data, it permits updating folders (IPNS and IPRS), and it even has an RSS-like feature (PubSub).
    A very simple solution, yet one that works today, would be to host a GitLab instance on a site that works as a centralized repository for changes and updates a folder containing all data and metadata, shared and updated over IPFS.
    The folder would have an IPNS hash, so that one could visit the most recent version of the project even in a browser. IPNS is pretty barebones nowadays, but it's expected to become git-like in the future (IPRS), so that if someone wants to fork the project, he can at any time, but a lot of data and metadata will still be seeded by both forks, minimizing disruption.
    In the future the whole repository thing can be decentralized even further, but this solution works now for your project.

    Looking forward to the v19 release! Can't wait :)

    1. Thanks for the pointer, I'll look at what IPFS can offer. It's definitely better to reuse an already existing and tested solution rather than to suffer from NIH syndrome and reimplement stuff from scratch, although I'm slightly worried that the scope of the project is too large.

  7. I just want to understand one thing. What is wrong with the way this has been working in the past?
    I have downloaded, shared and used this collection with 0 problems.
    Can't wait for v1.9

    1. >What is wrong with the way this has been working in the past?
      I dunno, having to wait 2 years for the next release doesn't count?

    2. Honestly, you could just put 1.9 into an IPFS folder, put up a link to the folder and a link to a tutorial on how to use IPFS. Then you could update as you feel is necessary, like you seem to want to do.

      People should be able to have just as good a connection to the folder as they do to the torrent, as long as they act as a server.

    3. Original anonymous that posted the first comment here.
      Well, I never complained about waiting 2 years. But it is a big amount of time, now that you say it. Maybe relying on that IPFS thing works.

  8. Rwx, consider that you can release v19 both via torrent AND IPFS (via "ipfs add --nocopy"). The GitHub/GitLab repo for metadata can be worked on later. By the way, how much in GB for all those .cue files? Maybe you'll need a selfhosted gitlab instance.

    1. Eh, I don't see the additional benefit of IPFS as a mere file host. The whole idea of the p2p system I'm describing is an effective means of working with metadata.
      Cue files are small, about 0.5-3.0 KB per file.

  9. Do you have plans to make the collection available again? We wanna listeen... :)

  10. I suggest you take a look at some of the draft BitTorrent Enhancement Proposals on bittorrent.org. There's some interesting stuff there.

  11. What happened to http://otokei-douj.in/ ? It's offline :'c ...

  12. Hi. You can upload your files on: Rutracker.org
    (Torrent)

    1. Why in the world would I want to inconvenience users by uploading to a private tracker?

    2. It's semi-private :) DHT is mandatory for Rutracker

    3. This still does not answer my question. Why inconvenience users? Are there any upsides?

  13. Rutracker provides magnet links to unregistered users, so there is almost no inconvenience. Anyway, only as a mirror of this site, a so-called multitracker torrent. As for upsides, rutracker has several million users; more people will find this torrent, more seeds, etc... Rutracker is really cool. The sad part: rutracker is being blocked in some countries :(

    1. >Rutracker provides magnet links to unregistered users
      Glad to see another pocket of common sense in our world.

      >rutracker has several million users
      But the internet as a whole has several billion users.

      >more people will find this torrent, more seeds, etc...
      Might be true, but I somewhat doubt it. TLMC has been around for more than 10 years; I think everyone who had even the slightest interest in obtaining lossless touhou music has found it already.
      Hell, just searching "touhou lossless" or "touhou flac" brings this site up as the first result in an anonymous google search from an EU endpoint (if you are in a bubble it might be different, though).
      Also, after checking the rutracker frontpage, I don't see a single category which would properly fit the torrent.

      It's not that I'm actively against it, it's just... I don't see it being worth the effort.

  14. I have trouble using torrents in my country (CN :(); I can't even access this site without a proxy.
    I wish I could see a new download method soon, using some kind of distributed approach.
    Thanks a lot for your maintenance! Can't wait for the next version :)
