Reading Summary 2019-04-23

· by Aldrin Montana

Ivy: A Read/Write Peer-to-peer File System

Relevant Readings

Wide-area Cooperative Storage with CFS

Describes the Cooperative File System (CFS), which builds on top of Chord to provide a peer-to-peer approach to file storage by storing file blocks in a distributed hash table (DHash). DHash is also used in the Ivy paper, so this paper is a useful source of detail on DHash.
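
To make the content-hash idea concrete, here is a minimal single-machine sketch of a DHash-style put/get interface. The dict standing in for the Chord ring and the class/method names are my own; keying blocks by a SHA-1 hash of their content, so that fetched blocks are self-verifying, follows the CFS paper.

```python
import hashlib

class DHashSketch:
    """Toy stand-in for DHash: blocks are keyed by the SHA-1 hash of
    their content, so any peer can verify a fetched block by
    re-hashing it. Real DHash spreads blocks across Chord nodes;
    here a plain dict plays that role."""

    def __init__(self):
        self._blocks = {}  # hex digest -> block bytes

    def put(self, block: bytes) -> str:
        key = hashlib.sha1(block).hexdigest()
        self._blocks[key] = block
        return key

    def get(self, key: str) -> bytes:
        block = self._blocks[key]
        # Integrity check: the content must hash to the key it was stored under.
        assert hashlib.sha1(block).hexdigest() == key
        return block
```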

Problem Statement

The authors of Ivy are trying to design a peer-to-peer file system that removes the need for a centralized server for file writes and provides integrity properties in a distributed system with untrusted peers.

The Ivy paper has an amazing related work section that very clearly organizes related research into categories: (1) peer-to-peer storage systems, (2) LFS, (3) distributed storage systems, (4) reclaiming storage (something akin to garbage collection for old data), (5) consistency and conflict resolution, and (6) storage on untrusted servers.

The Ivy paper attempts to solve all of the following problems in the above categories of related work in order to provide a novel peer-to-peer file system:

Previous peer-to-peer storage systems were unable to distribute writes: in most or all previous work, clients could only submit writes through a centralized server.

Log-structured file systems were the motivation for each client of Ivy having its own log. The authors admit that the logs don’t gain them performance, but they do provide the ability to submit writes to an Ivy system in a distributed manner, since each client can append to its own log in parallel (a sketch of such a log appears after this list).

Various distributed storage systems existed by the time this paper was written, including Zebra, xFS, Frangipani, and Harp. The problems with these systems are that they require a single metadata server, require trustworthy servers, or use a centralized cluster of dedicated servers.

Previous work on storage on untrusted servers uses the Byzantine agreement model, in which all write operations are sent to a set of elected servers that must reach consensus on which writes to apply. Because this requires consensus, it is necessarily slow.
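
Since the per-client log is the load-bearing idea, here is a minimal sketch of one, assuming each log record is an immutable block in DHash that carries the key of the previous record; the class and field names are mine, and the paper’s signed log-head blocks are elided.

```python
import hashlib
import pickle

def dhash_put(store: dict, block: bytes) -> str:
    """Stand-in for DHash: store a block under the hash of its content."""
    key = hashlib.sha1(block).hexdigest()
    store[key] = block
    return key

class Log:
    """One participant's append-only log: a hash-linked chain of
    immutable records, newest first. Appends never touch other
    participants' logs, which is what lets every client write in
    parallel."""

    def __init__(self, store: dict):
        self.store = store
        self.head = None  # DHash key of the newest record

    def append(self, record: dict) -> str:
        record["prev"] = self.head  # link each record to its predecessor
        self.head = dhash_put(self.store, pickle.dumps(record))
        return self.head
```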

Proposed Solution

The authors’ approach is for each client to write its data modifications to its own log and to read from all logs in the Ivy system. Writes are therefore very efficient; reads can be distributed; clients can choose which other clients they want to read from (a tunable trust model); and read consistency can be determined at a single node, the reader.
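
A rough sketch of the read path under this design: the reader pulls records out of every participant’s log and replays them in a merged order. Everything below (the record fields, and the timestamp-plus-log-index ordering standing in for Ivy’s version vectors) is an illustrative simplification of mine, not the paper’s actual data structures.

```python
def read_file(logs, inum):
    """Reconstruct one file's contents (identified by i-number) by
    merging the write records from every participant's log and
    replaying them oldest-first. Ivy orders records with version
    vectors; a (timestamp, log index) pair stands in for that here."""
    records = []
    for log_id, log in enumerate(logs):
        for rec in log:  # assume each log yields dict-shaped records
            if rec["inum"] == inum and rec["type"] == "write":
                records.append((rec["ts"], log_id, rec))

    data = bytearray()
    for _, _, rec in sorted(records, key=lambda r: (r[0], r[1])):
        end = rec["offset"] + len(rec["data"])
        if end > len(data):
            data.extend(b"\x00" * (end - len(data)))  # grow the file
        data[rec["offset"]:end] = rec["data"]
    return bytes(data)
```

Note that all of the consistency work happens at the reader, which is exactly the property described above: no server has to coordinate writers.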

Contribution

This work is important because it appears to provide the first peer-to-peer storage system that allows updates to be totally decentralized and scalable. The overall performance is slow compared to NFS, but this is an improvement over other peer-to-peer storage systems, and there are likely areas that can be improved upon without much effort. I also suspect that this is important work leading toward Bitcoin-like systems.

Comments and Questions

  1. I think this would be a terrible file system for applications with more than a few writers to the same content, but if there are relatively few writers, writes are infrequent, or there is a simple way for users to agree to work on non-conflicting files, then this sounds like a very ingenious system.

  2. Although the authors mention that they are 3x slower than NFS v3, I think that’s mostly because NFS is a much simpler protocol to reason about, and there are likely some clever ways to improve this system, including some of the improvements that have been made more recently for log-structured database backends.

  3. I think Bloom filters would be very beneficial for this system, and at a glance it looks like much of the work on Bloom filters came after this paper (mid-2000s and later).
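
As a concrete example (entirely hypothetical, not from the paper): each participant could publish a Bloom filter over the i-numbers its log touches, so a reader can skip logs that definitely contain nothing relevant. The sizes and hashing scheme below are arbitrary.

```python
import hashlib

class LogBloomFilter:
    """Tiny Bloom filter over a log's i-numbers. False positives only
    cost the reader an unnecessary log scan; there are no false
    negatives, so no record is ever missed."""

    def __init__(self, nbits: int = 1024, nhashes: int = 3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = 0

    def _positions(self, inum: int):
        for i in range(self.nhashes):
            digest = hashlib.sha1(f"{i}:{inum}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.nbits

    def add(self, inum: int):
        for p in self._positions(inum):
            self.bits |= 1 << p

    def might_contain(self, inum: int) -> bool:
        return all(self.bits & (1 << p) for p in self._positions(inum))
```

A reader would fetch and scan a log only when might_contain(inum) returns True.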