ZFS On Linux

InterWorx has not been tested with ZFS, but it is an intriguing prospect. What do you think? Let us know in the comments!
[Image: ZFS file system. Photo credit: Jemimus]

In late March, the ZFS on Linux project announced that a stable version of ZFS was ready for production use on Linux systems. Other Unix variants, particularly the BSDs and Solaris, have been able to take advantage of the many benefits ZFS has over traditional file systems, but, until now, the Linux version has not been stable enough for widespread deployment.

ZFS combines a file system with a logical volume manager. Originally developed by Sun Microsystems in the mid-2000s, it incorporates many features that make it attractive to businesses and to anyone who needs an extremely scalable, reliable file system on which to build a data infrastructure.

Traditional file systems have a built-in upper limit on the amount of data they can manage. For example, the maximum partition size for an ext3 file system is between 2 and 32 TB, depending on the block size, with individual files capped at 2 TB or less. ZFS has a theoretical upper limit too, but at 2^128 bytes per storage pool it is so large as to be practically unlimited.

Because ZFS also incorporates a logical volume manager, it can implement its own RAID-Z organization, which improves on traditional RAID 5 by closing the "write hole": the window in which data and parity can become inconsistent if power is lost mid-write.
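As a rough sketch of what setting this up looks like, the commands below create a RAID-Z pool. The pool name `tank` and the device paths are hypothetical placeholders; running these requires root on a system with ZFS installed and spare disks.

```shell
# Create a pool named "tank" with single-parity RAID-Z across three disks.
# RAID-Z tolerates one disk failure; raidz2 (double parity) tolerates two.
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd

# Inspect the pool layout and health.
zpool status tank
```

Because ZFS controls both the volume layout and the file system, RAID-Z writes are full-stripe and transactional, which is what eliminates the write hole.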

ZFS abstracts away the block device layer, instead creating storage pools composed of virtual devices (vdevs), which are themselves built from physical block devices. File systems are not confined to a single partition or drive, but can span multiple drives of differing specifications. New drives can be added as needed, and designated hot spares are swapped into the storage pool automatically in the event of a drive failure.
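Growing a pool and designating a spare are one-liners. Again, the pool name `tank` and the device paths here are illustrative, not prescriptive:

```shell
# Grow the pool by adding another vdev (a two-disk mirror in this case).
zpool add tank mirror /dev/sde /dev/sdf

# Designate a hot spare that ZFS can swap in if a member drive fails.
zpool add tank spare /dev/sdg

# The added capacity shows up immediately, no repartitioning required.
zpool list tank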

In a recent article, we discussed how the KDE project’s backup scheme was insufficient, causing a near disaster. One of the solutions they are considering is a shift to ZFS due to its snapshotting feature, which makes it much easier to take backups from a live file system.

Snapshots rely on the fact that ZFS implements the copy-on-write transactional model. Active data is never overwritten, but instead changes are copied to a new block, which results in the file system’s previous state being retained. Snapshots can be taken instantaneously and initially take up almost no disk space, as they simply reference data already on the disk. They do, however, grow as the data on the disk changes.
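In practice, snapshotting is a single command. The dataset name `tank/data` and the snapshot label below are made up for illustration:

```shell
# Take an instantaneous, initially near-zero-cost snapshot.
zfs snapshot tank/data@before-upgrade

# List snapshots; the USED column grows only as live data diverges.
zfs list -t snapshot

# If something goes wrong, roll the dataset back to the snapshot's state.
zfs rollback tank/data@before-upgrade
```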

Snapshots can be cloned, which results in two separate file systems that use the same blocks. Snapshots can also be converted to a data stream, either as a representation of an entire snapshot or as a representation of the difference from a previous snapshot. These streams can be used to move the file system across networks to other machines, making it an excellent backup solution.
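A hedged sketch of the clone and replication workflow described above; `tank/data`, the snapshot names, and `backuphost` are placeholder names:

```shell
# Clone a snapshot into a writable file system that shares its blocks.
zfs clone tank/data@before-upgrade tank/data-test

# Serialize a full snapshot and replicate it to another machine.
zfs send tank/data@before-upgrade | ssh backuphost zfs receive backup/data

# Subsequent backups can send only the delta between two snapshots.
zfs send -i tank/data@before-upgrade tank/data@nightly | \
    ssh backuphost zfs receive backup/data
```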

ZFS is also designed from the ground up for data integrity, checksumming every block of data and metadata so that silent corruption can be detected and, where redundancy exists, repaired automatically. Combined with the ability to take snapshots and roll back to previous versions, this makes ZFS extremely reliable.
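That integrity checking can also be invoked on demand via a scrub, which walks every block in the pool and verifies its checksum (pool name hypothetical):

```shell
# Verify every block in the pool against its checksum, repairing
# corrupt copies from redundancy where possible.
zpool scrub tank

# Show scrub progress and any checksum errors found per device.
zpool status -v tank
```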

If you’re planning on creating a Linux cluster and need a highly scalable and reliable file system, ZFS is definitely worthy of consideration.

Tags: File systems, Reliability, Scalability, ZFS
May 2, 2013, 4:14 pm | By: InterWorx | 1 Comment
  1. CloudHopping: We have been using zfs for some time now via Nexanta. While this is "not supported" it sure seems to work pretty well. Have you guys considered changing your minds in the last 6 months in regards to the ZFS storage support?
    June 13, 2013 at 1:29 am
