
Network Storage – Getting the most from your filesystem

Written by Mike Solinap
Published on March 10, 2011

In my last blog entry, I mentioned that I would discuss how to roll your own network attached storage device. At first, this might sound trivial: take any commodity PC hardware, throw a large disk in there, install Linux, configure NFS, done. Not so fast. There are numerous considerations that must be taken into account in order to have a secure, reliable server that performs well.

This week I’ll be focusing on what I believe to be one of the most important considerations when building a network attached storage server — the filesystem.

Most modern filesystems have enough features to suit our needs. A system administrator would typically want to be able to do the following:

  • Easily resize the filesystem
  • Reliably recover the filesystem in the event of a system crash
  • Keep filesystem performance at a consistent level
  • Not worry about disk fragmentation
  • Maximize usable disk space

As network consultants, we provide network management services for clients in need of IT infrastructure solutions. At one of our clients, however, we came across a special set of requirements. The client captures network data on the order of 20 gigabytes per day, which then gets parsed and inserted into a PostgreSQL database. At 20GB per day, the storage requirements over their retention period are huge. This presents two problems. First, network captures are highly compressible, so ideally they would be stored in a compressed state transparently, without users having to compress them separately. Second, with such a large database, how can a consistent backup be taken in a reasonable amount of time?

Luckily, ZFS came to our rescue. ZFS is a filesystem developed by Sun, but unfortunately, due to a conflict between the GPL and CDDL licenses, a Linux kernel-based ZFS port has not been released yet. Some progress has been made by the http://zfsonlinux.org/ project, but I’m not sure it’s production ready yet. Some of ZFS’s most powerful features include:

  • Storage pools (Similar to LVM)
  • Transparent compression (lzjb and gzip)
  • Snapshots
  • Deduplication
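
To make these features concrete, here is a minimal sketch of how such a setup might look from the command line. The pool name, dataset names, and disk device names are illustrative assumptions, not the client’s actual configuration:

    # Build a RAID-Z pool from the server's data disks (device names are
    # illustrative and will differ between OpenSolaris and FreeBSD):
    zpool create tank raidz c0t1d0 c0t2d0 c0t3d0 c0t4d0

    # Carve out a dataset for the network captures and turn on transparent
    # gzip compression for it:
    zfs create tank/captures
    zfs set compression=gzip tank/captures

    # Optionally enable deduplication and take a point-in-time snapshot:
    zfs set dedup=on tank/captures
    zfs snapshot tank/captures@2011-03-10

Much like an LVM volume group, the pool abstracts away the physical disks; datasets are carved out of it on demand and inherit properties such as compression from their parent.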

The snapshot feature played an important part in backing up the large PostgreSQL database. Previously, the only way to get all of the data files into a consistent state was to shut down the database completely and then copy the files off to another server or to tape. With several terabytes of data, however, this would mean hours of downtime. With snapshots, on the other hand, the database remains running and all files stay consistent. To the database, a restored snapshot looks like recovery from a crash, so it will run crash recovery to come back online. Depending on how your application handles transactions, this might not be acceptable.
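
As a rough sketch of what that looks like in practice (the dataset and host names here are hypothetical), the snapshot itself is near-instantaneous and can be replicated to another machine while PostgreSQL keeps running:

    # Snapshot the dataset holding the PostgreSQL data directory:
    zfs snapshot tank/pgdata@nightly-backup

    # Stream the snapshot to a backup host without stopping the database:
    zfs send tank/pgdata@nightly-backup | ssh backuphost zfs receive backup/pgdata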

The transparent compression feature was equally important. A 3U server that we had available supported eight 3.5″ drives, for a total of 16TB raw capacity. With network captures as the main data source, the client could expect upwards of 25TB of usable compressed space. With 3TB drives becoming more common, the amount of potential space available in a 3U footprint is an even bigger value.
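
If you want to see how much headroom compression is actually buying you, ZFS reports the achieved ratio per dataset. Again, the dataset name below is just an example:

    # gzip accepts levels 1-9 (higher levels trade CPU time for a better ratio):
    zfs set compression=gzip-6 tank/captures

    # Inspect the achieved compression ratio and the space consumed:
    zfs get compression,compressratio,used tank/captures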

Unfortunately, these “free” features really do come at a price. For instance, if you are primarily a Linux shop, then running FreeBSD or OpenSolaris to get ZFS may not be feasible. Also, to take advantage of transparent compression, you will need a more powerful file server than typically required. But if you can deal with these small limitations, ZFS provides a wealth of benefits.

Subscribe to our blog to keep informed on server storage solutions and other areas of IT Infrastructure.

Michael Solinap
Sr. Systems Integrator, SPK
